Covid-19 vaccine development is accelerating due to the rising number of patients across the globe and the high mortality rate. Manufacturers are focused on developing various solutions, such as antiviral medicines, plasma therapy, and immunotherapy, for the treatment of the virus.
A gradual increase in the number of covid-19 patients across the globe and increasing investment by major players in the development of advanced vaccines are major factors expected to drive the growth of the global covid-19 vaccine market. The covid-19 virus affects people in different ways; the most common symptoms are fever, dry cough, and tiredness. With mortality rising due to this unanticipated virus event, the governments of developed and developing countries are investing heavily in R&D activities and vaccine development in order to control the situation, which is expected to boost the growth of the covid-19 vaccine market. In 2020, the US government announced an investment of US$ 1.6 Bn to support the pharmaceutical company Novavax in the development of 100 million doses of a covid-19 vaccine by early 2021. In addition, the Finance Minister of India announced US$ 9 Bn in funding for vaccine research in the country.
The minister clarified that the funding would be used only for research and development of a COVID-19 vaccine; the US$ 9 Bn was provided to the Department of Biotechnology under the COVID Suraksha Mission for research and development of an Indian COVID vaccine. A collaborative approach between government and public players to strengthen vaccine R&D is another factor expected to augment the growth of the covid-19 vaccine market. In 2020, Johnson & Johnson, a medical product manufacturing company, announced plans to team up with the US government to invest more than US$ 1 Bn in a new vaccine against COVID-19.
With the pandemic escalating and vaccine development being a time-consuming process, governments are inclining toward granting emergency approvals to manufacturers; this preventive step is expected to boost covid-19 vaccine market growth.
In 2020, Pfizer-BioNTech received emergency approval from the Food and Drug Administration (FDA) for the use of its vaccine in people 16 years and older. Emergency approval of the vaccine gives the United States another tool for reversing the surge in COVID-19 cases and deaths.
In 2020, Bharat Biotech, an Indian drug manufacturing company, received emergency use authorization for its covid-19 vaccine, Covaxin. Covaxin was indigenously developed by Bharat Biotech in collaboration with the Indian Council of Medical Research (ICMR).
Factors such as the high cost associated with R&D and the side effects of vaccines are expected to hamper the growth of the global covid-19 vaccine market. European countries such as Denmark, Iceland, and Norway have suspended the use of AstraZeneca's COVID-19 vaccine after reports of blood clots among some people who had received the inoculation.
In addition, a lack of consumer awareness related to vaccine intake is expected to challenge the growth of the target market. However, the growing number of phase-3 clinical trials by major players and increasing government awareness activities related to the covid-19 vaccine are factors expected to create new opportunities for players operating in the covid-19 vaccine market over the forecast period. In addition, increasing partnerships and agreements between regional and international players for the introduction of novel products are expected to support the revenue of the target market.
Segment Analysis by Region
The market in Asia Pacific is expected to account for a major revenue share of the global covid-19 vaccine market due to increasing government spending on the development of healthcare infrastructure. In addition, increasing investment by major players and developing healthcare regulatory scenarios are factors expected to support the growth of the target market. Increasing medical tourism in emerging economies and manufacturers' approach toward strategic partnerships are further factors expected to boost the growth of the regional covid-19 vaccine market.
The global covid-19 vaccine market is highly competitive due to the presence of a large number of players and innovative product offerings. In addition, business expansion activities through partnerships and agreements are expected to further increase the competition.
Covid-19 Vaccine Market Segment Analysis, 2019
The global covid-19 vaccine market is segmented by product type, application, and end-use. The product type segment is divided into Covishield, Covaxin, BNT162b2, mRNA-1273, JNJ-78436735, Sputnik V, Covi-Vac, and others. Among product types, the Covishield segment is expected to account for a noticeable revenue share in the global covid-19 vaccine market. The end-use segment is divided into hospitals and clinics. Among end uses, the hospital segment is expected to account for a major revenue share in the global market. The players profiled in the report are Pfizer, Inc., ModernaTX, Inc., Janssen Pharmaceuticals Company, AstraZeneca, Novavax, Bharat Biotech, Gamaleya Research Institute, Chumakov, Vector State Research Center, and CNBG Beijing.
Market By Product Type
Market By Application
Market By End Use
Market By Geography
• Rest of Europe
• South Korea
• Rest of Asia-Pacific
• Rest of Latin America
Middle East & Africa
• South Africa
• Rest of Middle East & Africa
A gradual increase in the number of covid-19 patients across the globe and increasing investment by major players in the development of advanced vaccines are major factors expected to drive the growth of the global covid-19 vaccine market.
Within product types, the Covishield segment is growing at a faster pace.
In the global market, the Asia Pacific region is expected to grow faster.
Some of the players considered in the report scope are Pfizer, Inc., ModernaTX, Inc., Janssen Pharmaceuticals Company, AstraZeneca, Novavax, and Bharat Biotech.
Asia Pacific is expected to account for a major revenue share of the global market.
Within end uses, the hospital segment is growing at a faster pace.
Factors such as the high cost associated with R&D and the side effects of vaccines are expected to hamper the growth of the global covid-19 vaccine market.
A Brief Intro to the Cloud
Apple Podcasts: https://apple.co/3t2OmY0
It’s like a friend whose name you should know, but it’s a little awkward to ask if you don’t.
You may pretend like you completely understand how the cloud functions or how it came to be, but chances are, you’re a bit confused about what the cloud truly is. And you’re not alone! Many people have questions.
To fully understand the cloud, you have to look back to the beginning.
Starting in the 80s, IBM mainframes allowed remotely centralized data and were a common, key element in connectivity. These mainframes were very expensive and mainly used by bigger businesses. Although this model was limited to text only and difficult to use, it was fast, and you could operate the whole system from the keyboard. The main problem with this system was that there were no pop-ups or instructions on how to best use the mainframe. It was not conducive to the communications that businesses had at the time.
This led to mainframe 2.0. This model moved back toward the way things used to be, but it looked totally different. The new system allowed data to live on personal computers and put power into users' hands while still delivering the mainframe experience. The problem was that the data became disorganized and lived in multiple places, which muddled and confused the information.
Then came the early 2000s and the .com bubble! During this time, people began to create websites and share data at extraordinary rates. Users could research data at any time and find information much more easily and in greater variety. But this came at a cost: all the data living on these computers was not backed up, and there was no way to restore the information. In came data centers, and the main system became centralized again.
2004 was a big year of technological advancement thanks to VMware. VMware could run any system on any hardware, which allowed data to be moved from one computer to the next with ease. So if your computer died, you could easily move your important data to the next one. This created reliability and an easier way to back up information.
VMware gave the ability to centralize data while spreading it out and keeping it available. Here's a simple way to look at it.
Imagine your company and your employees. If one of your employees leaves, you have to train a new person and help them learn the ropes. Now imagine if you didn’t have to train your employees anymore, you could just give all the previous employee’s knowledge and abilities to the new person. This is what VMware started, and where the cloud got inspiration from.
Now this is only a brief introduction into where the cloud originated, but what does this mean for today, and how does the cloud truly help us? Check back next week for the second episode of this four-part series to find out!
The year 2016 has been called "the year of stolen credentials," and with good reason. Between the massive breaches at Yahoo, LinkedIn, Tumblr, Twitter, and Dropbox, it’s estimated that over 2 billion records were stolen. Although attackers steal all kinds of data, a vast majority of what’s stolen are user credentials, and they’re being put to bad use. The 2017 Verizon Data Breach Investigation Report found that 81% of hacking-related breaches leveraged stolen and/or weak passwords. What’s more, these stolen credentials are readily available for sale on the dark Web to anyone willing to pay the price.
What Is Credential Stuffing?
With this glut of stolen credentials, we’re seeing a rise in what are known as “credential stuffing” attacks. Attackers use automated tools to test stolen credentials in the login fields of other, targeted websites (hence, the name credential “stuffing”). When a username/password pair grants the attackers access, they take over that account for fraudulent purposes. By some estimates, as many as 90% of all login attempts on web-based applications at Fortune 100 firms are actually credential stuffing attempts rather than legitimate logins.
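Because these attacks are automated, a telltale signal is a burst of failed logins from one source spread across many usernames. As a rough, illustrative sketch (not production code; the window size and threshold below are arbitrary assumptions), a sliding-window counter can flag that pattern:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # look at the last minute of activity (assumed value)
MAX_FAILURES = 10     # failed logins allowed per source in that window (assumed value)

failures = defaultdict(deque)  # source IP -> timestamps of recent failed logins

def record_failure(ip, now=None):
    """Record a failed login; return True if the source should be throttled."""
    now = time.time() if now is None else now
    q = failures[ip]
    q.append(now)
    # Discard timestamps that have aged out of the sliding window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# Simulate a credential stuffing burst from one source
for t in range(11):
    blocked = record_failure("203.0.113.9", now=1000 + t)
print(blocked)  # True: the 11th failure inside the window trips the limit
```

Real deployments layer signals like this with device fingerprinting, CAPTCHA challenges, and breached-password checks rather than relying on per-IP counts alone.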
Often organizations think that they’re "safe" if their own data has not been stolen, but that’s simply not true. One of the reasons credential stuffing is so wildly successful is that many people (73%, by a 2015 estimate) reuse their passwords for multiple applications—both personal and work-related. This significantly increases the attack surface and the risk to everyone, because if attackers can gain access to one application with stolen user credentials, there’s a good chance those credentials will work with another application, and another, and another…
This is why credential stuffing is such a critical threat to organizations. Many enterprises have multiple web-based applications exposed on the Internet that are protected by nothing more than—you guessed it—login credentials. So, even if your own internal systems have not been breached, it’s conceivable that your external applications—whether you have 5 or 500 of them—will be targeted by attackers using stolen credentials. Breach or not, your applications are potentially at risk. This problem is compounded by the fact that few applications (yet) support multi-factor authentication (MFA). Without it, applications are especially vulnerable because they have only one layer of protection and are therefore easily compromised using stolen credentials alone.
For many organizations, the attack surface is even broader still because their application programming interfaces (APIs) are also vulnerable. Typically, APIs are the set of clearly defined methods of communication between various software components. Although there are several methods for authenticating APIs, it’s surprising how many are still authenticated using only login credentials.
Consider, too, that the authentication and authorization process is typically separate for each application or API, so organizations must monitor and protect each application independently. It’s kind of like trying to manage a border wall built in 50 separate sections by 50 different contractors, each section with its own gate, varying levels of staff and monitoring, and unique admittance policies. Without any coordination or consistency across those 50 sections, each gate is a penetrable target. The potential exists for thousands of people using stolen credentials to pass through those 50 gates. Now consider the nightmare scenario in which millions of people’s passports have been stolen and handed out indiscriminately to a bunch of bad guys trying to enter through those gates. That’s fundamentally what credential stuffing is like—only it’s automated, so it’s far more dangerous. This kind of approach to border control—which is essentially the same function that authorization solutions provide to web-based applications—can quickly become a security and management nightmare.
Methods for Dealing with Credential Stuffing
There’s no shortage of advice online about how you can help mitigate credential stuffing attacks. Of course, it makes sense to train users not to use duplicate passwords, implement multi-factor authentication wherever possible, and strengthen your access policies, for example, by forcing password resets after significant breaches occur. It’s all good advice, but it’s not sufficient. It bypasses the heart of the problem, which is that our approach to authorization is quickly becoming outdated.
It's time we considered the feasibility of a token-based authorization model. What is token-based authorization? In simplest terms, it's a framework that enables a user to access an application without having to provide their credentials to that application itself. Instead, the user is granted access using managed access tokens. As a user, you've already proven your identity (authentication) using your Facebook, Google, or Microsoft credentials, so whatever application you are trying to access isn't looking for you to supply your credentials again. Instead, as an OAuth-enabled application, it's only looking for a token to authorize your access. If it receives a valid one, the user is granted access; if it doesn't, the user is denied access. It's that simple. Because token-enabled applications don't even use credentials to authorize users, they can reduce the incidence of credential stuffing attacks by drastically reducing the attack surface area.
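To make that flow concrete, here is a toy sketch of minting and verifying a signed, expiring access token. This illustrates the concept only; a real gateway would use a vetted OAuth/JWT library, and the secret and lifetime below are made-up values:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # placeholder; a real gateway uses securely stored keys

def mint_token(user, lifetime=300, now=None):
    """Issue a token carrying the user and an expiry, signed with HMAC-SHA256."""
    now = int(time.time() if now is None else now)
    payload = f"{user}|{now + lifetime}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token, now=None):
    """Return the user if the token is authentic and unexpired, else None."""
    now = int(time.time() if now is None else now)
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64.encode())
    except Exception:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    user, expiry = payload.decode().rsplit("|", 1)
    if now >= int(expiry):
        return None  # expired
    return user

print(verify_token(mint_token("alice")))  # alice
```

The application never sees a password at all, which is exactly what removes it from the credential stuffing attack surface.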
A more practical evolution of this token-enabled model would be to avoid rewriting applications altogether. Instead, implement an authorization gateway that supports OAuth and can translate OAuth authorization back to the application. In doing so, you essentially move all authorization away from being handled at the application/API level and centralize this service for all applications. In our border wall analogy, it would be like closing 49 of the border wall gates in favor of one, centralized gate through which all visitors would pass. This would dramatically reduce the credential stuffing attack surface and give you a single point of control for all authorization.
As parents, we try to teach our kids about the most important things in life. We make sure our kids know to stay away from strangers, to treat others as they would like to be treated, and to value education. So why not start teaching our children about money and how to manage it? This article discusses kids and finance and how to teach your kids about money.
Offer Your Youngsters a ‘Task’ –
Many children do household chores when they reach a certain age. Why not turn this into an important lesson about money? Besides their normal chores, you could offer an optional task or two each week that they can earn money from. You might give them a few dollars to rake the lawn or sort the laundry, anything that will really be helping and that they can get paid for. Of course, if your kids don't get the job done, they don't make the money! This is a great way to teach your kids that money doesn't come without effort and time!
Start a Savings Account for Your Youngster –
Another thing you can do (which works well in combination with giving your kids a job) is start a savings account for your child. Explain to them how the bank keeps their money and adds a little extra every month for saving it. You can have them put their allowance money into their savings account and show them their statements each month so they can see their money building up. This will help your child learn the importance of saving, and if you want, you can let them think of something really great they want to buy once they've saved enough money. This will show them that by saving their money, they can get things they really want!
Older Youngsters –
If your children are older, there are several things you can do to teach them about finance. For example, you might have them get a real part-time job so they learn what it's like to work for money and what goes into earning an income. If they drive, they can help pay the insurance on the car or give you a portion of their income for gas money. Of course, if they don't pay for the insurance or gas money, they don't drive. This may seem harsh, but when your child gets a real job, if they don't pay their bills, they won't enjoy the benefits of the services. If they don't work, they won't get a paycheck. These approaches will properly prepare your child for the real world and a working environment.
These are some really great ways to teach your kids about money so that they will understand the value of money and how hard it is to earn. This is a valuable lesson that you can give your child, and you can use the suggestions and tips in this article to do it. Good luck!
Check out IX global review to find out more important information.
The Internet as we know it is apparently running out of space. No, this does not mean that existing websites will not be able to add more content. But sometime in the next few years the space for new IP addresses — the kind normally used up to this point, anyway — will be nearly depleted, according to IPv6.net.
The ubiquitous growth of mobile devices and the never-ending tide of malware and other browser exploits are hogging all the allocated space. So are the mega address blocks that large corporations swept up over the last decade, explained Michael Sutton, vice president for security research at Zscaler.
Despite his agreement that the industry is running out of IP address space, his company's researchers recently issued a report stating that ample space remains — if better usage is applied.
Much like any other commodity, IP addresses on the Internet are a matter of supply and demand. The supply used to outpace the demand. Now, however, the present IP version 4 (IPv4) protocol used to manage the Internet has a rapidly dwindling number of IP addresses left. The North American Region of the Internet Assigned Numbers Authority (IANA) issued a call for the industry to complete the changeover by Jan. 1, 2012.
"The Internet still isn't close to coming to a grinding halt. No one knows when that will occur. I don't feel it's a case of the sky is falling," Sutton told TechNewsWorld.
A Question of Scale
Zscaler's State of the Web research report for Q1 2010 suggests much of the Internet remains untouched. Sutton does not dispute that IP address space is filling up. But he sees a shortage later rather than sooner.
"Part of the problem is driven by a lack of policing the allocation and utilization. There is still unallocated space that can be reclaimed. Still, it's a finite end," he agreed.
Only about 6 percent of the available address space in IPv4 is left. IPv4 has been used since the Internet began. It provided a finite number of addresses, somewhere around 4 billion, he noted.
Change Is Coming
The movers and shakers that sit in the Internet's control room have not been sitting on their hands just watching the Internet fill up. They have a plan. That solution is called "IPv6."
"IPv6 is the next step. The industry foresaw this and developed IPv6, which has 340 undecillion unique addresses, or more than 50 billion billion billion for each person on Earth — more than enough to continue to support the ever-increasing demand for IP addresses," Kevin R. Petschow, Global Technology Strategy Public Relations / Corporate Communications for Cisco Systems, told TechNewsWorld.
Cisco designs and sells consumer electronics, networking and communications technology and services. Industry-wide attention has focused on a gradual changeover from IPv4 to the IPv6 protocol. See here for a detailed view of this process.
Forget Version 5
Ideas on how to keep the Internet from filling up moved faster than the actual implementation. As is typical for things in the computer field, the terminology and technology often play a name-changing game.
The successor to IPv4 had to be called "IPv6." The Experimental Streaming Protocol Version 2 had already received the v5 designation, according to IPv6.net.
"Regulatory bodies don't move as quickly as technology. It's a bit like smoke and mirrors. Version 5 did exist but wasn't put into mainstream use," Scott Testa, marketing consultant and professor of business administration at Cabrini College in Philadelphia, told TechNewsWorld.
What’s the Difference?
To understand the upgrade from a technical perspective, IPv6 increases the IP address size from 32 bits to 128 bits, he explained. The new protocol supports more levels of addressing hierarchy and provides for many more addressable nodes with simpler auto-configuration of addresses.
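To put those figures in perspective, Python's standard ipaddress module makes the jump from 32-bit to 128-bit addressing easy to check:

```python
import ipaddress

# IPv4: 32-bit addresses
print(2 ** 32)    # 4294967296, roughly the "4 billion" ceiling mentioned above
# IPv6: 128-bit addresses
print(2 ** 128)   # about 3.4e38, the "340 undecillion" figure quoted earlier

# The stdlib handles both protocols
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)  # 4 6

# A single /96 IPv6 block alone holds as many addresses as the entire IPv4 Internet
print(ipaddress.ip_network("2001:db8::/96").num_addresses == 2 ** 32)  # True
```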
IPv4 address depletion is predicted sometime in mid-2011, according to Potaroo, a website that tracks IPv4 address allocation by IANA, noted Petschow. But different guesstimates provide for sooner-and-later scenarios regarding when the Internet will fill up, suggested Sutton.
"A target date for the industry to have made the upgrade is Jan. 1, 2012, based on a directive by the leader of the North American Region, the group that issues IP addresses. But that is questionable since no one has the authority to legislate such things," said Sutton.
IP addresses get allocated by five regional organizations. No one entity is in charge and no one has the authority to order a change, he said. Instead, various elements within the industry are pursuing the hardware changes and newer technology to bring about an eventual upgrade.
"The move to IPv6 is already happening slowly. There is a plan. But it may not be moving as quickly as some would like," he said.
Who Moves First?
As is the case with any upgrade to technology, changeovers involve costs for new hardware and software.
"The move is not critical now for small-business owners and enterprise users of the Internet," Testa said. "There will be a cost to the equipment upgrade. But by the time ISPs and those with large networks get there, the existing equipment will need to be replaced anyway."
On a much larger scale, the move to IPv6 resembles what the movie industry recently faced, according to Sutton. It required replacing existing film equipment with digital equipment.
End Users Go Last
Ultimately, everyone will have to make the move. Some segments are waiting before they have to spend the money to make the upgrade, Sutton said. To the lay person, it will be mostly transparent.
But from the end users' perspective, there is really nothing for them to do. Corporations have to handle their own networks. ISPs and others have to manage the change.
"Still, the change to IPv6 will be gradual. It will happen over time. We haven't reached that point yet that will force compliance. Eventually, the agencies handling IP addresses will have to say no to new requests because there won't be any left," said Sutton.
Oil and Water Impact
When the mass migration to IPv6 gets fully underway is only part of the process. Some double jeopardy will exist, Petschow warned. The two protocols are mutually exclusive.
"So migrating a network from IPv4 to IPv6 requires technology solutions to preserve IPv4 while executing a carefully orchestrated, step-by-step implementation plan," he explained.
Regardless, the upgrade is not optional. The only leeway is when to do it and how much to pay.
The Cost Factor
The cost to enable IPv6 on a network depends on the number of products and applications deployed and the strategy of deployment, Petschow said. For example, the integration of IPv6 includes fixed costs, such as training and human resources associated with the project, and variable costs dependent on the network devices and applications that require IPv6 support.
For Cisco's customers, networks built with the Cisco 7200 Series routers only need a software upgrade to one of the Cisco IOS Software releases that supports IPv6. For users on an older infrastructure of more than five years, a hardware upgrade would likely be required to gain IPv6 support (for example, Cisco 2500 or 4000 series routers).
If older hardware needs to be replaced, looking ahead to use a "normal lifecycle" replacement strategy will minimize the explicit cost to deploy IPv6 by acquiring the capability before it is needed, added Petschow.
Here’s a collection of cheat sheets we created to go along with our course: The Practical Guide to sqlmap for SQL Injection. If you find these helpful, please share them on social media and tag @cybrcom. Thanks!!
1. sqlmap’s source code structure and how to navigate it
The main repository: https://github.com/sqlmapproject/sqlmap
sqlmap repository structure
Let’s start from the bottom up:
sqlmapapi.py: sqlmap can be used as an API, which is something we’ll look at later in this course, but this serves as the entry point to enable and control our API
sqlmap.py: this, on the other hand, is the entry point for using sqlmap itself (python sqlmap.py -h)
sqlmap.conf: this is the configuration file for sqlmap’s options, so this is where we can modify some of sqlmap’s default configuration values in a more permanent way than typing them out in the terminal each time we issue a command
Next we have README, LICENSE, Travis CI (Continuous Integration), pylint code analysis file, and git files
thirdparty: this is where we can see the 3rd party tools needed for certain sqlmap functionality (ie: identywaf used to identify WAFs)
tamper: these are our tamper scripts, which are used to evade security controls (such as WAFs, IPSs, etc). There are over 60 scripts included by default, but we can also add our own
plugins: these are generic and DBMS-specific sets of plugins which are used by sqlmap to connect, fingerprint, enumerate, takover, etc… so these are very important functions
lib: another set of really important functions lives in /lib. These are the libraries used by sqlmap, and it contains the techniques functions (one per SQL injection technique) and utils (utilities) functions, among others
extra: contains additional functionality that doesn't quite fit in plugins. For example, there is a vulnserver that we can use to test sqlmap functionality. There's also a cloak script that can be used to encrypt and compress binary files in order to evade anti-virus software. When using backdoors through sqlmap, sqlmap automatically takes care of that for you. But if you needed to manually cloak backdoors or other files that could be blocked by detection software, you could use cloak yourself.
doc: this contains general files about sqlmap's authors, its changelog, a thanks file for contributors, a list of third parties and their licenses, copyrights, etc., and translations for languages other than English.
data: finally, we have /data, which contains a lot of templates and text documents that sqlmap uses extensively during its operations.
html is simply a demo page
procs contains SQL snippets used on target systems, and so they're separated by DBMS
shell contains backdoor and stager shell scripts, useful for the takeover phase
txt contains common columns, tables, files, outputs, keywords, user-agents, and wordlists, all useful for brute-force operations, fingerprinting, bypassing basic security controls, and masking sqlmap's identity
udf stands for user-defined functions, and this contains user-defined function binary files which can be used in the takeover phase to try and create our own functions in the target DBMS, which could help us assume control over that database.
xml is where you will find payloads for each technique. You will also find something called banner, which sqlmap uses to identify which DBMS we're dealing with and, more specifically, what versions are installed. These files also help identify what web server is in place, and what languages as well as settings power the application(s) that we're targeting. We also have:
boundaries.xml: contains a list of boundaries that are used in SQL queries
errors.xml: contains known error messages separated by DBMS
queries.xml: contains the correct syntax for each DBMS for various operations (ie: enumerating users, databases, and tables)
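To illustrate how a file like errors.xml can drive fingerprinting, here is a rough sketch that matches response text against error regexes grouped by DBMS. Note the XML below is a simplified stand-in, not sqlmap's exact schema:

```python
import re
import xml.etree.ElementTree as ET

# Simplified stand-in for data/xml/errors.xml
ERRORS_XML = """
<root>
  <dbms value="MySQL">
    <error regexp="SQL syntax.*MySQL"/>
    <error regexp="Warning.*mysqli?_"/>
  </dbms>
  <dbms value="PostgreSQL">
    <error regexp="PostgreSQL.*ERROR"/>
  </dbms>
</root>
"""

def fingerprint(response_body):
    """Return the first DBMS whose known error pattern appears in the response."""
    root = ET.fromstring(ERRORS_XML)
    for dbms in root.findall("dbms"):
        for err in dbms.findall("error"):
            if re.search(err.get("regexp"), response_body):
                return dbms.get("value")
    return None

print(fingerprint("You have an error in your SQL syntax; see the MySQL manual"))  # MySQL
```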
.github is just a convention folder used to place GitHub related information inside of it, like the Code of Conduct, Contribution guidelines, how to donate to the project, and the format to follow for opening bug reports or feature requests.
All of this data, all of those functions, and all of those configuration files serve a purpose. They're there to give sqlmap its functionality. Understanding how it's structured and how it works together is important for a number of reasons:
- As you become a more advanced user of sqlmap, you can extend its functionality. You can add more tamper scripts. You can change payloads. You can change default configurations, etc…
- If you run across an issue, you can try to troubleshoot yourself before opening an issue ticket, and then if you find a solution, you can propose that solution to the authors of sqlmap
- As changes get pushed to sqlmap, and as you update your version, you can keep track of changes and get a better understanding of what’s been added, fixed, or removed
Now that we've looked at the entire repo and how it's structured, let's narrow it down a bit more to some of the most useful directories.
2. Important and useful sqlmap directories
We already briefly mentioned most of these in the prior section, but let’s take a closer look.
This is where you’ll find tamper scripts, which are used primarily to bypass WAFs and evade security controls.
Using these scripts is simple: you pass one or more script names to the --tamper option, separated by commas, and sqlmap will apply each of those scripts to its payloads.
This directory contains text files that sqlmap uses quite extensively during its operations:
These files are used for everything from randomizing user-agent header values, to brute-forcing common column/file/table names, to guessing values for optimization.
You can add/remove values in these text files to your heart’s content.
Curious to see what payloads sqlmap is using with its fingerprinting, enumeration, and takeover actions? This is where you’ll find them.
You can also add/remove payloads to your heart’s content.
The payloads are broken down by SQL injection technique.
This is the heart of sqlmap’s configuration. This file includes defaults for all options that need defaults to function, which means you can change these defaults either directly in this file, or via the terminal when you issue commands. Note that if you update a default in this conf file and then issue a different value via the terminal, the terminal value will take precedence.
This directory (usually located at /home/kali/.local/share/sqlmap/output/ if you use Kali) is where results from sqlmap commands get stored, which you can then explore and review. This is helpful when you need to share results in your reports and with developers, or if you want to perform additional analysis with third-party tools.
This directory (usually located at /home/kali/.local/share/sqlmap/history/ if you use Kali) is where a SQL file gets generated and updated automatically by sqlmap as you issue commands. This essentially acts as a SQLite database which sqlmap can pull from to remember actions and results.
3. Test --level values and the impact they will have on your commands
This option decides what tests are performed and what tests aren’t performed. Let’s take a look at each level. (You can view payloads and which get triggered at which levels here.)
This is the most basic level. sqlmap tests all GET and POST parameters. So regardless of the level that we choose, GET and POST parameters will always be tested by default, unless we specifically tell sqlmap not to.
This level starts to also look at HTTP Cookie headers for SQL injection vulnerabilities. We can set cookie headers manually with --cookie=COOKIE, and we can use --param-exclude=EXCLUDE to bypass testing of certain cookies that match the given regular expression. We can also skip testing the Cookie headers by using --skip="cookies", or by using -p and not including cookies, even if we have this level enabled.
sqlmap -u "http://localhost:8440/" --level=2
sqlmap -u "http://localhost:8440/" --level=2 --cookie="PHPSESSID=..." --param-exclude="PHPSESSID"
sqlmap -u "http://localhost:8440/" --level=2 --cookie="PHPSESSID=..." --skip="cookies"
sqlmap -u "http://localhost:8440/" --level=2 --cookie="PHPSESSID=..." -p "id"
This level adds two new types of headers into the mix: the HTTP User-Agent and Referer headers.
So by including this level, we are now testing for level 1 + level 2 + level 3.
Level 4 seems to mostly implement more payloads for certain types of techniques, not necessarily new headers to test as compared to the other levels. For example:
- Boolean-blind level 4 includes, as some examples (there are others):
- MySQL boolean-based blind – Parameter replace (MAKE_SET)
- MySQL boolean-based blind – Parameter replace (ELT)
- MySQL boolean-based blind – Parameter replace (bool*int)
- PostgreSQL boolean-based blind – Parameter replace (original value)
- Microsoft SQL Server/Sybase boolean-based blind – Parameter replace (original value)
- etc. (you can filter the payload XML files by level to see the full list)
- Stacked queries
- Time blind
- Union query
- Inline query (only includes tests for levels 1-3)
Finally, the highest level adds HTTP Host headers to test for SQL injections, as well as additional checks that we can also look for in each respective file.
One thing to keep in mind: as you increase the levels, you will be increasing the number of requests, so a scan at level 5 will take significantly longer than one at level 2.
1: Always (<100 requests)
2: Try a bit harder (100-200 requests)
3: Good number of requests (200-500 requests)
4: Extensive test (500-1000 requests)
5: You have plenty of time (>1000 requests)
Source: https://github.com/sqlmapproject/sqlmap/blob/master/data/xml/payloads/boolean_blind.xml#L21
4. --risk levels and the impact they will have on your commands
This option is similar to the --level option, but instead of dictating which headers and techniques to include in tests, this option looks at risk levels.
Certain payloads that can be used to test for SQL injections can be destructive, because they can make modifications to databases and their entries, or they can take down databases by using resource-intensive queries. In some situations, that could be unacceptable since it would go outside of your testing scope or cause damage to a business. That’s why the authors of sqlmap added 3 levels.
The first level, level 1, is intended to not cause any damage to databases and applications. It is the least offensive of all levels, so it’s a great place to start and is the default value.
The 2nd level starts to add heavy time-based SQL injection queries. This can slow down the database or even potentially take it down. So be careful when using this risk level.
The 3rd and final risk level adds OR-based SQL injection tests. The reason this is in the highest risk level is that injecting OR payloads into certain queries can actually lead to updates of entries in database tables. Changing data in the database is never what you would want unless you are testing a throwaway environment and database. If you were to do that in a production environment, it could have disastrous consequences.
Only use this risk level if you know what you are doing, if you have explicit permissions, and if everyone is on the same page as to what this risk level does.
To get a comprehensive list of which payloads get executed at which risk levels, you can again take a look at all of the default payloads that sqlmap uses here. You can also add your own or make modifications, by the way, as you become a more advanced user of sqlmap, and to customize it to your needs or your client’s needs.
5. Verbosity levels for troubleshooting and to see what sqlmap is doing under the hood
Verbosity is used to control how much information sqlmap outputs when we’re using the tool. Some people may want more feedback from the tool to understand what’s going on and to debug, while others may find all of that extra information unnecessary.
By default, sqlmap uses a verbosity level of 1, which they define as showing Python tracebacks, errors, and critical messages (level 0), plus information and warning messages.
So each of these levels stack on top of each other:
- 0: Show only Python tracebacks, error and critical messages.
- 1: Show also information and warning messages.
- 2: Show also debug messages.
- 3: Show also payloads injected.
- 4: Show also HTTP requests.
- 5: Show also HTTP response headers.
- 6: Show also HTTP response page content.
Again, this is personal preference and it depends on what you’re doing, but level 2 is recommended for the detection and takeover phases.
Level 3 is recommended if you want to see what payloads are being injected and if you want to be able to share those payloads with your developers or your client in order to show them exactly what worked and what didn’t work.
Otherwise, levels 4 – 6 include HTTP requests information, response headers, and response page content, which would be a lot of information to sift through, so it’s not recommended unless you absolutely need to know that information.
One more note to take here is that you can also replace the numeric value for this option (ie: -v 4) with the corresponding number of repeated v flags, so these two commands are equivalent:

sqlmap -v 4
sqlmap -vvvv
You can also further filter results with grep:
sqlmap -v 4 | grep <filter>
This option has to be used with other mandatory options, so if you try to set it by itself, it will give you an error and ask you to provide another mandatory option. This means you have to set the verbosity level for each of your commands, unless you set it in the sqlmap configuration file.
6. List of sqlmap’s Tamper scripts and what they do
sqlmap, by default, does very little to obfuscate payloads. Obfuscation, if you’re not familiar with the term already, is the act of hiding the true intention of our payload, which is a technique used to try and evade detection because it makes the payload deliberately difficult to understand. Just by looking at it, you wouldn’t be able to tell that it’s malicious.
This could be a problem if you’re trying to evade WAFs, IPSs, or other types of security controls and monitoring systems.
So in cases that you are trying to bypass input validation, or trying to slip through a Web Application Firewall, you may want to try and use the --tamper option.
With this option, you can pass in a number of different values that are all separated by commas, and these values will load different tampering scripts. You can also create your own tamper scripts.
You can also run sqlmap --list-tampers to list all of the tamper scripts in your terminal.
If we navigate to the tamper directory on GitHub, we'll find a list of all the included tamper scripts. From there, we can click on one and see what the code does, since these are all Python scripts. To make it a little bit easier, you can download the cheat sheet above, which includes all of the available tamper scripts and a brief description of what each one does.
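To show the general shape of these scripts, here is a minimal, hypothetical tamper module in the style sqlmap expects: a Python file exposing a tamper(payload, **kwargs) function. This one replaces spaces with inline SQL comments, similar in spirit to the bundled space2comment script (the bundled scripts also import sqlmap's internal PRIORITY enum, which is omitted here to keep the sketch self-contained).

```python
# Minimal sketch of a sqlmap-style tamper script. Not one of the bundled
# scripts; the priority value and transformation are illustrative only.

__priority__ = 1  # sqlmap uses priority to order chained tamper scripts


def tamper(payload, **kwargs):
    """Return the payload with every space replaced by an inline comment,
    which can help slip past naive filters that key on whitespace."""
    if payload is None:
        return payload
    return payload.replace(" ", "/**/")


print(tamper("UNION SELECT user, password FROM users"))
# UNION/**/SELECT/**/user,/**/password/**/FROM/**/users
```

Dropping a file like this into the tamper directory and passing its name to --tamper is the usual way to extend sqlmap with your own obfuscation logic.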
Want to learn more about sqlmap beyond cheat sheets?
Found these cheat sheets helpful? Please consider sharing on social media and tagging us @cybrcom 🙂 | <urn:uuid:02357e79-ba79-4933-a2f9-83fb814ea5bd> | CC-MAIN-2022-40 | https://cybr.com/ethical-hacking-archives/sqlmap-cheat-sheets-to-help-you-find-sql-injections/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00069.warc.gz | en | 0.891284 | 3,806 | 2.609375 | 3 |
PNRP name resolution uses these two steps:
- Endpoint determination – In this step, the peer determines the IPv6 address of the computer network card on which the PNRP ID's service is published.
- PNRP ID resolution – After locating and testing the reachability of the peer that published the desired PNRP ID, the requesting computer sends that peer a PNRP Request message for the PNRP ID of the desired service. The other side sends a reply confirming the PNRP ID of the requested service, along with a comment and up to 4 kilobytes of additional information. The comment and those additional 4 kilobytes can carry custom information back to the requestor, such as the status of the server or its services.
To discover the needed neighbor, PNRP performs an iterative process in which it locates nodes that have published their PNRP IDs. The node performing the resolution is in charge of communicating with the nodes that are closer to the target PNRP ID.
The peer first examines all entries in its own cache. If an entry that matches the target PNRP ID is found, it sends a PNRP Request message to that peer and waits for a response. This way it can be sure that the node is actually available, rather than cached but unreachable. If an entry for the PNRP ID is not found, the peer sends a PNRP Request message to the peer whose PNRP ID most closely matches the PNRP ID of the target node.
The node that receives the PNRP Request looks at its own cache and then:
- If the PNRP ID is found, it sends a positive reply with the answer.
- If the PNRP ID is not found and there is no PNRP ID in its cache that is closer to the target PNRP ID, the requested peer sends back a response indicating that it cannot help. The requesting peer then chooses the next-closest PNRP ID.
- If the PNRP ID is not found but a PNRP ID in its cache is closer to the target PNRP ID, the peer replies with the IPv6 address of the peer whose PNRP ID most closely matches the target PNRP ID. Using the IPv6 address received in that reply, the requestor then asks this next peer whether it knows where the node with the specified PNRP ID is.
Through this iterative process, the peer requesting name resolution eventually locates the node that has the searched PNRP ID registered.
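The iterative closest-ID lookup described above can be sketched in a few lines of Python. This is a simplified, hypothetical model: real PNRP uses 256-bit IDs and IPv6 messages, and the small integer IDs, node names, and cache structure below are illustrative only.

```python
# Simplified model of PNRP's iterative closest-ID resolution.
# caches maps each node to {pnrp_id: node_that_registered_it}.

def closest(ids, target):
    """Return the cached PNRP ID numerically closest to the target."""
    return min(ids, key=lambda i: abs(i - target))

def resolve(requester, target_id, caches):
    """Iteratively resolve target_id starting from the requester's cache.
    Returns (owning_node, nodes_contacted_in_order)."""
    candidates = dict(caches[requester])   # start with our own cache entries
    tried = set()
    path = []
    while True:
        untried = {i: n for i, n in candidates.items() if n not in tried}
        if not untried:
            return None, path              # nobody left to ask
        best = closest(untried.keys(), target_id)
        peer = untried[best]
        tried.add(peer)
        path.append(peer)
        if best == target_id:
            return peer, path              # positive response: ID found
        # Ask the peer: if it knows an ID closer to the target, follow
        # that referral; otherwise treat its reply as a negative response
        # and fall back to our next-closest cached entry.
        peer_cache = caches.get(peer, {})
        if peer_cache:
            referral = closest(peer_cache.keys(), target_id)
            if abs(referral - target_id) < abs(best - target_id):
                candidates[referral] = peer_cache[referral]

# The example below: PC1 caches IDs 450 (on PC2) and 500 (on PC3);
# PC2 knows that ID 800 lives on PC5.
caches = {"PC1": {450: "PC2", 500: "PC3"}, "PC2": {800: "PC5"}, "PC3": {}}
print(resolve("PC1", 800, caches))  # ('PC5', ['PC3', 'PC2', 'PC5'])
```

Note how the model reproduces the example that follows: PC3 is tried first because 500 is closest to 800, returns a negative answer, and the requester falls back to PC2, which refers it to PC5.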
Here is an example of a simple PNRP ID search. Suppose PC1 has the PNRP ID 200 and cache entries for the PNRP IDs 450 and 500. In the picture below, every blue arrow from one PC to another indicates that the PC the arrow starts from has a cache entry for the node the arrow points to.
When PC1 wants to find a PC on which the PNRP ID 800 is published, the following takes place:
- Because 500 is closer to 800 than the other cached values, PC1 sends a PNRP Request message to PC3.
- PC3 does not have an entry for the PNRP ID value of 800, nor any entries close to 800, so PC3 sends a negative response back to PC1.
- Since 450 is the next-closest PNRP ID value to 800, PC1 continues by sending a request to PC2.
- PC2 has the PNRP ID 800 in its cache, so it responds to PC1 with the IPv6 address of PC5.
- PC1 then sends a PNRP Request to PC5.
- PC5 responds with a positive response, sending the name resolution answer to PC1.
Newly Found Data Privacy Regulations in Kazakhstan
Kazakhstan’s Law on Amendments and Additions to Some Legislative Acts of the Republic of Kazakhstan on the Regulation of Digital Technologies, also known as the Amendment Law for short, is a data privacy law that was recently passed in 2020. The Amendment Law was introduced to increase the level of obligation that organizations and individuals within Kazakhstan must adhere to when collecting, processing, storing, and disclosing personal data. To this point, the Amendment law establishes the legal grounds and guidelines for data activities that occur within Kazakhstan, as well as the punishments that can be levied against individuals and organizations who fail to comply with the various provisions that are set forth in the law.
How are data controllers and processors defined under the law?
As is the case with many other data privacy laws that have been passed within Central Asian countries in recent years, such as Uzbekistan's Law on Personal Data and Tajikistan's Law on Personal Data, the Amendment Law does not provide a definition for the term "data processor". Instead, the law uses the concept of a "database owner", defined to mean the "state authority, natural person and/or legal entity executing in accordance with the law the right of possession, use, and disposal of the database containing personal data". Conversely, the law does not provide a definition for the term "data controller" either.
Alternatively, the Amendment Law uses the concept of a "database operator", defined to mean "the state authority, individual, and/or legal entity engaged in the collection, processing, and protection of personal data". Moreover, the law defines personal data as "information related to the definite subject or related to the subject definable on the basis of such information, recorded on an electronic, paper and/or other tangible form (e.g. name, surname, age, address etc.)". In terms of the scope and application of the law, the personal scope applies to all "relations in the sphere of personal data", while the material scope is not explicitly stated. Furthermore, the law mandates that data processing is "limited to the achievement of specific, predetermined, and legitimate purposes".
What are the responsibilities of database owners and operators under the Amendment Law?
Under the provisions set forth in the Amendment Law, database owners and operators within Kazakhstan are required to abide by the following principles when engaging in data processing activities:
- Ensuring that personal data is only collected and processed for purposes that are necessary for its operation.
- Ensuring that personal data is only processed for purposes that are in accordance with the purposes for which said personal data was collected.
- Taking protective measures to ensure that personal data is not accessed via unauthorized means, as well as minimizing any adverse consequences that may result from such access. In instances where database owners or operators are unable to prevent the unauthorized access of personal data, they are still responsible for detecting and reporting such access in a timely manner.
- Ensuring that all laws pertaining to data protection are followed and observed at all times.
- Ensuring that personal data is deleted after the purpose for which it was collected has been fulfilled and as such, the personal data is no longer relevant.
- Providing evidence proving that all personal data that has been collected and processed has been done so with the consent of all applicable data subjects.
In addition to following these data protection principles, database owners and operators are also required to meet several other obligations as it relates to data protection. These obligations include providing data subjects with data breach notifications in the event that a data breach occurs, as well as appointing a data protection officer or DPO to ensure that the provisions of the Amendment Law are complied with at all times. Additionally, database owners and operators are also required to follow specific procedures and regulations as it relates to special categories of personal data under the law. Such categories include personal data relating to the personal health of data subjects within Kazakhstan.
What are the rights of data subjects under the Amendment Law?
Under the Amendment Law, data subjects within Kazakhstan are entitled to the following rights as it relates to the protection of personal data and privacy:
- The right to be informed.
- The right to access.
- The right to rectification.
- The right to erasure.
- The right to object or opt-out.
- The right of a data subject to protect their legal rights and interests.
- The right of data subjects to seek compensation in the event that their rights are violated under the law.
In terms of penalties that can be imposed as a result of failing to comply with the law, Article 147 of the Penal Code states that “non-compliance with measures for personal data protection by a natural person responsible for taking such measures if such action caused significant harm to rights and legitimate interests of other persons may lead to a fine up to 3,000 monthly calculated indices ($20,261), correctional labor for the same amount, community service for 600 hours, restriction of freedom for up to two years, or imprisonment for up to two years with deprivation of the right to take certain positions or certain activity for a period of up to three years or without such deprivation depending on the violation”, among various other administrative punishments and monetary penalties.
While the Constitution of the Republic of Kazakhstan does provide data subjects with the rights to data protection and privacy, the Amendment Law manifests these rights in modernized terms. As such, the Amendment Law outlines the requirements that database owners and operators within Kazakhstan must follow in order to maintain compliance with the law. What’s more, the law also puts Kazakhstan in league with various other countries throughout Asia that have passed data privacy legislation in the last few years, such as Malaysia’s Personal Data Protection Act 2010 and Thailand’s Personal Data Protection Act. More importantly, data subjects within Kazakhstan have an avenue for recourse in the event that their personal data is improperly collected, processed, or disclosed. | <urn:uuid:c917eaf6-6889-438e-8370-195d2da138af> | CC-MAIN-2022-40 | https://caseguard.com/articles/comprehensive-data-privacy-regulation-in-kazakhstan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00270.warc.gz | en | 0.947089 | 1,235 | 2.65625 | 3 |
What Is Risk?
Risk is the chance that the outcome differs from what is expected. Usually, when we talk about business risk, we are referring to possible negative impact and consequences of an event or decision.
In business, there will always be a certain degree of risk that any organization must face to achieve its goals. In essence, risk is a fundamental requirement for growth, development, profit and prosperity. Across every business industry, including healthcare, finance, accounting, technology and supply chain, effectively managed risks provide pathways to success. But like any path, you need to know all the divots, detours, and dangers along the way.
Even though risks are a part of doing business, we must find ways to identify and manage those risks swiftly and effectively since they can often develop out of nowhere, creating the possibility for greater risks and damages. It is crucial to find ways to manage risks with the goal of minimizing their threats and maximizing their potential.
Risks come from a variety of sources, which include the following:
- Uncertainties in financial markets and the economy.
- Threats associated with project failures at any phase, which includes design, development, production, or maintenance of life cycles.
- Legal liabilities.
- Credit risk.
- Threat of natural or man-made disasters.
- Security and cybersecurity risk.
- Impact of uncertain or unpredictable events, such as a pandemic.
- Competitive risk.
- Fallout from a company’s damaged reputation.
- Compliance risk.
- Third-party risk that comes with relying on external suppliers and vendors.
To help you better understand various risks, there is a set of international standards for information security that can help. Together, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) create and publish the ISO 270000 standards cooperatively for better guidance.
What Is the Difference Between Risk Assessment, Risk Management and Risk Analysis?
It can become confusing trying to sift through the different terms dealing with risk, including risk assessment, risk management, and risk analysis. The main difference is breadth.
- Risk management is the macro-level process of assessing, analyzing, prioritizing, and making a strategy to mitigate threats to an organization’s assets and earnings.
- Risk assessment is a meso-level process within risk management. It aims to break down threats into identifiable categories and define the potential impact of each risk.
- Risk analysis is the micro-level process of measuring risks and their associated impact.
Let’s take a closer look at what differentiates these terms.
"The purpose of risk management is to improve the future, not to explain the past." – Dr. Dan Borge, a financial expert and former aeronautical engineer who designed the RAROC risk-management system and wrote The Book of Risk.
Instead, risk management is the overarching umbrella when it comes risk. It includes both risk assessment and risk analysis.
Management involves the identification, analysis, evaluation, and prioritization of current and potential risks. This allows you to address loss exposures, monitor risk control and financial resources in order to minimize possible adverse effects of potential loss. Further, a solid risk management strategy gives you the ability to maximize the realization of available opportunities to avoid risk.
Risk assessment helps you identify and categorize risks. Plus, it provides an outline for potential consequences.
Performing a risk assessment involves processes and technologies that help identify, evaluate and report on any risk-related concern. According to NIST 800-30, risk assessment is a “key component” of the risk management process and is primarily focused on the identification and analysis phases of risk management.
If we take the example of a security risk assessment, it involves the following steps:
- Identify the critical assets and sensitive data,
- Build a risk profile for each asset,
- Determine cybersecurity risks for each asset,
- Map how critical assets are linked,
- Prioritize which assets to address in case of a security threat,
- Create a mitigation plan with security controls to eliminate or mitigate the impact of each risk,
- Continually monitor risks, threats, and vulnerabilities.
Risk analysis is the crucial evaluation component within the broader risk management and assessment processes. Risk analysis determines the significance of the risk factors identified during risk assessment. It also quantifies risk, measuring the likelihood of hazards occurring and tolerances for certain events. One example is when an auditor calculates the probability and magnitude of a potential loss.
Scoring the risks identified takes into account the likelihood of occurrence and the estimated extent of possible impact. Together, this makes it possible to prioritize risks and set a strategy for mitigating them.
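A common way to combine these two factors is a simple likelihood-times-impact score. The following toy illustration shows the idea; the risk names, 1-5 scales, and values are invented for demonstration, not drawn from any real assessment.

```python
# Toy illustration of qualitative risk scoring: score = likelihood x impact
# on 1-5 scales. All risks and values below are made up for demonstration.

risks = {
    "vendor data breach": {"likelihood": 4, "impact": 5},
    "server outage":      {"likelihood": 3, "impact": 3},
    "compliance fine":    {"likelihood": 2, "impact": 4},
}

def score(risk):
    """Combine likelihood and impact into a single priority score."""
    return risk["likelihood"] * risk["impact"]

# Highest score first: these are the risks to mitigate before the others.
prioritized = sorted(risks, key=lambda name: score(risks[name]), reverse=True)
print(prioritized)  # ['vendor data breach', 'server outage', 'compliance fine']
```

Real programs often plot these scores on a likelihood/impact matrix, but the prioritization logic is the same: the product of the two estimates drives which risks get mitigated first.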
Related article: Business Leaders’ Top Concerns as Enterprise Risk Rises in 2021.
Do You Feel Confident About Your Organization’s Risk Management Strategy?
Are you confident that your risk management strategy is sound? Do you worry that there are risk factors that you are missing during the risk assessment and risk analysis phases of risk management? Our team at I.S. Partners, LLC. can help you get up to speed on any lurking risks to help you find ways to prevent and mitigate them. | <urn:uuid:bec3af69-90fa-4730-bd05-2ed8dc95c5ff> | CC-MAIN-2022-40 | https://www.ispartnersllc.com/blog/risk-management-risk-assessment-or-risk-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00270.warc.gz | en | 0.923982 | 1,104 | 3.078125 | 3 |
As the number, power and flexibility of mobile devices has increased, so has their use for shopping. In 2008, a Nielsen survey found that 9 million people in the United States “have used their mobile phone to pay for goods or services,” and many more expect to do so soon. In a 2009 survey by Deloitte, 45 percent of respondents said they would use their mobile phone to research prices, and 25 percent said they would use their phone to make purchases.
Consumers will use mobile devices to research and make purchases as long as they feel safe. Without consumer confidence, people will hesitate to expose their interests — and, more importantly, financial information such as credit card numbers and PayPal account information — where attackers can get it. Thus far, confidence in the security of purchasing using mobile devices seems to be growing.
How accurate is that perception? How safe is shopping from mobile devices? The answer: pretty safe, if you take some precautions and are careful.
Good Phishing Spots
Shopping using a mobile device is similar to shopping from a desktop computer, but with two important differences. The first is the size of the device. Mobile devices such as cellphones have considerably less memory and storage than computers. The second is their mobility. Desktops generally stay in a single location that is generally considered “safe.” Mobile devices, on the other hand, travel with their owner, and consequently are often in “hostile” territory. They can easily be misplaced or stolen, much more so than desktop computers. So there are additional, and different, risks.
The screen on a mobile device is very small, so the software on the mobile device often abbreviates Web addresses, shows only part of the address, or shows the address in very tiny print. In any of these cases, consumers may think they are giving data (such as credit card numbers) to reputable vendors — but in reality, they could be giving it to scammers.
Here’s an example. A phishing attack occurs when someone tries to trick you into going to what you think is your bank’s sign-in Web page. You then log in, using your account name and password. The phishing site now has this information, and the owners of that site can now access your accounts at the bank’s actual Web site.
If the full address of the Web page is visible, you might notice that the address of the Web site you went to was “http://www.mybank.phishing.example” and not the bank’s real Web site, “http://www.mybank.example.” (Obviously, neither of these URLs is associated with a real bank or phishing site.) If there is not enough room to show the full address, it might be shown as “http://www.myban … ple”– and from that, you cannot tell whether the address is that of the real bank site or an impersonator.
Advice: Check whether the site you are going to is actually the one you intended to go to.
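One way to make this concrete: rather than trusting a truncated on-screen address, compare the link's full hostname against the hostname you expect. The sketch below uses Python's standard urllib.parse for illustration; the bank domains are the article's fictional examples.

```python
from urllib.parse import urlparse

# Illustrative check: a truncated display string like "www.myban ... ple"
# hides the difference, but the parsed hostname does not.

def same_host(url, expected_host):
    """True only if the URL's hostname exactly matches the expected host."""
    return urlparse(url).hostname == expected_host

print(same_host("http://www.mybank.phishing.example/login", "www.mybank.example"))  # False
print(same_host("http://www.mybank.example/login", "www.mybank.example"))           # True
```

The phishing address fails the check because its full hostname is www.mybank.phishing.example, even though the truncated display looks identical to the real bank's.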
Similarly, be sure that you know what you are buying. The Web page may not be completely visible; you may need to scroll around to see everything. Also, some vendors provide different Web pages to mobile devices than they do to desktop computers. While not strictly a security issue, this can save grief and unwanted expense when using mobile devices.
Along the lines of protecting your information, be careful of what you use your mobile device to send, even to Web sites you trust. (This is good advice for shopping over the Internet in general.) There are two reasons for this:
- Even trusted vendors get attacked and have customer information stolen. Numerous reports of large-scale thefts of credit card numbers and other personal information have been in the news recently. This can happen for many reasons — for example, due to security lapses or to untrustworthy people, or even by accident. So, when you purchase something, do so in such a way that your liability is limited if someone steals the data you send to the vendor.
- Your browser and the vendor’s Web server exchange sensitive information (like payment information), by using a special protocol to protect the data. Unfortunately, a researcher has found flaws that would allow an astute attacker to compromise this connection. While experts are still discussing how serious this problem is, it remains a threat to protecting your information.
Fortunately, there’s an easy way to limit your liability: Use your credit card. The Federal Trade Commission says that “if the loss involves your credit card number, but not the card itself, you have no liability for unauthorized use.” The rules for ATM and debit cards are somewhat different; check with your bank about them.
Also, if a thief does steal your payment information, you will need to render that information useless. So use something you can easily cancel or change so the thief cannot use the stolen data.
Advice: When purchasing something using a mobile device, use a payment method that minimizes your liability and is easy to render useless if the information is stolen.
It’s tempting to store data in your mobile device so you can easily use it. Frequent flyer numbers, credit card numbers, phone numbers, account names and passwords are examples of what people store. The problem with mobile devices is they travel with their owners. So, someone can forget a cellphone at a restaurant, for example, or it could fall out of someone’s pocket or handbag on the subway or in a taxi. The finder then has access to all the information on the device.
The solution is to assume that the mobile device might be stolen. What data would you not want a criminal to see? Either remove that data from the mobile device or get an application that will keep the data encrypted except when you are using it. (These are often called “wallets” or “password wallets.”) That way, if you accidentally misplace your mobile device — or worse, a thief steals it — you have protected your information.
Advice: Think like a thief. Figure out what information on your mobile device you don’t want anyone to see, and either delete it or encrypt it. If you do the latter, remember to choose a good password!
Everyone needs to balance the convenience of mobile shopping against the risks of purchasing errors or data theft. Given the proliferation of smartphones, mobile commerce will undoubtedly continue to grow. Shopping using mobile devices can be reasonably safe if you take proper precautions — so let’s be careful out there!
Matt Bishop is a member of IEEE and a computer science professor at the University of California at Davis. | <urn:uuid:6db57f05-a67c-41cf-827e-b79c8924b25d> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/sidestepping-swindlers-in-the-new-m-commerce-frontier-69093.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00270.warc.gz | en | 0.947405 | 1,387 | 3.046875 | 3 |
A new automotive system is aiming to keep drivers and their passengers safe by monitoring drivers’ pupil biometrics.
Developed by Harman International Industries, the system analyzes the driver’s pupil dilation to determine levels of tiredness or distraction. When a driver does appear to be very tired or suffering from a “high cognitive load”, the system works with the car’s built-in safety features to react to the driver’s condition. For example, the system could put a mobile device into do-not-disturb mode to ensure the driver doesn’t experience further distraction.
Commenting on the system in a press release, Harman VP Alon Atsmon suggested that it could be an important part of the smart car of the future, asserting that Harman’s technology “is advancing the state of the art for solutions that balance drivers’ desire to stay connected in the car without a compromise to their safety and security.”
Using biometrics for driver safety is a relatively novel concept. Late in 2014, Fujitsu filed a patent for a system that would use steering wheel-mounted electrodes to monitor a driver’s cardiac signals and thereby track alertness; more recently Olea Sensor Networks developed a seatbelt-based cardiac sensor that could trigger emergency services alerts in the event of an accident. Harman’s pupil-based approach, however, is distinct from these cardiac systems, and could find an interested audience as it is shown off at this week’s Consumer Electronics Show.
(Originally posted on Mobile ID World) | <urn:uuid:1c6bee72-867d-4dbe-b353-20dd7e6fcd91> | CC-MAIN-2022-40 | https://findbiometrics.com/keeping-drivers-safe-with-pupil-biometrics-301058/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00270.warc.gz | en | 0.943004 | 326 | 2.546875 | 3 |
Most of us know what Tesla has done for the world, pushing technological boundaries into what once seemed a dream realm. Tesla has transformed the automotive industry with electric vehicles that include self-driving technology. Currently, self-driving cars are taking over the world. Self-driving systems are built with the massive help of “collaborative robots”, using the immersive power of Artificial Intelligence (AI). Have you ever wondered how Tesla’s introduction of self-driving cars relates to digital transformation? With KatPro Technologies’ RPA consultation, manual and mundane activities can be cut down significantly, especially in process-oriented, established functions such as accounting and finance operations.
What Exactly is RPA or Robotic Process Automation?
Robotic Process Automation (RPA) is a form of Artificial Intelligence: software that automates manual human tasks by executing a sequence of user interface (UI) interactions or descriptor technologies. Corporations today can utilize Robotic Process Automation to diminish manual work, increase operator efficiency, improve speed to deployment, and enhance quality by eliminating human errors. RPA presents a practical workforce of collaborative robots, or “bots”. These bots take over the repetitive tasks normally executed by humans — tasks that tend to be mundane and, as a result, error-prone.
How Do Organizations Benefit from Robotic Process Automation?
Robotic Process Automation software is low-code and can be developed quickly, which translates into a fast time to market. Automating business processes delivers consistent results, reduces human error, and cuts costs. Teams are thereby freed to work on value-added tasks, increasing the total workload they can process.
Along with that, Robotic Process Automation can be used to create tasks in BPM to be performed by humans. Katpro Technologies focuses on providing top-class developers and technology consultants with niche skills in Automation Anywhere, UiPath, and other RPA technologies.
Processes that Adapt Well to Robotic Process Automation
Robotic Process Automation presents the chance to automate processes that were impractical to automate before.
The following conditions are ideal for RPA selection:
- High Process Volume
- Repetitive Tasks
- Manual Data Entry
- Multiple Legacy Systems
- Structured Rules with Low Exceptions
- High FTE (Full Time Equivalent) Number
Just as a person in the driver’s seat of a self-driving Tesla can cruise down a freeway at 70 miles an hour hands-free, organizations can increase customer satisfaction by removing human errors, increasing operational efficiency, reducing time to completion, and improving accuracy and compliance.
Automation Options of Robotic Process Automation
RPA technology allows several automation options such as:
a) Virtual Workforce: a virtual unit of robots assembled on a remote server. These robots operate in “collaboration” with people to transfer data between applications, monitor for business-rule conditions, and update and process records — ideally ten times faster than any human could. The robots work 24/7, processing work queues that cross multiple organizational systems while the humans work on more complex tasks.
b) Virtual APIs: extending the current IT architecture with an integration layer that maps user interactions to front-end requests and captures the outputs using RPA. This becomes convenient when organizations have restricted integration alternatives to legacy systems or closed packaged applications, or need an interim solution for an IT project backlog.
Why Should You Bring in a Thriving Automation Strategy?
While the goal of most business systems is to shift IT power dynamics and give business users the ability to automate their own processes, this can ultimately work against the larger aim of improving user productivity, effectiveness, and ROI. An effective automation strategy must be led from the enterprise perspective.
The benefits here are that the enterprise can properly pick the best tools for automation, scale efficiently, and set a practical roadmap. Business users can develop RPA bots, but as stated above, a number of factors will determine whether this is an efficient approach. The future of successful hyper-automation implementation depends on empowered business users who can build effectively within a set of constraints.
However, in the rush to accelerate digital transformation, robotic process automation (RPA) is regularly presented as a fast and straightforward way to execute critical processes, often extending the lives of legacy systems. Done well, RPA is worthwhile in advancing the overall path to digital transformation by reducing tactical shortcomings and expensive disruption.
At Katpro Technologies, RPA and Intelligent automation is provided for enterprises and with the rise in RPA technologies everywhere, experienced assistance can do the work. They have expertise in Master Data Management, Travel Expense Calculations, Penalty Claims, Credit Note Handling, Stock Management, Account Reconciliation and Fixed Asset Accounting. | <urn:uuid:b3c67742-7c10-40e9-afe3-f8065cb95510> | CC-MAIN-2022-40 | https://katprotech.com/blog/how-robotic-process-automation-boosts-automation-in-digital-transformation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00270.warc.gz | en | 0.903284 | 1,011 | 2.640625 | 3 |
Japanese Researchers Streamline Quantum Information Transmission
(Phys.org) The ability for quantum computers to communicate with one another has been limited by the resources required for such exchanges, constraining the amount of information that can be traded, as well as the amount of time it can be stored.
Professor Kae Nemoto, Director of the Global Research Center for Quantum Information Science at the National Institute of Informatics (NII) in Japan, and her team have taken a major step toward addressing these resource limitations.
Nemoto and her team addressed this issue using a process called quantum multiplexing, in which they reduced not only noise, but also the number of resources needed to transmit information. In multiplexing, the information contained within two separate photons is combined into one photon, like two envelopes being sent in a portfolio, so the information is still individually protected but only one stamp is needed for transport.
“In this system, quantum error correction will play an essential role, not only of protecting the quantum information transmitted, but also for significantly reducing the necessary resources to achieve whatever tasks one needs,” said paper co-author William J. Munro, a researcher at NTT’s Basic Research Laboratories. | <urn:uuid:abb8a182-ad62-4e24-ae63-f0f8639071e9> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/japanese-researchers-streamline-quantum-information-transmission/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00270.warc.gz | en | 0.922092 | 247 | 3.21875 | 3 |
Threats to modern organizations and individuals come from a variety of sources. In a world that is more reliant on technology each year, often those threats present themselves virtually. Among those threats are various types of malware, launched by criminals and cyber attackers and affecting millions of individuals and businesses each year. Repercussions from a cyber attack vary in severity but can include data loss, damaged devices, and files and information that are either encrypted until a ransom is paid or destroyed completely. Spyware represents only a portion of malware attacks, but protecting against it is critical for an organization’s sustainability. That protection begins with first understanding this type of malware and its potential impact.
What is Spyware?
In general, malware refers to any malicious software designed to infiltrate and/or damage a piece of technology or steal information stored within it. Examples of malware include ransomware, viruses, and spyware. Spyware programs are those designed to monitor your devices and report information about you to the attackers. This can mean everything from gaining access to files stored on the device to monitoring keystrokes to determine passwords for websites and software programs. Often, those launching spyware attacks are hoping to gather enough personal information to carry out identity theft, which was the most commonly reported consumer complaint in 2021.
Spyware programs make their way onto devices in several ways. Often, they are unintentionally downloaded via a pop-up ad or in phishing email schemes designed to impersonate reputable organizations. Therefore, protection against a spyware attack must rely on a multifaceted approach.
Organizations Impacted by Spyware in 2022
It is easy to dismiss a potential IT threat by falsely believing “it could never happen here.” Unfortunately, these attacks do happen, and quite often. The Center for Strategic and International Studies keeps an updated record of major cyber incidents impacting organizations around the globe. Even just halfway into the current year, there are several major spyware incidents reported. This past spring, cyber attackers launched spyware targeting certain members of the European Commission. That same month, activists and several major political figures in Catalonia were also victims of spyware attacks. While it may seem like cyber criminals prioritize large organizations or public figures for their attacks, it’s important to remember that these criminals exist in very large numbers and not all attacks are on such a large scale. The ones we hear about are the ones that are reported, making documenting attacks a critical step in further preventative efforts. Reporting attacks to the FTC or FBI may seem like a waste of time, especially when the harm done by an attack is limited. However, these organizations rely on the public reporting instances of cyber attacks, so they can better compile cases and better develop protection measures against them.
Among the cybersecurity threats for small businesses, malware – including spyware – is among the top. In fact, 43% of cyber-attacks are targeted at small businesses. And, while most business owners are at least aware of the potential for these attacks, many underestimate the impact one could have on their organization. A cyber attack often results in downtime as networks are secured, malware removed, and new devices implemented. Downtime for any business is costly. With the cost of halting operations combined with bringing in IT experts to address the issue and a number of other costs associated with an attack, the median cost of an attack for small and medium-sized businesses is $17,000 in the United States.
That type of unexpected cost can be devastating to a small organization, in many cases resulting in business closure. Rather than wait for an attack to occur, investing in preventative measures now is a much more reasonable approach – and a much less costly one as well.
Protecting a Business from Spyware
There are nearly infinite reasons why a business wouldn’t want spyware affecting any of their devices or networks – most regarding the security of information belonging to the business, the employees, and their customers. Cyber attackers continue to sophisticate their approach year after year, so business owners need to be intentional about taking proactive steps to protect this information from cyber criminals as part of their overall IT management.
A solid IT and cybersecurity plan starts with experts knowledgeable on best practices as well as up-to-date on the latest cyber schemes and how to best defend against them. Most small businesses don’t employ this expertise in-house and choose to partner with a trusted team of industry experts instead. Kustura Technologies and its team of dedicated professionals bring decades of experience and a commitment to excellence in all of our IT services. Our cybersecurity solutions come with continuous expert support and guidance both when IT is functioning normally and when issues arise.
When it comes to defending against spyware, Kustura offers business-level security systems and solutions. We can craft a protection plan based on your company’s needs, protecting company, employee, and client data as well as all network communications. We monitor client networks 24/7, noting vulnerabilities and implementing solutions to keep you protected from cyber threats at all times.
Whether you are worried about spyware, other forms of malware, or simply want to improve IT operations within your organization, the team at Kustura Technologies can help. Contact us today to discuss your needs and take the first step in streamlining IT operations and keeping your business protected!
Contact us today to take advantage of this offer and get your FREE Cybersecurity Assessment. | <urn:uuid:ee9668ae-1a92-4a4c-9748-0ce88c991640> | CC-MAIN-2022-40 | https://www.kustura.com/how-to-defend-against-spyware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00270.warc.gz | en | 0.95819 | 1,096 | 3.234375 | 3 |
Software license compliance means using the software only in accordance with the software developers' conditions of usage.
Software License Compliance Definition
A software licence is a legal agreement that governs the use and distribution of software. A licensing agreement is legally binding, and software license compliance means using software only in accordance with the software developer's conditions of usage.
Some small scripts are frequently released without specifying a license. For example, the website Userscripts.org hosts more than 52,000 licence-free user scripts. Similarly, GitHub reported in 2015 that 85% of the projects it hosts are unlicensed. With licence-free software, you can back it up, compile it, run it, share it and even modify it as necessary, without permission from the copyright holder. The same is not true for licensed software. In general, you purchase the right to use the software according to the terms of the software agreement; however, you do not own it and you are not permitted to modify or re-distribute it.
Usually, software ownership remains with the software developer and end-users license a copy for private usage. Proprietary ownership of both the original software and the software copies remains with the software developer (or software vendor) and the software is essentially ‘rented’ by the end-user. Software license compliance essentially means not breaching any of the conditions set out in the software licence agreement associated with the purchase of a software.
The terms of a software license usually dictate whether the copyright is retained by the software developer plus usage restrictions governing:
✅ right to perform
✅ right to display
✅ right to copy
✅ right to modify
✅ right to distribute
✅ right to sub-license
Software License Compliance Management
In enterprise environments, comparing the number of software installations and concurrent use with the number of software licenses purchased is a core component of software license compliance. End users should only be using software that they are legally entitled to use, which usually means only using what they have paid for.
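The reconciliation described above — installations versus purchased entitlements — can be sketched as a simple comparison. This is a minimal illustration only, not any vendor's actual tooling; the product names and inventory figures are hypothetical:

```python
# Minimal sketch of a license-compliance check: compare deployed
# installations against purchased entitlements, per product.
# All product names and counts below are hypothetical.

def compliance_report(installed, purchased):
    """Return per-product counts and any shortfall (installs exceeding licenses)."""
    report = {}
    for product, count in installed.items():
        owned = purchased.get(product, 0)
        report[product] = {
            "installed": count,
            "licensed": owned,
            "shortfall": max(0, count - owned),
        }
    return report

installed = {"OfficeSuite": 120, "CADPro": 35, "DBServer": 4}
purchased = {"OfficeSuite": 100, "CADPro": 40, "DBServer": 4}

for product, row in compliance_report(installed, purchased).items():
    status = "NON-COMPLIANT" if row["shortfall"] else "ok"
    print(f"{product}: {row['installed']} installed / {row['licensed']} licensed ({status})")
```

In practice the `installed` figures would come from an automated discovery or software asset management tool rather than a hand-maintained dictionary, which is exactly why manual tracking breaks down at scale.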
An unauthorized installation of software is more commonly known as software piracy and this is the most publicised example of software licence non-compliance. However, non-compliance can also be accidental, for example where there is a mistaken belief that having a license for an earlier version of the software will suffice or in large organisations where it becomes difficult to reconcile software deployed against permitted allowances. Mergers and acquisitions can also introduce software license compliance issues.
Software license compliance is a pressing business concern. Software audits frequently uncover non-compliance and trigger ‘true-up’ charges and possibly fines: 56% of software audits result in additional charges to compensate for historical under-licensing, and according to the IDC, “true-up” charges exceed one million dollars in more than 20% of cases. Software license compliance is often regarded as a CIO issue; however, company directors are ultimately responsible for the commercial agreements associated with the purchase and use of software by their organisations.
Software can be very difficult to track manually and investment in tools that help you manage your software assets automatically is a no-brainer. If you’re an ISV these tools will help secure missing revenues and also guarantee compliance for your end-customers.
Learn more about 10Duke Entitlements, a licensing solution that will dynamically ensure license compliance of your software. | <urn:uuid:b3be2f79-5a8a-4ca2-b941-3bfbd3b1c684> | CC-MAIN-2022-40 | https://www.10duke.com/resources/glossary/software-licence-compliance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00270.warc.gz | en | 0.925694 | 686 | 2.6875 | 3 |
Cyber attacks as likely as natural disasters, as devastating as ecosystem collapse: WEF
Data fraud, cyber attacks, and critical information infrastructure breakdown flagged among 2019’s most significant causative risks
Cyber attacks will have a similar global impact to water crises this year and are nearly as likely as natural disasters, the World Economic Forum (WEF) has warned in an analysis of societal risks that underscores that critical importance of effective cybersecurity protection.
Based on the opinions of nearly 1000 global decision-makers, the WEF Global Risks Report 2019 also highlighted quantum computing and “emotionally responsive” artificial intelligence as key disruptors, and flagged cybersecurity risk as a major and likely issue for already-struggling economies around the world.
Only extreme weather events, natural disasters, water crises, weapons of mass destruction, biodiversity loss and the failure of climate-change mitigation were deemed more impactful, while data fraud or theft was only slightly more likely than cyber attacks but somewhat less impactful.
Cyber attacks were linked to trends such as increasing national sentiment, the increasing polarisation of societies, shifting power, and rising income and wealth disparity.
They were connected to a range of risks including the failure of critical infrastructure, critical information infrastructure breakdown, terrorist attacks, profound social instability, interstate conflict, failure of national governance, and the adverse consequences of technological advances.
“We are going to need new ways of doing globalization that respond to this insecurity,” WEF president Børge Brende wrote in introducing the report. “Renewing and improving the architecture of our national and international political and economic systems is this generation’s defining task.”
Amongst those surveyed, cyber risk was widely identified as a significant subset of this challenge. Fully 82 percent expected 2019 would see increased risk around the theft of money and data – the fifth highest out of 42 areas of concern – while 80 percent anticipated disruption of operations and infrastructure due to cyber attacks.
“The potential vulnerability of critical technological infrastructure has increasingly become a national security concern,” the report’s authors note.
Around 60 percent of respondents expect this year will see increasing loss of privacy to companies and governments, while 69 percent anticipate the risk of fake news.
Identity theft was also high on the list, with 64 percent expecting it to increase this year, while the rest of the list was dominated by broader sociocultural and political issues.
“Given the near daily headline-grabbing data breaches and widespread fears of nation-state attacks, these findings should come as no surprise,” said Tenable co-founder and chief technology officer Renaud Deraison in a statement.
“People not only understand the sheer frequency of cyberattacks, but they also appreciate the risk they pose to our digital economy and our very way of life. These rankings reflect the global impact WannaCry, Equifax and the hundreds of other successful cyberattacks have had on our global psyche.”
That impact was continuing to loom large in consideration of other key risks: cyber risk was positioned in the context of ‘global commons’ alongside issues such as climate change, outer-space policy and management of Earth’s polar regions.
“In the context of rising geopolitical competition and weakening multilateral institutions, debates revolving around these pressures have the potential to be destabilizing and even to foment conflict,” the report notes.
“A more hopeful prospect is that the current flux in the international system instead will lead in pragmatic, open and pluralist directions, but even then a difficult and risky transition lies ahead…. The challenge of establishing norms that can be enforced globally is exacerbated by geo-economic competition across advanced technologies.” | <urn:uuid:aa0acc14-9ca5-466d-b5bc-ead746cdfc51> | CC-MAIN-2022-40 | https://www.adacom.com/news/press-releases/cyber-attacks-as-likely-as-natural-disasters-as-devastating-as-ecosystem-collapse-wef/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00270.warc.gz | en | 0.950375 | 779 | 2.578125 | 3 |
Cloud computing is really taking off, a trend that requires more big data centers to be built. For the most part, cloud providers are doing a great job of erecting green facilities, but what if they could tweak things further?
IBM has a new patent that may help organizations extend their green initiatives to the cloud, provided that their cloud providers support it. U.S. Patent #8,549,125 describes a way of moving workloads to eco-friendly servers and other IT systems.
IBM Master Inventor Keith Walker explains in A Smarter Planet blog post:
Our patent lets companies route their requests to under-utilized servers or datacenters, or even to servers or datacenters powered by alternative energy sources. The idea is that if companies want to reduce their environmental impact, they could sign up for this option through their cloud provider.
Customers would, for instance, check off the green computing option in their dashboards as they set up their capacity and bandwidth requirements. “The cloud provider then routes the requests to the network devices, the server devices, even down to the code functions that will process that service to consume the least amount of electricity,” adds Walker.
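The routing idea can be illustrated with a toy scheduler that prefers under-utilized or renewably powered servers. This is purely an assumption-laden sketch, not the patented mechanism itself; the server list, fields, and scoring heuristic are invented for illustration:

```python
# Toy illustration of "green" request routing: pick the server that is
# under-utilized and/or running on renewable power. The data and the
# scoring heuristic here are invented for this sketch.

servers = [
    {"name": "dc-east-1", "utilization": 0.85, "renewable": False},
    {"name": "dc-west-2", "utilization": 0.40, "renewable": True},
    {"name": "dc-north-3", "utilization": 0.30, "renewable": False},
]

def green_score(server):
    # Lower is better: penalize high utilization, reward renewable power.
    score = server["utilization"]
    if server["renewable"]:
        score -= 0.5
    return score

def route_request(servers):
    """Pick the 'greenest' server for the next request."""
    return min(servers, key=green_score)

print(route_request(servers)["name"])  # dc-west-2 under this heuristic
```

A real implementation would draw utilization and energy-mix data from live telemetry and would weigh the factors according to the customer's stated preferences in the dashboard.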
Image credit: Flickr user Beraldo Leal – CC | <urn:uuid:dbe321bc-9867-4236-aee8-cd64b41a18f3> | CC-MAIN-2022-40 | https://www.ecoinsite.com/2013/11/ibm-patents-a-greener-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00270.warc.gz | en | 0.947485 | 260 | 2.78125 | 3 |
What is this Internet of Things You Speak of?
How many “smart” devices do you have on you right now? Or just electronic devices? If you’re anything like me, you might have a minimum of 2 or 3 devices every time you leave the house. I never go anywhere without my smart phone, and now that I have a smart watch I also wear that all the time to help with my fitness and activity. I also frequently bring along a pair of Bluetooth earbuds. Many people I know are the same way. We all have smart phones, tablets, laptops, earphones (wired and wireless), and fitness trackers of one kind or another. And most of these devices are connected to the Internet. What are also being connected to the Internet are things in our homes that a few years ago we would never have associated with being Internet-connected things.
Internet of Things
These devices are part of what is called the Internet of Things. The IoT, as it is called, is a growing part of our lives and is something we all need to be aware of. Just last year (2017) there were reportedly 20 billion connected devices all around us. According to IHS Markit, by 2020 there will be just over 30 billion IoT devices. Intel is even more aggressive with their projections, saying there will be as many as 200 billion devices by 2020. That’s a lot of things!
But the Internet of Things isn’t just those devices we carry around on us. They are also the snazzy new “smart home” devices we are buying such as smart lighting, smart locks, smart thermostats, smart TVs and even smart appliances. But the vast majority of smart devices are things we don’t ever interact with or even see.
As of 2017, approximately 70% of all smart devices are in business, manufacturing and healthcare. These devices are running assembly lines, monitoring inventory and supply chains and providing real-time data to health care providers in hospitals and doctor’s offices. Smart technology is even being integrated into office buildings and other high rises. New buildings are having the smart tech built in, while some existing office buildings are being retrofitted. This smart tech in high rise buildings is controlling heating and cooling, elevators, security, etc.
Vigilance is Required
The Internet of Things has enormous potential to make our lives easier and more convenient. When we can just tell our Alexa enabled refrigerator to order more milk, and it shows up from Whole Foods the next day we have more time to spend on us and our families rather than doing mundane chores. However, we must be aware of the security risks that are inherent in these types of devices. For example, older Alexa devices have a security vulnerability that allows someone with physical access to a device to turn it into an “always on” microphone that can be used to gather sensitive personal data and send it to a remote server. This vulnerability has since been fixed by Amazon, and it was extremely hard to exploit, but this example shows why we must all make device security part of our daily lives.
The same goes for the smart buildings we inhabit and the businesses that use smart technology to track their inventory and supply lines. Unfortunately, there’s not much we can do about other people’s network security, but we can make sure our home networks are as secure as we can make them.
Most of us have Wi-Fi networks at home these days, because fewer wires are better! And if we have smart devices it is much easier to connect them to a Wi-Fi network. Wired networks are more secure, though. So, one solution is to buy inexpensive switches to connect our TVs, A/V receivers, Blu-ray players, refrigerators, etc. to our home networks. Alternatively, if running wires is not possible, using a “guest” network for our smart devices is another solution. Most new Wi-Fi routers sold these days have the capability to set up a “guest” network that is isolated from your main network. This allows you to offer, well, guests access to your Wi-Fi without giving them the password to your main Wi-Fi network and potentially everything else also on that network. This network can also be used for all our smart devices.
Something we’ve talked about in this blog before is also making sure you have strong passwords. This is very important for connected entertainment devices. For guidance on creating a strong password see this article from NPR. And for managing these passwords, see our previous blog post on credential managers.
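One practical way to get a strong, unique password for each account is to generate it randomly rather than invent it, then store it in a credential manager. A small sketch using Python's standard `secrets` module — the length and character set here are just reasonable defaults for illustration, not an official recommendation:

```python
# Sketch: generate a random password using the cryptographically secure
# `secrets` module (not `random`, which is unsuitable for security use).
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits, and punctuation."""
    if length < 12:
        raise ValueError("use at least 12 characters for a strong password")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

Pair a generator like this with a password wallet, and you never need to remember — or reuse — any individual site password.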
The future we’ve been seeing in the movies is here! And while it hopefully does not lead to Skynet or The Matrix, as long as we exercise some solid common sense and security awareness our personal data that we have control of should be safe. In the new year let’s all keep those passwords strong, and our connected devices secure.
Michael with his foster pit bull, Toby.
Michael Allbritton is a Cybersecurity Analyst and Trainer with Alpine Security. He holds several security-related certifications, including Certified Information Systems Security Professional (CISSP), Network+, Security+ and CyberSec First Responder (CFR). Michael has many years of experience in software testing, professional services, and project management. He is equally comfortable working with software engineers on testing and design and with sales to meet and manage customer expectations. Michael’s cybersecurity experience with Alpine includes penetration testing, vulnerability assessments, and social engineering engagements for various clients as well as teaching courses for the above-mentioned certifications. In his spare time Michael is an enthusiastic amateur photographer, diver, and world traveler. He has photographed wildlife and landscapes in the United States, Africa, Central America, West, and East Europe and has amassed several hundred dives as a PADI Divemaster. | <urn:uuid:f72e1ac4-dc8c-4cab-afa7-f87ae410fcff> | CC-MAIN-2022-40 | https://www.alpinesecurity.com/blog/2018-1-3-what-is-this-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00470.warc.gz | en | 0.963545 | 1,212 | 2.59375 | 3 |
What is CISSP Certification?
CISSP (Certified Information System Security Professional), is an internationally acknowledged certification offered by the ISC2 (International Information Systems Security Certification Consortium). The certification confirms a applicant’s knowledge and skill in all fields of information security. CISSP certified professionals’ job is to structure, layout, control, and manage very protected business circumstances, And Certified Information Systems Security Professional (CISSP) is part of an exclusive club.
CISSP was the first certification ever in the field and imposed high principles of ISO/IEC 17024 and is ANSI ISO/IEC 17024:2003 certified to make it a universal standard. It is also approved by the U.S Department of Defense in both the Information Assurance Technical (IAT) and Information Assurance Managerial (IAM). It is classified as the benchmark for the U.S. National Security Agency ISSEP program.
Concept of ISC2 CISSP
The CISSP includes many topics under information security studies. The final examination is in accordance with a Common Body of Knowledge (CBK) – a taxonomy or many relevant topics for IS security professionals throughout the world. The CBK is a broad framework of terms and sources allowing professionals worldwide to discuss, debate, and address situations relevant to common information security understanding.
The CISSP exam covers eight different areas, including:
- Security and Risk Management 15%
- Asset Security 10%
- Security Architecture and Engineering 13%
- Communication and Network Security 14%
- Identity and Access Management (IAM) 13%
- Security Assessment and Testing 12%
- Security Operations 13%
- Software Development Security 10%
Applicants must solve 250 questions in a six-hour exam. The CISSP exam is a challenging exam to crack, but those who are skilled enough to pass it are appointed to work in the information security field and succeed in their careers.
Applicants aspiring to take the ISC2 CISSP exam must have five years of cumulative full-time work experience in two or more domains needed under the ISC2 CISSP CBK. ISC2 offers a drop out of the one-year professional experience if the applicant holds a four-year college degree in the information security field. Its regional equivalent or educational eligibility as admitted under the ISC2 list.
Applicants without the experience may also opt for the exam, but they won’t instantly be rewarded with the CISSP certificate.
They will be given an Associate of ISC2 certification, and once they own the work experience (in the next six years), they can then achieve the CISSP certificate.
Reasons to Earn ISC2 CISSP Certification
It’s a reverence to get certification from ISC2, and it means a lot about the professional you are. Every industry, from online shopping to national defense, is vulnerable when it comes to cybersecurity threats. A CISSP certification indicates that you have the education, banking, networking, and support systems to defend your organization from these threats. It’s a decent job and affects organizations at the highest levels, so hirers are ready to pay higher salaries for the appropriate candidate.
Organizational Benefits of Hiring CISSP – Certified Professionals
Better Risk Management
CISSP professionals are updated with international legal standards like FERPA, FISMA, GLBA, HIPAA, SOX, DoD Directive 8570.1, and many more. Insurance requirements make it imperative that all the developing and evolving security threats are put up with well equipped and seasoned professionals. CISSP applicants are great experts in all fields of information system security and controls to address those requirements.
Organizations perceive that CISSP professionals are the best and the perfect choice for information security. The ISC2 standards guarantee that the professionals have significant knowledge, specified skill sets, and designated experience. HR departments that engage CISSP professionals make sure that their hiring standards and processes are more robust.
Consumers and clients who communicate with organizations want to know communication and information are being secured and kept secret. CISSP professionals make sure that the organization satisfies security and ISO standards and that their reputation with clients remains persistent and safe.
Certified Information Systems Security Professionals have a broad spectrum of knowledge. They have specializations in IT security domains and are considered among the most valued workers to retain the infrastructure safe and secure.
Perks of Earning of CISSP Certification
Get Higher Paid
It has been disclosed that Certified Information Systems Security Professionals are among the highest paid in the IT industry. With growing threats to security systems, businesses are ready to pay much more for the right applicant. The growth factor of a CISSP’s salary has surpassed every other sector.
Go International with CISSP
Professionals with CISSP certification are in huge demand throughout the world. ISC2 certified applicant is most desired for the current job postings in security systems. While this is an internationally recognized certification, so it is likely for professionals to get a quantifiable job throughout the world.
Passing the CISSP exam is not a walk in the park. Only 93,000 professionals worldwide hold a CISSP certification. A lot of effort, dedication, and hard work is needed for passing this exam, but it also gives you high standing among your colleagues if you can get to this peak of professional success.
- Security consultant
- Security analyst
- Security manager
- Security auditor
- Director of security
- IT Manager/Director
- Network Architect
- Security Architect
- Security systems engineer
- The chief information security manager
A CISSP certification provides you a place among the internationally recognized family of networking security professionals. You become a part of an elite group where you have complete access to a global extent of information resources, educational tools, peer networking, and industries. You’re on every hiring manager’s radar, and salaries are higher than other positions that don’t require the certification. If you are looking to reach at your security systems career’s top, aim for the CISSP certification. | <urn:uuid:f7edca11-6ad2-46e6-ada1-c004858fce0a> | CC-MAIN-2022-40 | https://www.isecprep.com/2020/09/15/how-cissp-certification-can-advance-your-cybersecurity-career/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00470.warc.gz | en | 0.921407 | 1,275 | 2.640625 | 3 |
Identity theft and fraud have greatly increased since 2019.
Knowing how to prevent identity theft includes understanding the signs of identity theft, so that you can act quickly if you’ve been a victim of this kind of crime.
- The most immediate way to check for fraud is to review your credit history and look for any new accounts opened in your name that you don't recognize.
- Check your voicemail. What might have looked like another spam call could have been your bank calling to report suspicious activity on your accounts, and seeking to confirm if it was you.
- Bills for medical services you didn’t receive. If you receive a bill for a medical procedure or service you don’t recognize, it might be someone using your identity fraudulently. Contact the healthcare provider listed on the bill the moment you notice this.
- Someone files a tax return in your name. If you get a notice via email or the mail that your tax return was filed before you actually filed your taxes, you’ll want to dig deeper into the cause of that notification. The IRS provides a list of potential warning signs that someone has used your identity to file a fraudulent tax return. See the full list here.
FYI: You should be using an identity theft monitoring tool.
They monitor your identity across the internet and dark web 24/7, and immediately alert you of fraudulent activity. We recommend Identity Guard. | <urn:uuid:7fbee424-2bb7-405a-a670-0c88d99b76a9> | CC-MAIN-2022-40 | https://battensafe.com/resources/spotting-identity-theft-quick-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00470.warc.gz | en | 0.941488 | 295 | 2.640625 | 3 |
Does Your Brand Protection Program Cover NFTs?
Cybercriminals are early adopters. As soon as new technology hits the market, especially when it starts to become more popular, they study it to find vulnerabilities. Such is the case of Non-Fungible Tokens (NFTs).
While 70% of Americans still wonder what this term means and how NFTs work (we’ll get to that shortly), cybercriminals are already using this blockchain innovation to access monetary assets and damage brands’ reputations. Let’s take a closer look at the different ways NFT security might be compromised and what brands can do to stay safe.
First Things First: What Are NFTs?
We’re used to thinking of digital creations as unlimited because theoretically, we can make countless copies. The idea of NFTs is revolutionary in the sense that it turns digital assets into unique, one-of-a-kind creations that can be limited, identified, and traded.
NFTs are digital representations of actual objects. They can represent visual artwork, musical pieces, video clips, memes, and anything else that comes to mind. Some NFTs may be created by renowned artists, while others are the result of amateur work.
Blockchain enables us to assign specific identification parameters to digital assets in order to make them tradable, but still decentralized. That's precisely how cryptocurrency is traded and how NFTs were born. Instead of relying on formal organizations to identify these assets and their owners, we use digital keys and contracts to give each asset its unique identity, turning it into a collector’s item with exclusive ownership rights. Digital assets can be bought and sold online, typically using crypto-based mechanisms.
NFTs have been around since 2014, but the rise of blockchain technologies has made them increasingly popular over the past few years. So much so that NFTs generated more than $1.5 billion worth of transactions during the first three months of 2021 alone. Research finds that 23% of US millennials collect NFTs in some form. This new trading arena cannot be ignored, and attackers are definitely paying attention.
The Problem with NFT Security
NFT professionals are the first to admit that the field suffers from cybersecurity issues. Joe Conyers, Global Head of NFTs for Crypto.com, states that, “We’re very early in the technology, and there are bound to be security issues if NFT platforms don’t maintain a basic level of security procedures.”
In addition to being new, this technology faces several cybersecurity challenges:
- Brand impersonation: Brands and artists that do not have NFT products available for trading might discover that someone created them in their name. Recently, a cybercriminal listed fake NFT artwork by Banksy and sold it online for more than $300,000. The sale was completed on the artist’s hacked website and caused a lot of embarrassment to everyone involved. This is a risk that every brand today faces when failing to protect other digital channels. Celebrities might also become victims of this fraudulent attack without proper brand protection services.
- Counterfeit NFTs: Some brands that do sell NFT products, find that attackers can create fraudulent versions of these NFTs to be traded online, the same way physical counterfeiting works. These instances cause copyright issues related to the fake visuals, music and logos involved.
- Unprotected marketplaces: It’s always harder to control procedures that involve 3rd parties and NFTs are often sold via central marketplaces. In addition to the irony of having to rely on centralized players to execute decentralized transactions, this is a significant risk. A few weeks ago, $1.7 million in NFTs were stolen from one of the leading platforms, OpenSea. Companies need a brand protection program that can monitor many platforms simultaneously and effectively.
- Fake platforms: In some cases, cybercriminals build entire platforms pretending to be legitimate NFT marketplaces to trade fake items. This is similar to building fake websites, enabling attackers to publish a large number of false NFTs without having to hack genuine platforms. When such new technology is involved, it’s harder to separate real platforms from fake ones.
- Untraceable payments: Because NFT transactions are based on cryptocurrencies, they are harder to follow and protect. In most cases, once the money reaches cybercriminals, it cannot be traced and retrieved. Cryptocurrency Developer Jack Fransham adds that, “Attacks can be entirely automated so your money is gone before you would have had a chance to lock it away.”
- Cryptocurrency scams: Crypto coins are the key currency used in NFT markets. This necessarily gives cybercriminals another set of vulnerabilities to exploit. This can involve a phishing scam whereby fraudulent websites request users’ private wallet keys, or fundraising for dodgy NFT releases.
Given these cybersecurity challenges, it is important to monitor social media not only for keywords but for images as well. It is especially vital to monitor Telegram, as it is extremely popular in the realm of NFT trading. It is equally important to closely monitor online marketplaces such as OpenSea. Scammers can use these platforms to sell counterfeit NFTs by simply opening an account and auctioning off their fake NFTs.
Monitoring is only the first step. To be truly effective in the detection and takedown of counterfeit NFTs, it is important to truly understand the arena and know how the market works in order to act quickly. Detecting issues quickly leads to fast takedowns, which is the only way to stay ahead of the scammers and save your business from significant losses.
To learn how BrandShield’s services work, contact us today and ensure that just like NFTs, your brand remains unique. | <urn:uuid:fbc47d54-1bd8-4754-8bea-b66e85e1571a> | CC-MAIN-2022-40 | https://blog.brandshield.com/does-your-brand-protection-program-cover-nfts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00470.warc.gz | en | 0.943471 | 1,208 | 2.53125 | 3 |
Northrop Grumman donated a permanent exhibit of the James Webb Space Telescope to the Maryland Science Center on Wednesday. The event featured three Maryland Nobel Prize winners and Maryland Senator Barbara Mikulski.
“I believe in the science and innovation that have made America a world leader in discovery. There is no other mission planned either by NASA or any other space agency that can achieve the scientific goals of the James Webb Space Telescope,” said Sen. Mikulski.
The Webb telescope is currently being built by Northrop Grumman and its teammates under contract to NASA’s Goddard Space Flight Center in Greenbelt, Md. The Webb team has invented new designs and manufacturing technologies for the telescope.
It features an ultra-lightweight 6.5-meter (21-foot) diameter primary mirror and a tennis-court-sized five-layer sunshield to enable its infrared instruments to collect very faint images of star and galaxy formation billions of years ago.
The three Maryland Nobel Prize winners in attendance included John Mather, recipient of the 2006 Nobel Prize in Physics and Webb telescope senior project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Md.; Adam Riess, recipient of the 2011 Nobel Prize in Physics, professor of astronomy and physics at the Johns Hopkins University; and Riccardo Giacconi, recipient of the 2002 Nobel Prize in Physics and university professor at the Johns Hopkins University.
“In Maryland, science is jobs. Scientific innovation creates jobs and economic growth through innovative products and new businesses. The James Webb Space Telescope will keep America in the lead for science and technology and inspire students to learn science,” added Mikulski. | <urn:uuid:84ccb426-dd90-4ccf-97bf-e7f4352b8ad7> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2011/10/northrop-donates-telescope-exhibit-to-maryland-science-center-senator-mikulski-speaks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00470.warc.gz | en | 0.905693 | 342 | 2.734375 | 3 |
The channel wire News
More Than Half The World Has Cell Phones
The report shows that mobile technology is becoming the most desirable means of communication -- especially in poor countries. The numbers show dramatic growth: By the end of 2008, there were an estimated 4.1 billion subscriptions globally, compared with roughly 1 billion in 2002, according to the International Telecommunication Union, one of the specialized agencies of the United Nations.
The study also looked at the Internet, and found that worldwide, usage has more than doubled: Approximately 23 percent of the population uses the Internet, up from 11 percent in 2002. Still, poor countries are far less likely to surf the Net. For example, only 1 in 20 people in Africa went online in 2007.
In addition, the report ranked countries according to how advanced their use of information and communications technology (ICT) is. On the U.N. telecommunications agency's ICT Development Index, the U.S. slipped six places this year to 17th. Topping the list was Sweden, which had more cellular accounts than inhabitants by 2007. More than 80 percent of Swedish households have computers and nearly as many have Internet connections. Large developing countries such as China (73) and India (118) were constricted by the size of their populations.
Developing countries, such as Pakistan, Saudi Arabia, China and Vietnam, have moved up significantly in the ICT Index during the past five years, partly due to high cellular growth coupled with an increase in Internet users. | <urn:uuid:70693329-9e48-4f8e-99b3-69a2e07f4e75> | CC-MAIN-2022-40 | https://www.crn.com/blogs-op-ed/the-channel-wire/215600271/more-than-half-the-world-has-cell-phones.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00470.warc.gz | en | 0.966527 | 313 | 2.53125 | 3 |
Antibiotics can’t treat the cold or the flu. Prescribed unwisely, they make bacteria harder to kill and make infections harder to treat. Still, about half the antibiotic prescriptions written in doctors’ offices are useless, or worse. But tools that guide doctors’ decisions can reduce excessive use of antibiotics, according to a study published Tuesday in the Journal of the American Medical Association.
The study compared antibiotic prescriptions in eighteen rural communities. In some communities, a public education campaign urged patients not to get unnecessary antibiotics and doctors were given both paper-based and PDA tools to show whether antibiotics are recommended. Other communities received only the public education campaign or no intervention at all. The only significant decrease in antibiotic use occurred in the communities where doctors had the tools. Furthermore, the more doctors used these tools, the more inappropriate use of the antibiotics declined.
Physicians are generally considered to be slow to change their prescribing habits, even in the face of new clinical data and guidelines. But Matthew Samore, the University of Utah informaticist and epidemiologist who led the study, thinks clinical decision support tools could be more effective. (He is quick to point out that the study did not go on long enough to show how long the change would last; antibiotic prescriptions were tracked from January 2002 to September 2003.)
A separate study published last month found that a clinical decision support system could change behavior. In that study, doctors entered patient prescriptions into a system that generated alerts when a drug could prove dangerous to a particular patient.
But Samore is particularly interested in decision support tools for physician education. Unlike clinical guidelines, support tools make recommendations for specific individuals based on multiple sources of data. “Doctors, or anybody, don’t like things that are annoying or perceived to be unnecessary,” he said. To make the tools more acceptable, he imagines that tool use could be mandatory for a certain number of patient visits and then made optional. Doctors should also be rewarded for participating, perhaps through continuing education credits.
Overall, doctors liked the PDA version of the tool, said Samore, but because it wasn’t integrated with electronic records or prescribing systems, it added a small amount of time to patient visits, and doctors would only use it short-term. He is currently working on a study that combines electronic prescribing with clinical decision support.
The study published this week in JAMA included over 400,000 inhabitants whose communities were randomly selected to receive a public education campaign or an education campaign combined with decision support tools. For situations defined as ones in which antibiotics are never indicated, antibiotic prescription rates fell 32 percent in communities where doctors were given the tools, versus 5 percent for communities that received only the public education campaign.
About 70 percent of doctors given the option of clinical decision support tools used them. Physicians could choose from three decision support tools and were paid $3 more per visit for any inconvenience of using them. Two were paper forms that physicians helped fill out. The third was a PDA programmed with question prompts. About half of the rural doctors used only the PDA; another quarter used both the PDA and paper forms, and another quarter used only the paper forms. The PDA program was created by TheraDoc Inc for research, but is not available for sale.
An overly-hyped waste of money or game-changing tools for screen junkies?
If your job requires you to stare into a computer screen for hours at a time, you have likely experienced eye strain at one time or another. One answer being sold by glasses manufacturers is blue-light-blocking glasses. In this piece, we’re going to look at what these glasses do and whether they really are the answer to screen-related eye strain.
What is blue light?
As we’ve discussed before in our piece about dark mode computer screen settings, blue light is a color of light that helps trigger our brains to enter a state of heightened awareness with the rising of the sun. Daylight contains a lot of blue light, which suppresses melatonin and triggers our brains to stay alert. However, constantly staring into a blue-light-emitting screen can overwork our eyes, leaving them strained and our brains confused after sundown.
What do blue-light-blocking glasses do?
In order to combat the flood of blue light into our eyes from computer and device screens, many glasses manufacturers offer a lens coating that blocks a majority of blue light. These lenses commonly have a yellow-light tint — some very evident and others much less so. Some are designed to be worn only when working for long hours on a computer. Others can be worn all day without a dramatic change to the perception of other colors.
But don’t most devices offer blue-light filters?
It is true that most modern computer systems and devices offer a blue light filtering mode. These filters remove a majority of the blue light emitted from computer screens. In the past, some computer users even used blue-light filters that covered the entire screen of their computers. Still, many do not remove all blue light. Those that do also leave the screens looking yellow and unpleasant.
Blue Light & Circadian Rhythms
Even more than combatting eye strain, one of the most significant reasons for using blue-light-blocking glasses is to regulate your circadian rhythms. Set by the presence of daylight, circadian rhythms dictate when your body feels that it is daytime or nighttime. Because daylight contains ample blue light, our brains have come to associate blue light with daytime. However, everything from televisions to cell phones to computer screens also gives off mass quantities of blue light. If you’ve had trouble sleeping immediately after staring into a computer screen, mobile device, or television for hours leading up to bedtime, it’s likely because your circadian rhythms have been impacted by the blue light. Blue-light-blocking glasses can be extremely helpful in these instances. Even so, you’re best off avoiding screens for at least an hour before bed.
The Availability of Blue-Light-Blocking Glasses
There was a time when yellowed blue-light-blocking glasses for indoor use were only available from a handful of manufacturers and in limited styles. Today, with the dramatic increase in screentime for the average person, most lens manufacturers offer blue-blocking capabilities as an additional coating on most lenses.
So, do blue-light-blocking glasses work?
As a wearer of blue-light-blocking glasses, I will personally say they do help with screen-related eye strain and fatigue. Still, that’s not their main benefit — which is blocking blue light in the evenings. Blocking blue light before bedtime is very helpful in allowing the brain to “believe” that it is night and settle down for a good night’s rest. A good night’s sleep is the best defense against eye strain during the day. Still, the best way to lessen the amount of blue light you are exposed to before bed is to simply avoid screens for about an hour before bed. Instead, opt for a good book, listen to a podcast, or spend time with loved ones. These activities will help you fall asleep faster and stay asleep longer.
Can I wear blue-blocking glasses all day?
More people are becoming familiar with the melatonin-altering effects of blue light while spending more and more time on their devices. Because of this, many glasses manufacturers now offer a blue-blocking coating that can be comfortably worn all day. These coatings do not noticeably alter the appearance of other colors. Heavily yellow-tinted glasses made specifically for computer usage, much like reading glasses, should probably not be worn all the time, to avoid any side effects.
What are some alternatives to blue-blocking glasses?
Most device and systems manufacturers have developed specific modes for night-time use. These modes decrease the emission of blue light via the system's software.
iOS (iPad or iPhone)
You can engage a night mode on most iOS devices under Settings > Display & Brightness > Night Shift. In addition to manually activating this mode, you can also schedule sessions where the mode turns on automatically at certain hours.
Android
For a similar feature on many Android devices, you can activate or schedule a “Blue light filter” by going to Settings > Display > Blue light filter.
A major obstacle that management must face in today’s world is the task of securing their organization’s assets. While physical security is a huge component in ensuring that the residual risk level existing within the environment remains at an acceptable level, our reliance on the Internet as a primary resource introduces a new set of risks and threats that are evolving by the second. There are various different pieces that must be taken into consideration and cooperate to build an efficient information security foundation for an organization. These pieces are known as security controls, and there are multiple different categories of these controls. From one standpoint, we can classify these security controls as physical, administrative and technical controls.
Physical security controls are those that secure, as the classification states, physical assets. These are implemented to protect the physical components of an organization; for example, constructing high fences around an office building, data center, or other location to prevent intruders from easily gaining access to the building that houses critical data. This also means ensuring that the area around the building is well-lit and that physical authentication, authorization, and accountability controls are implemented and enforced; you wouldn’t want just anybody walking into the data center that houses your organization’s critical infrastructure devices (i.e., servers that store confidential data) and doing whatever they please, right? This category encompasses the security controls that prevent this from happening, such as mantraps and security guards.
Technical security controls can be physical devices or logical devices, and they play a huge role in today’s world. These controls are the ones that those of us in the information security and technology fields are most familiar with and focus on. They include simple devices such as routers that provide NAT (Network Address Translation) services to allow users to access the Internet, making each user’s Internet traffic appear to be sourced from the same static IP address, or firewalls that dictate which IP addresses or what traffic can enter and exit your network.
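To make the filtering idea concrete, here is a minimal Python sketch of first-match-wins rule evaluation in the style of a simple packet-filtering firewall. The rule set, networks, and ports are invented for illustration and bear no relation to any real product:

```python
import ipaddress

# Hypothetical rule set: (action, source network, destination port).
# Rules are evaluated top-down; the first match wins, and traffic that
# matches no rule is denied by default -- a common firewall convention.
RULES = [
    ("allow", "10.0.0.0/8", 443),   # internal hosts may reach HTTPS
    ("deny",  "10.0.5.0/24", 22),   # ...but this subnet may not use SSH
    ("allow", "10.0.0.0/8", 22),    # other internal hosts may use SSH
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, first match wins."""
    src = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if src in ipaddress.ip_network(network) and dst_port == port:
            return action
    return "deny"  # implicit default-deny

print(evaluate("10.0.1.7", 443))     # allow
print(evaluate("10.0.5.9", 22))      # deny (subnet rule matches first)
print(evaluate("192.168.1.5", 443))  # deny (no rule matches)
```

Note that rule order matters: the `deny` rule for the 10.0.5.0/24 subnet must sit above the broader `allow` rule for SSH, or it would never fire.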
As time passes, our technical security controls are those that are advancing in sophistication at an exponential rate, at a much faster pace than controls in the other categories. We have gone from simple firewalls with limited filtering capabilities to next-generation firewalls that perform the functions of multiple devices all-in-one. We can check end-user devices and infrastructure devices alike for specific configuration settings, patches, and other characteristics with such precise granularity that we can restrict access to an internal network, or place devices in an entirely different network (i.e., a quarantine network managed by a NAC (Network Access Control) device) until they meet the exact conditions that we set.
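The posture check a NAC device performs can be sketched in a few lines; the attribute names and compliance thresholds below are hypothetical, not taken from any real NAC product:

```python
# Hypothetical NAC-style posture check: inspect a device's reported
# attributes and decide which network segment it belongs on.
REQUIRED_PATCH_LEVEL = 2023  # invented threshold for illustration

def assign_network(device: dict) -> str:
    """Place compliant devices on the internal network, others in quarantine."""
    compliant = (
        device.get("patch_level", 0) >= REQUIRED_PATCH_LEVEL
        and device.get("antivirus_enabled", False)
        and device.get("disk_encrypted", False)
    )
    return "internal" if compliant else "quarantine"

laptop = {"patch_level": 2024, "antivirus_enabled": True, "disk_encrypted": True}
byod   = {"patch_level": 2019, "antivirus_enabled": True}  # missing checks fail safe

print(assign_network(laptop))  # internal
print(assign_network(byod))    # quarantine
```

The key design choice is failing safe: any attribute the device does not report counts against it, so an unknown device lands in quarantine rather than on the internal network.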
Technical controls are extremely important, and can be further broken down into three subcategories: Detective, Preventive, and Corrective (or perhaps Reactive is a better term). I will not go into the nuts and bolts of technical controls, but their importance must be stated. A huge roadblock preventing organizations from acquiring and implementing technical controls, especially smaller organizations, is the financial burden associated with such activities. These devices can be quite expensive, and the costs of not only purchasing the actual devices but also hiring qualified individuals to configure and manage them can quickly drain a budget.
Administrative security controls are those that are neither physical devices nor logical devices. These are the fine print that many employees skim through, sign off on, and thereby accept without a true understanding of what is being presented to them. We encounter administrative controls on a daily basis, perhaps without even realizing it: when you download software, you are often required to read a large block of text and click a checkbox stating that you have read and understand the terms presented. Administrative controls encompass policies, procedures, guidelines, and documentation of this nature. While often overlooked, the administrative side of the fence severely impacts the state of an organization. The foundation—the framework—of every organization must be the development of policies and procedures. These documents govern every aspect of an organization: they lay out what is expected of employees at all levels, what the organization does, how it performs these actions, and what to do in certain situations, while guidelines can aid both lesser-experienced and well-versed employees with step-by-step instructions detailing how to successfully and correctly carry out a desired action. I will not begin to list the endless number of different administrative controls (or documents) in existence, but will focus on a subset that, when not enforced or implemented properly, can have an extremely detrimental impact on an organization.
There are many different components that comprise the infrastructure of an organization, and they must be implemented and maintained in a manner that provides the highest level of protection, the best defense, and an efficient procedure to follow in the event that an incident occurs. The desired level of protection, as well as the goals and methods of achieving it, are often declared within an overarching security policy, but various supplementary policies are required to best achieve this goal. Common policies and procedures found in most established organizations include an Acceptable Use Policy (AUP), a 3rd Party Device Policy, and On-boarding and Off-boarding Procedures, to name a few. This article highlights a major area in which, based on my experience, most organizations are lacking, and where deficiencies have resulted primarily in the loss and/or destruction of data.
While it is true that the compromise of devices and data through the threats I am about to mention is often not quite as prevalent as the media makes it seem, the exponential growth and sophistication of malware delivery through these exploitation methods is not to be taken lightly. I am referring to the increasing number of exploit kits found in-the-wild today. An exploit kit, as its name implies, is literally a kit of different scripts designed to exploit vulnerable software. They are primarily built to exploit known vulnerabilities in various versions of common software that both you and I likely have installed on our devices (albeit, I hope, a patched version).
Over the past few years, both the number of unique exploit kits and the sheer quantity of instances in-the-wild have grown rapidly. While a phishing e-mail with a URL leading to an exploit kit landing page (the page that actually performs the vulnerability checks and serves the exploits to vulnerable hosts) is still common, newer tactics such as malvertising campaigns (malicious advertisement campaigns in which advertisements displayed on even legitimate websites may redirect users to landing pages, exploiting users without any direct interaction with the malware author, such as via a URL sent within an e-mail) have become a pain for security analysts globally. This is not to say that the risk of infection via an exploit kit can be mitigated entirely by keeping software up-to-date; this year alone started off with the release of a number of zero-day exploits (exploits previously unknown to the developer and/or the security community as a whole) in Adobe's Flash Player that were leveraged by many exploit kits, potentially affecting even users with the most up-to-date version of Flash Player at that time.
The point is that the potential for compromise and overall risk level of an end-user device in particular can be greatly decreased through the enforcement of an efficient patch management policy.
A proper patch management policy should first categorize assets in terms of severity level, or the impact that such device(s) becoming compromised could have on the organization as a whole. This allows for the prioritization of patch deployment, ensuring that the most critical devices are focused on first. The prioritization allows for the allocation and development of set dates and times, whether daily or at regular intervals, that each device (or group of devices) will be patched. Once these steps are complete, the “backbone” of the patch management policy can now be referenced when going into the specifics of the policy. Where the patches will be retrieved from, whether on an OS level or individual device level basis, must be defined.
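To make the categorization and prioritization steps concrete, here is a minimal sketch of severity-based ordering. The asset names, severity labels, and scores are hypothetical examples for illustration, not part of any real policy:

```python
# Illustrative sketch of severity-based patch prioritization.
# Asset names and severity scores are hypothetical examples.

SEVERITY = {"critical": 3, "high": 2, "moderate": 1, "low": 0}

def prioritize(assets):
    """Return assets ordered so the most critical devices are patched first."""
    return sorted(assets, key=lambda a: SEVERITY[a["severity"]], reverse=True)

assets = [
    {"name": "hr-workstation-07", "severity": "low"},
    {"name": "db-server-01", "severity": "critical"},
    {"name": "web-frontend-02", "severity": "high"},
]

for asset in prioritize(assets):
    print(asset["name"], asset["severity"])
```

The ordered list can then be mapped onto the scheduled patch windows described above, most critical group first.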
It is important that patches are not rolled out to the production network first; a development-type network environment should exist in which patches are tested, to ensure that the production network is not negatively impacted by the roll-out. Once patches are obtained and tested, the scheduling and the actual roll-out of these patches must be coordinated.
Important: Change management and patch management go hand-in-hand. Change management begins during the planning process and takes place throughout the full lifecycle of each managed device. Roles and responsibilities must be defined, each stage and process of patch management must be tracked, and additional procedures must be put into place detailing the actions to be taken in the event that something goes wrong.
What defines a successful patch deployment? What defines a failed roll-out? Criteria surrounding what constitutes a success (as well as the opposite) must be clearly defined. Additionally, time management plays a huge part; management must aim to have all patches rolled out and deployed in the least amount of time that will not negatively impact the efficiency of other phases (such as the testing phase), to minimize the inevitable window of non-compliance that assets awaiting patches will face.
Whenever I am doing work within a client's environment, whether deploying or managing a product or performing audit work (such as a vulnerability assessment or a gap analysis), I often encounter one of the applications that I like to call "the big players" in terms of their use by attackers as an infection vector.
A common example is the deployment of an outdated version of Internet Explorer (whether IE 9, or versions as early as or earlier than IE 7) as standard operating procedure within an organization, with the outdated software included in the baseline image rolled out to all [new] devices. While we all have our own personal browser preferences, Internet Explorer is known to contain a large number of vulnerabilities, especially in such outdated versions. When I was researching exploit kits, one in particular, whose control panel screenshots were publicly available, revealed that the majority of successful exploitations performed by that particular affiliate's deployment (up to 85%) were through vulnerable versions of Internet Explorer.
So why do we still see companies rolling out vulnerable versions of software to their end-users as part of a standardized procedure? Often, it is because of compatibility issues between certain software utilized within the environment. Many applications rely on outdated browser versions to function properly, and while I am tempted to call these legacy applications, that is not always the case. Some software may be easier to use or contain specific desired functionality, but if deploying it will introduce commonly exploited risks to the organization, the software should simply not be used. Alternatives exist for virtually every type of software, and each has its own set of pros and cons that must be weighed.
Another administrative component of an organization, one that perhaps has the greatest effect on its security posture, is its users. After all, no security tool can be used efficiently if the operator has no knowledge of how to do so correctly. The same goes for users with no direct technical responsibilities, e.g. a sales department. If your end-users are not aware of the threats that exist, especially those they are most likely to encounter, they will fail to identify them and to prevent themselves from becoming victims. An in-depth discussion on "securing layer 8" of an organization is reserved for another article, but it is important to mention that policies must be implemented and enforced that mandate users to undergo security awareness training at regular intervals, regardless of whether such training is delivered in person or online. It is unfortunate, but not quite shocking, how often an enforced security awareness policy is essentially nonexistent within an otherwise established organization.
Many organizations simply present employees with an Acceptable Use Policy (AUP) or set of guidelines during their initial on-boarding, which is often either skimmed through by the new hire, or actually read but forgotten almost immediately. The threat landscape we face is constantly evolving, but in addition to new threats and methods of compromise, those that were once "at large" and are now thought to be rare and "outdated" are often recycled. Old phishing e-mails designed to scam the recipient, Microsoft Word documents with embedded macros, and other attacks we consider deprecated are emerging once again, and to this day still manage to fool the unsuspecting user. It is important that new hires are not the only ones mandated to undergo some form of security awareness training as part of their on-boarding; all end-users within the organization should be required to partake in such training at regular intervals, as well as in response to a successful compromise or a newly discovered threat.
While I may have gotten carried away in the first few sections of the article, it is important to understand the following:
As security professionals, it is our job to protect our users; we are often the voice that gets heard when upper management is looking to design a security architecture or implement a security control. We should not be exposing our users, unprotected, to an insecure environment: to the Internet, which is now a relatively dangerous place that comes with inherent risks. Our job is to do whatever we possibly can with the resources we have on hand to best ensure our users' security, not to throw them into the fire with a disadvantage that gives attackers the upper hand.
About the Author Michael Fratello
Edited by Pierluigi Paganini
(Security Affairs – security, threat prevention)
Patch Management Definition
Patch management is the process of acquiring, testing and installing multiple patches (code changes) to existing applications and software tools on a computer. It enables systems to stay up to date with released patches and helps determine which patches are the appropriate ones, making patches easy and simple to manage.
Patch Management is mostly done by software companies as part of their internal efforts to fix problems with the different versions of software programs and also to help analyze existing software programs and detect any potential lack of security features or other upgrades.
Software patches help fix problems that exist in software and are noticed only after its initial release. Most patches concern security, though some address the specific functionality of programs as well.
What is Automated Patch Management?
An automated patch management process detects missing patches, installs the patches or hotfixes that are released from time to time, and provides instant updates on the latest patch deployment status.
Budget pressures on IT organizations continue to be high, so automating day-to-day routine tasks is critical. Patch management software can be automated so that all computers remain up-to-date with the most recent patch releases from application software vendors.
It is critical to take necessary steps to enhance the security posture of enterprises – large and small. Therefore, consistent patching of operating systems and applications with an automated patch management solution is important to mitigate and prevent security risks.
How does an Automated Patch Management Solution Work?
- Automated patch management automates the various stages of the patching process
- Scan devices' applications for missing patches
- Automatically download missing patches released by the application vendors
- Deploy patches based on the deployment policies, without any manual intervention
- Once the patches are deployed, update reports on the status of the automated patch management tasks
With an automated patch management solution, every enterprise can keep its endpoints updated with the latest patches, irrespective of which OS they run and where they are located.
What is the Purpose of Patching?
Patching is the process of repairing a vulnerability or flaw identified after the release of an application or software. Newly released patches can fix bugs or security flaws, help enhance applications with new features, and close security vulnerabilities.
Unpatched software can make a device a vulnerable target for exploits. Patching software as soon as a patch is released is critical to denying malware access.
Some of the best practices of patch management that allow organizations to enhance cybersecurity are:
- Understanding the importance of patch management –
Knowing why patch management is an important aspect of a cybersecurity strategy is critical. A quick response to the latest patch updates helps protect vulnerable systems from zero-day threats.
- Outcome of delayed patch application -
Delayed patch application can have a severe impact, causing major security breaches. The WannaCry attack revealed the danger of not updating software with patch fixes: its victims were those who delayed applying the patch Microsoft released for Windows to fix the SMB v1 protocol vulnerability, resulting in loss of data and business.
- Availing the services of managed service providers
Managed service providers offer patch management software to fit the requirements of any business, big or small. MSPs take full control of the patch management process, while businesses can focus on management and revenue-generating work.
- Deploying patch testing
Some patches are incompatible with certain operating systems or applications and can lead to system crashes. It is good practice for IT admins to run a patch test before patches are deployed to endpoint systems.
How to choose the right patch management software?
How do you know which patch management software is best for your organization? Demands vary from business to business; however, there are a few common traits that most organizations look for in patch management software.
Patch management software should be able to:
- Apply patches across different operating systems, including Windows, Linux and Mac
- Apply patches on different endpoints like desktops, laptops, servers, etc.
- Provide automated patch management to save time.
- Offer instant reports on latest patch update statuses.
If you are looking for a patch management solution that offers all of the above-mentioned features, ITarian provides an efficient patch management solution with robust features to keep your network patched with the latest updates.
Patch Management Life Cycle
- Update vulnerability details from software vendors
- Scan the enterprise network for vulnerabilities
- Examine the vulnerabilities and identify the missing patches
- Deploy patches and validate patch installation
- Generate a status report on the latest patch updates
Patch Management for Cyber Security
Software vendors release patches to fix vulnerabilities identified after a software product or application is released. Patch management enables patch testing and deployment, which is a critical aspect of cyber security. Quick responses to patch updates mitigate the chances of data breaches that can occur due to unpatched software.
ITarian Patch Management software offers future-proof and scalable patch management solutions and strategies to protect and secure your business endpoints with quick and latest patch updates.
There’s a lot of software running in your organization, and none of it is flawless. Which means a lot of patches from multiple sources get released on an ad hoc basis, Patch Tuesday notwithstanding. You can’t simply wait to deploy patches when it’s convenient, because leaving those security flaws and major bugs unpatched leaves your business vulnerable. And while managing patches can be complex and tedious, the alternative of getting hit with a security breach is infinitely worse.
ITarian Patch Management allows you to:
- Identify which endpoints contain vulnerabilities and need to be patched
- Create policies to automatically apply updates to groups of tagged endpoints at scheduled times
- Remotely deploy operating system updates for Windows and Linux machines
- View dashboard statistics for breakdowns of available updates for endpoint machines
Globally, there were an estimated 44.4 million people with dementia in 2013. That number is set to reach 75.6 million by 2030, and 135.5 million by 2050, placing increasing pressure on the medical community to prevent, slow or stop the disease. The lack of proper treatment for Alzheimer’s has been mobilizing new allies in the fight towards global eradication of this disease.
With the advent and permeation of big data into healthcare, scientists believe they have a new instrument to fight the disease.
The method will involve sifting through troves of patients’ medical information, including all available tests and scans, with the aid of a supercomputer, in order to glean patterns that might point towards the links and causes of neurodegenerative disorders, reports an article published in The Globe and Mail.
Michael Strong, the dean of the school of medicine and dentistry at the University of Western Ontario notes, “Up until really quite recently, most of these studies have tried to link together one or two variables. So we have [brain] imaging and we look at cognition or we have genetics and we look at behaviour.”
Being able to collate a substantial number of variables through a single pool of data would open up new horizons, said Dr. Strong, the lead of a $28.5-million Ontario research project that is looking to integrate big data into the treatment of brain diseases.
Challenges lie ahead, as aggregating, categorizing and sharing a large amount of patient data globally is “proving to be a legal, ethical and logistical” issue. On Monday, the Organisation for Economic Co-operation and Development held a workshop on the subject in Toronto, attended by more than 50 scientists and doctors, some from the field of computing, along with policy experts and patient advocates.
As the global average age, and with it the number of neurodegenerative disorders, continues to rise, governments and healthcare institutes are taking heed and putting up a better fight against this growing issue.
EB: What is NVMe?
TL: NVMe or NVM Express, short for Non-Volatile Memory Host Controller Interface Specification, is a new high-speed storage protocol. Non-Volatile Memory (NVM) refers to Solid State Drives (SSDs) and flash storage. The NVMe protocol was designed from the ground up to enhance the low latency and high performance of the NVM storage media.
That enhancement is necessary because, while the shift from mechanical Hard Disk Drives (HDDs) to silicon SSDs and flash has brought about major performance, capacity and reliability improvements, there have been issues too. The storage media suffers from performance bottlenecks caused by the technology connecting the storage to the rest of the IT system. Adapters and protocols originally designed for slower hard drives just haven’t been able to keep up. This is where NVMe comes in. Unlike legacy protocols, NVMe has been created to capitalise on the performance benefits of flash and SSDs. And this opens up a whole range of opportunities from massive web applications and supercomputing applications to powerful data analytics environments such as industrial IoT.
EB: How does it work?
TL: NVMe was developed for SSDs and flash memory to enhance the internal communication between the IT environment and storage solutions. Flash memory is already fast – NVMe makes it even faster – enabling communication with the storage device across thousands of parallel command queues.
Just to give you an idea of scale, HDDs typically employ a single queue with 32 commands, while NVMe boasts 64,000 queues and 64,000 commands per queue. NVMe then streamlines these commands so that the flash technology only sees those that it requires to store and retrieve the data. By making the command structure that much more efficient, NVMe reduces CPU cycles, reduces latency and increases IOPS. It’s not hard to see how it speeds up performance.
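To put those figures in perspective, the arithmetic below simply multiplies the queue and command counts quoted above to compare how many I/O commands each model can keep in flight at once:

```python
# Outstanding-command capacity implied by the figures quoted above.
legacy_queues, legacy_depth = 1, 32        # single queue of 32 commands
nvme_queues, nvme_depth = 64_000, 64_000   # NVMe: 64,000 queues x 64,000 commands each

legacy_capacity = legacy_queues * legacy_depth   # 32 commands in flight
nvme_capacity = nvme_queues * nvme_depth         # 4,096,000,000 commands in flight

print(f"NVMe can keep {nvme_capacity // legacy_capacity:,}x more commands in flight")
```

Actual deployments use far fewer queues than the protocol maximum, but the headroom illustrates why NVMe scales with highly parallel flash media where a single-queue protocol cannot.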
EB: How is this technology impacting the flash storage market?
TL: A paradigm shift is clearly in the works. Many large enterprises and service providers are looking to replace their traditional flash arrays with NVMe-based flash over the next couple of years and in turn, it is likely that we will see traditional flash replace most HDDs.
I’m bound to say that as an NVMe vendor – but it’s not just my opinion. ESG’s 2017 European Storage Trends Survey of over 400 European IT professionals found that 10 percent of the organisations asked said they are already using NVMe, 26 percent are planning to deploy it, and another 34 percent say they are interested in deploying NVMe-based technologies. And that trend is set to rise: a recent report by G2M Research predicts the NVMe market to reach $57 billion by 2020 with a 95 percent CAGR.
EB: What would be the main challenges for any business looking to adopt NVMe?
TL: To enjoy the full performance benefits of NVMe flash, storage needs to be utilised by the application locally, in-server. This often results in wasted capacity and performance as without an additional management layer it is virtually impossible to orchestrate beyond these boundaries. To address this and maximise ROI on an NVMe investment, businesses, particularly those with heavy workloads, will require a software management layer that can level out NVMe utilisation across the entire IT infrastructure.
EB: What benefits does the protocol offer businesses?
TL: Business is changing: we’re demanding more from our applications and our IT infrastructure. Think about big data analytics programmes or high-performance computing research, for example. High performance is essential, slow running apps impact on the credibility of results.
TL: NVMe removes bottlenecks and can handle four times more parallel IO commands than SAS/SATA SSD controllers, delivering an extraordinary performance boost and unleashing the capabilities of the most demanding business applications. Plus, fewer NVMe drives are required to achieve the same concurrent workload performance levels, which makes it a lot easier to maintain a balanced CPU/IO ratio.
The growth of the autonomous vehicle industry has seen a lot of wild suppositions about when we’ll see the first completely self-driving cars available for public use. As it stands, however, most of the self-driving vehicles on the roads today are solely concerned with aiding drivers as opposed to driving themselves. As most who follow the industry know, there hasn’t been a genuinely autonomous vehicle developed as yet. Still, several manufacturers have made great strides in creating cars that respond and react to the world around them. In testing, results have been promising, suggesting that we might actually see these vehicles become commercially viable shortly.
Some Kinks to Work Out
Some autonomous vehicle crashes have resulted in fatalities. Only a few crashes have occurred to date, but they raise some interesting questions about the potential morality of an AI system, should full autonomy become reality. The United States has started moving towards legislation that deals with the morality and ethics of AI systems. As it stands, however, Germany is the only country so far that has issued guidelines to manufacturers regarding the ethics of their self-driving AI. The rule of thumb that Germany uses is that, should an accident be unavoidable, the AI shouldn’t make any distinction between targets, nor should it offset victims against each other. The guidelines do offer support for companies that want to include general programming to limit the number of casualties, including that of the vehicle’s passengers.
Coding a Moral AI
The distinction between victims is an important one. A study published in the October 2018 edition of the journal Nature noted that, based on data collected from an international sample size, humans tended to rate some decisions more moral than others. Among the preferences the respondents displayed was a tendency to want to save more people as opposed to fewer, and an inclination to keep younger people alive as opposed to older ones. Many regions also sought to prioritize the status of the individual, preferring high-status people to those of lower economic backgrounds.
The Inherent Issues in Equality
The real moral preferences of the human race differ by cultural location and sensibility. While the study in Nature provides a broad guide, it is by no means exhaustive. Coding an AI along the lines of that study would create a situation where the AI would be deemed illegal in Germany, and anywhere else that follows its lead in implementing guidelines for the ethics and morals of self-driving vehicles. It is important to note, however, that legislation enforcing strict equality sits uncomfortably with the survey evidence, since respondents saw specific individuals as more valuable to society than others due to their contribution.
A New-Age Trolley Problem
The trolley problem is an ethical dilemma that has been used since the start of the twentieth century to explore individual ethics. The premise is that a runaway trolley is bearing down on five people, and the person being asked the question has access to a lever that, if pulled, will switch the trolley to a new line, saving the five. The catch is that on the second line stands a single individual who, unaware that the trolley is headed in their direction, will die instead. Even today, humans faced with this dilemma pause, and questioners have upped the stakes by attaching emotional or economic value to one or more of the hypothetical individuals involved. If humans haven’t been able to solve this problem definitively to date, how realistic is it to expect an AI to solve a similar one? It’s a conundrum that those involved in coding the morality of self-driving AI will need to face head-on.
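As a thought experiment, the German guideline described earlier (limit casualties, make no distinction between individuals) can be sketched as a deliberately simplified decision rule. This is purely illustrative and not how any real vehicle is programmed:

```python
# Deliberately simplified illustration of a casualty-minimizing decision
# rule that treats all individuals as equal: the rule sees only counts,
# never age, status, or any other attribute of the people involved.
# This is NOT how any production autonomous vehicle works.

def choose_path(options):
    """Pick the option with the fewest casualties; counts only, no identities."""
    return min(options, key=lambda opt: opt["casualties"])

# Trolley-style scenario: stay on course (5 casualties) or divert (1).
options = [
    {"action": "stay", "casualties": 5},
    {"action": "divert", "casualties": 1},
]
print(choose_path(options)["action"])  # divert
```

The point of the sketch is how much it leaves out: the survey preferences reported in Nature (age, status, passenger vs. pedestrian) have no representation here, which is exactly the tension the article describes.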
With the use of artificial intelligence (AI) set to be monitored by an equality regulator to ensure technologies are not discriminating against people, there is emerging evidence that algorithms can, in some circumstances, perpetuate bias.
And with research suggesting that factors as simple as the web browser you use, how fast you type or whether you sweat during an interview can lead to AI making a negative decision about you, it is vital for organisations to understand these potential biases to avoid discriminating against people.
Technology ‘a force for good’ but important businesses understand risk of AI bias
The Equality and Human Rights Commission (EHRC) last week published new guidance to help organisations avoid breaches of equality law, giving practical examples of how AI systems may be causing discriminatory outcomes.
The UK-based regulator is not the first to look into monitoring AI usage. Baltimore and New York City have passed local bills that would prohibit the use of algorithmic decision-making in a discriminatory manner, while the US states of Alabama, Colorado, Illinois and Vermont have passed bills creating a commission, task force or oversight position to evaluate the use of AI and make recommendations regarding its use.
And in an attempt to counter AI bias, the European Union has proposed new legislation in the form of the Artificial Intelligence Act, which suggests that AI systems used to help employ, promote or evaluate workers should be subject to third-party assessments.
Responding to the EHRC’s guidance, Marcial Boo, its chief executive, said it was essential that businesses understood the impact of technology on people.
“While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.
“Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them.
“It’s vital for organisations to understand these potential biases and to address any equality and human rights impacts.”
Concerns over ‘urgent need’ to protect public from discrimination through AI
In a study published earlier this year in the journal ‘Tulane Law Review‘, author Professor Sandra Wachter of the Oxford Internet Institute said decisions being made by AI programmes could “prevent equal and fair access to basic goods and services such as education, healthcare, housing, or employment”.
“AI systems are now widely used to profile people and make key decisions that impact their lives,” she said. “Traditional norms and ideas of defining discrimination in law are no longer fit for purpose in the case of AI and I am calling for changes to bring AI within the scope of the law.”
There is an “urgent need to amend current laws to protect the public from this emergent discrimination through the increased use of AI,” the research also warned.
According to the World Economic Forum, biases can also occur when machine learning algorithms are trained and tested on data that under-represent certain subpopulations, such as women, people of colour or people in certain age demographics.
For example, studies show that people of colour are particularly vulnerable to algorithmic bias in facial recognition technology.
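One simple sanity check for the under-representation problem described above is to compare each subgroup’s share of a training sample against a reference population. The group labels, counts, and the 0.8 threshold below are illustrative assumptions, not a standard drawn from the cited research:

```python
# Illustrative check for subgroup under-representation in training data.
# Group labels, counts, and the 0.8 threshold are hypothetical examples.
from collections import Counter

def representation_gaps(sample_labels, population_shares, threshold=0.8):
    """Flag groups whose share of the sample falls below
    threshold * their share of the reference population."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    flagged = []
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        if sample_share < threshold * pop_share:
            flagged.append(group)
    return flagged

# A 100-record sample that over-represents group_a.
labels = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
population = {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1}
print(representation_gaps(labels, population))  # ['group_b', 'group_c']
```

A check like this catches only missing representation in the inputs; it says nothing about biased labels or proxy variables, which need separate auditing.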
And businesses need to carefully scrutinise the data ingested by AI, wrote Dr Rob Walker for AI Magazine. “If they don’t, irresponsibly-used AI can proliferate, creating unfair treatment of certain populations – like unduly limiting loans, insurance policies, or product discounts to those who really need them. This isn’t just ethically wrong, it becomes a serious liability for organisations that are not diligent about preventing bias in the first place.”
Virtual Reality – by definition, it isn’t real, right?
So if it’s true that we will find ourselves increasingly living our lives in the virtual worlds and environments of the metaverse, will this end up cheapening what it means to be human?
Is it possible that we can live meaningful and fulfilling lives as digital avatars? Many parents of teenagers will know it's entirely possible for human beings to spend most of their lives plugged into digital devices and social media. Imagine how much more addictive it will be when rather than smartphone screens filled with carefully posed selfies and vidoes, the virtual worlds at our fingertips will be fully 3D, immersive environments that are indistinguishable from reality, and capable of taking any form we can imagine.
It might sound fantastic, but a 2020 Ericsson survey found that 70% of respondents believe we will be able to create VR worlds that our brains can’t distinguish from real worlds by 2030.
We have already seen people getting married in the metaverse. It’s also been estimated that, by 2035, more than half of romantic relationships will start online – through dating apps or whatever new methods of digital matchmaking emerge as the digital domain continues to evolve. It’s clear that we are not instinctively averse to experiencing major life events virtually. Is this a trend that will continue to grow, as has happened with the encroachment of the internet into all other aspects of life? Or will there be some form of kickback against it - some 21st-century equivalent of the Luddite rebels who protested against the spread of mechanized factories in 19th century England?
Companies like Meta (Formerly Facebook) and Microsoft are definitely optimistic about the metaverse. They are throwing huge amounts of money into making what we used to call “cyberspace” – although that term seems quaint now – as tempting as possible. Brands are also already on board – from business behemoths like McDonald's, Disney, Gucci, and Coca Cola, to entertainment icons like Snoop Dogg and Paris Hilton, everyone is racing to carve out a space in the metaverse where they will be able to connect with - and sell to us -in new ways.
So, there’s plenty in it for celebrities and corporations – but what about us? Well, for me, the true value to be found in the metaverse isn't necessarily in the virtual elements of these worlds – the one-of-a-kind NFT sneakers or home décor that are sold by Nike or Gucci for us to decorate our avatars and virtual homes. It’s more likely to be found in the “real” elements – elements that will inevitably arise due to the fact that we share these worlds with other living, breathing human beings.
In other words, there’s no reason to think that a conversation or other interaction that takes place between two people is necessarily any less “meaningful” simply because it takes place in a virtual environment. As we can tell by the vast number of people who are starting real-life relationships based on connections forged online, even relatively unsophisticated interfaces like dating apps and chatrooms provide enough connectivity for friendships and more to blossom. It makes sense that in a VR environment – possibly one where we are represented by lifelike avatars with our own physical characteristics – bonds would form even more readily.
And what about experiences? We might think of a real-life experience as “meaningful” if it provides an emotional reaction, such as fun or laughter, or if we learn something from it – either in an educational sense or if it teaches us something about ourselves or other people.
If we are really going to have VR worlds that are indistinguishable from real ones, it stands to reason that they will be able to provide us with experiences that are meaningful in this way. Google Earth, for example, already allows us to put on a VR headset and view virtually any part of the planet in a 360-degree, first-person perspective. It seems perfectly feasible that in a few years, we will be able to share these environments with other people and explore them pretty much as if we are actually there. With that in mind, it seems likely that we will at least experience some of the sense of awe that comes with standing in front of the Taj Mahal or some of the fear that would accompany leaping head-first over the edge of the Niagara Falls. In this regard, it's likely we could consider such experiences meaningful.
One thing we have to consider – meaningful experiences don’t always have to be positive, of course. There has already been a good amount of very important discussion around the potential for harm to come to us in VR environments. For example, several people have reported unpleasant experiences with sexual harassment and racial abuse in VR environments. Understandably, it is often reported that the unpleasantness and sense of violation are magnified beyond what is felt by victims of similar abuse in less immersive environments, such as chatrooms or social media, due to the personal nature of connections made in VR.
Technology companies seem to be responding to this heightened level of threat posed to their users – Meta, for example, has implemented panic controls that let users immediately warp to a safe area or put up a protective shield that prevents other people from approaching or touching them. This seems to imply that they, at least, believe that experiences in the metaverse can be meaningful.
Putting aside, for the moment, those impossible-to-verify arguments that we are already living inside a virtual reality simulation, it seems clear to me that we will not find it difficult to find meaning in the virtual worlds. As long as, just as is the case with actual reality, they provide us with the ability to make connections with other people and share experiences with those we meet there. | <urn:uuid:facc2eeb-8ebd-4b06-ae25-653802d0ecbe> | CC-MAIN-2022-40 | https://bernardmarr.com/can-life-be-meaningful-in-the-metaverse/?paged1202=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00670.warc.gz | en | 0.961632 | 1,204 | 2.640625 | 3 |
Applies To: CLOUD VPS DEDICATED
IPv6, short for Internet Protocol version 6, is the next generation protocol for the Internet and replaces the current IPv4. It controls how packets of data are transferred from one device to another over the Internet. It does this by assigning a unique IP address (a long sequence of hexadecimal digits) to every device connected to the Internet. Each packet is then marked with the source IP address and the destination IP address and together they enable the packets to be sent correctly to and from their destination anywhere on the Internet.
In simple terms IP (both v4 and v6) can be viewed in much the same way as a postal service. Envelopes containing information (data packets) are marked with their destination on the front and their sender's address on the back (destination and source IP addresses). These then help the post office (routers around the Internet) send the envelope to the correct place and, if necessary, return it to its sender.
Whilst IPv6 is only just gaining prominence, it has actually been in circulation for over 10 years and so it is well tested and known to be reliable.
Why do we need IPv6?
IPv4 is running out of IP addresses to assign and so a new protocol with more addresses is needed. Whilst it is difficult to predict exactly when IPv4 will run out, there are indications it could be in the not too distant future. Happily, the Internet Assigned Numbers Authority (IANA) has been planning for this for years and has been slowly introducing IPv6 since 1999.
IPv6 will last far longer than IPv4 because it has 128 bit addresses as opposed to IPv4's 32 bit addresses. This means that IPv6 is able to provide us with 2128 individual addresses (or 340,282,366,920,938,463,463,374,607,431,768,211,456), more than enough to last us for the foreseeable future!
As well as more addresses, IPv6 has several other advantages over IPv4 such as being more efficient and secure (with data packet encryption) as well as being better able to support mobile devices.
What does this mean for me?
Internet Users – There is nothing you need to do and you shouldn't notice any changes to how you use and access the Internet.
Businesses – You need to check that your ISP is ready to provide you with IPv6 connectivity. You should also make sure that your hardware, software and network equipment are IPv6 compatible. This shouldn't be as arduous as it sounds as, in practice, most of it already will be.
Memset Customers – Our IPv6 Implementation is currently in the Beta Stages, If you would like to join the beta program then please open a Support Ticket with our Support Team whom will be able to enable this Beta Feature for you, You will then see a button on your control panel which will allow you to allocate IPv6 addresses to your servers. | <urn:uuid:1b90edc7-0e2f-416a-9873-40f110438a42> | CC-MAIN-2022-40 | https://docs.memset.com/cd/IPv6.199070833.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00670.warc.gz | en | 0.951805 | 620 | 3.078125 | 3 |
In July of 2019, Equifax announced they would be paying the largest data breach settlement in history for compromising the personal information of 148 million people in 2017. Riding on the heels of this announcement was Capital One, who reported a massive theft of data of more than 100 million individuals of their own. Although the Capital One assailant was said to have been arrested, the damage had already been done.
Naturally, this may make you think of the security of your personal information, and what is being done to protect it. In the wake of these recent data breach incidents, we will look at some of the world’s biggest data breaches that have affected both consumers and businesses alike.
1. The Yahoo Data Breach – 2013
It wasn’t until 2017 when Yahoo realized — or fully disclosed — the significance of a data breach that originally occurred in 2013. At the time, Yahoo revealed that the names, birthdates, phone numbers, passwords, backup email addresses, and even security questions of approximately 1 billion user accounts were leaked. By 2017, Yahoo divulged that this number was significantly higher, and affected every Yahoo user — about 3 billion individuals.
The hack is said to be the biggest data breach in history and was done by cracking simple and outdated encryption measures used to protect personal information. The data breach (and subsequent announcement) came just as Yahoo was in talks to be acquired by Verizon — and ultimately lowered Yahoo’s value by $350 million.
2. The First American Financial Corporation Data Breach – 2019
This Fortune 500 company (No. 491) took a hit in May 2019 when the real estate title insurance and financial services goliath announced that upwards of 885 million financial records had been leaked. This hack included sensitive information related to mortgage deals, Social Security numbers, images of driver’s licenses, tax documents, wire transaction receipts, and bank account numbers and statements.
The flaw was found through poor security design in The First American Financial Corporation’s website, by which “all the documents were available to anyone with a browser who had a link to a single document [on] the website… no log-in or password information was needed.”
The company was criticized for not sufficiently providing security while collecting massive amounts of sensitive data from hundreds of millions of individuals. Technically, The First American Financial Corporation’s gaffe was not a data breach, but rather a glaring weakness in the design of their website application. Nevertheless, the design was easily hacked, resulting in massive financial loss, damage to the brand, and diminished consumer confidence.
3. The Facebook Data Breach – 2019
The social media giant Facebook has experienced data breaches before; however, the 2019 breach takes the cake, affecting 540 million Facebook users. Essentially, two third party Facebook app developers were to blame, exposing account names, comments, and reactions to posts.
This data was leaked by these third-party companies, as they stored data on a public Amazon cloud computing server, which was then exploited. In addition to account names and users’ reactions to comments on posts, photos, location check-ins, and unprotected passwords were also jeopardized. Facebook has been under federal investigation, and this breach may only add to the heat.
4. The Marriott International Data Breach – 2018
The Marriott International data breach is significant both in the number of individuals affected and in the particular data that was stolen. The names, addresses, credit card information, phone numbers, passport numbers, and travel details were pilfered from 500 million Marriott customers through an exploit in the hotel’s reservation database.
The nature of the data exposed during the Marriott breach is unique in that much of this confidential information can be used to carry out identity theft. Marriott saw a dip in shares and stock performance following their cybersecurity error.
5. The Yahoo! Data Breach – 2014
Back on this list for the second time is Yahoo, which suffered an additional data breach in late 2014. Unrelated to their previous breach, this hack is attributed to Russian cyber thieves (who have been charged by the FBI). Although valuable data was not said to be obtained, the state-sponsored hackers found names, email addresses, phone numbers, birthdates, passwords, and security questions and answers of 500 million user accounts. The second data breach for Yahoo only fanned the flames over concerns for both Yahoo’s and the government’s apparent lack of cybersecurity measures.
6. The Friend Finder Networks Data Breach – 2016
Friend Finder Networks Inc. is a series of adult websites that saw over 400 million user accounts illegally accessed in 2016. Usernames, email addresses, and passwords were found through a vulnerability in their servers. Even accounts thought to be “deleted” were at risk. This hack is akin to the 2015 data breach experienced by Ashley Madison — the extramarital affairs website — although Friend Finder’s losses were said to be about 10 times worse.
Many of the world’s largest data breaches have been made possible by inadequate cybersecurity measures. While a company may not be able to prevent a data breach completely, it can provide a security solution to minimize the damage done by a wide-scale data breach and insulate the individual users (and their data) from being exposed. Due to the many interactions and online services facilitated by business websites, several layers of security should be deployed. This includes securing the many modes of communication and consumer identification used to make purchases, create accounts, or interact with brands and platforms online. | <urn:uuid:b391fc9f-324e-4525-ad71-0ab22bd611b3> | CC-MAIN-2022-40 | https://anonyome.com/2019/12/the-6-largest-data-breaches-of-all-time/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00070.warc.gz | en | 0.962949 | 1,123 | 2.59375 | 3 |
Technological advancement is a major earmark of the 21st century and one of the prominent developments we can boast of is in the field of Artificial Intelligence. Modern artificial intelligence leverages deep learning and this has engendered solutions to complex problems once considered intractable. The fusion of AI and robotics is one of the solutions which appears promising and we shall go into detail.
What Underpins the Success of AI and Robotics?
This boom in AI is facilitated by three major factors;
- The increased access to big data,
- The development of more sophisticated/complex algorithms, and
- GPUs with increased computing power.
Future Applications in Autonomous Tasks
Drones, in the nearest future, will be used to obtain privileged data which can be applied to weather forecasting, storm tracking, and precision agriculture. They can even be used for surveillance purposes, especially in search and rescue.
This year, researchers at Carnegie Mellon University published a paper year titled, “Learn to fly by crashing”. An AR Drone 2.0 taught itself how to navigate through 20 doors by trial and error. It was able to master the path in just 40 flying hours. The future drone will be more equipped with faster learning capabilities and we can safely project more involvement of AI drones in automated tasks with ever-rising efficiency.
Future Applications in Military
When it comes to warfare, military forces are on a constant search for tools for superior advantage. Such little advantage can be the deciding factor for victory or defeat in a battle.
Military forces are beginning to inventively integrate AI into drone technology for warfare purposes. At the moment, AI drones are being used as spy devices, and even as killer machines by arming them with missiles and bombs for destroying enemy forces.
AI Drones for Inspection of Infrastructure
Catastrophes happen, sometimes we can guard against it, most times, and we have to deal with the consequences. While AI is being used to anticipate and prevent future disasters, we also envision improved automated inspections of infrastructures. Tests are ongoing and the New York Power Authority discovered that while it costs $3500 and $3300 to send a helicopter and boat on an inspection mission, a drone can do the job for $300.
Addressing the Elephant in the Room
Is There a Possibility of AI Drones Colonizing Earth?
We are right on track to a final destination where drones will eventually be self-sufficient. In fact, AI-driven drones have been claimed to defeat human pilots in a combat named ALPHA. There are major concerns shared by prominent figures like Elon Musk and Stephen Hawking as to whether drones with fully automated artificial intelligence will eventually become smarter than man.
If this happens, then the possibility of them launching an attack to wipe out humanity is pretty high, just like in the Terminator. However, there are sufficient facts to dispel these fears. AI enthusiasts posit that only those who have no knowledge of AI entertain these beliefs. Some of the prominent AI researchers assert that we are still a long way from self-sufficiency and to ensure that this doesn’t occur, there are three rules in place.
Rules of Robotics
- A robot is not allowed to injure a human being or, through inaction, cause harm to come to a human.
- A robot must obey any orders given to it provided that the order is not conflicting with the first rule
- A robot is required to protect its existence provided that this protection does not flout the first or second rule.
Will Robots Take Your Job?
The statistics provide an almost unanimous answer, the ubiquity of drones will significantly increase efficiency, productivity, and ensure security. This development is predicted to replace about $127 billion in labor costs with the impact being felt in the area of agriculture, transportation, and infrastructure.
We have already seen things such as Automated Call Routing, replacing workers at Telecom Exchange, also, AI-generated content is fast replacing creative writers and this trend is expected to permeate other sectors. While it is impossible to accurately predict the loss this will cause, workers are expected to brace up.
AI has drastically improved the capabilities of drones. Given the amount of money and effort being lavished on this course (projected to more than double by 2020), it is only possible that drones will continue to get more intelligent in the recent years to come. | <urn:uuid:25ad7164-f9ae-4962-9e29-fd39f1f77328> | CC-MAIN-2022-40 | https://www.ciocoverage.com/the-impact-of-ai-on-the-future-of-drones/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00070.warc.gz | en | 0.9428 | 892 | 3.09375 | 3 |
Technology has risen to newer heights over recent years. It has brought on miraculous advancements in industries and has set amazing trends among consumers. With no signs of stopping, technology will continue to grow in its influence, accuracy, efficiency, and power among buyers, industries, etc.
Biometrics hopes to authenticate one’s identity when it comes to security. This proposed technology also hopes to help marketing teams statistically analyze physical and behavioral characteristics.
But what about privacy? What if people don’t want technology to track and validate their identities or other personal information? Understandably, this is an obstacle that can halt the implementation of biometrics, especially for public events.
- The uses of biometrics at events
- The industries that might use biometrics
- The concerns for privacy
Definition of biometric technology
Biometrics, or biometric authentication, refers to technology that dives into a person’s unique, identifiable attributes as a means to identify and authenticate him, her, or them. Human attributes that biometrics is expected to look into are the face, fingerprints, hands, voice, eyes, and others.
These attributes would be used to identify someone without the person having to pull out their identification card and or receipt. Based on its concept, someone will need to be scanned biometrically, in order to get into a restricted area or secured place like an airport, a stadium, or elsewhere.
So, where did biometrics start developing in the consumer sphere?
- Heart rate
- Sleep patterns
- Blood pressure
Now, biometrics-enabled wearables might be used to attend events and keep track of energy levels.
Uses of biometrics at events
Biometrics, as it stands currently, has potential. In fact, there are many possible uses for biometrics, should it move forward in public events. While some uses are no-brainers, other uses can be never-before-seen until now.
First, registration is imperative for access to an event. Traditionally, attendees would need to do all (or some) of the following.
- Sign-in (print or digital)
- Show an ID card
- Show proof of purchase (of a ticket or admission), etc.
But with biometrics, event registration is streamlined for both event organizers and attendees. Similar to having a kiosk set up in the venue, biometrics only requires certain human attributes (i.e. face, fingerprints, etc.) in order to get into the event. This technology can be implemented during preregistration or during in-person registration.
Normally, consumers would pay for tickets (print or digital) to an event. But what if a physical ticket gets lost? What if a digital ticket gets corrupted or compromised by hackers?
The good news is, biometrics hopes to change all that. By eliminating physical or digital tickets, there’s no need to worry about losing an entry into an event. Biometrics can also help prevent fake event tickets from circulating in public or on the black market. Plus, this helps in reducing paper waste from having to create physical tickets, only for them to be thrown away and or wasted away somewhere.
There’s no doubt that people will buy things (i.e. food, merch, etc.) at an event. Normally, people would pay with cash, debit, or credit. However, pick-pockets still exist, and so do cybercriminals. There has to be a way to prevent such things from happening at an event, right? There is!
With biometrics, guests can opt to provide credit card information at preregistration, and then pay for merch with their biometric data. This makes it easier for people to buy things without having to worry about pulling out a physical credit card or cash in front of unsuspecting and or would-be thieves. This is also beneficial for marketing teams since their aim is to look for ways to market their products and or services to more consumers.
Crowd control and restricted access
Events can draw in many people, depending on the theme and situation. Regardless, large crowds can create confusion for both event-goers and organizers. This is especially an issue for restricted-access areas, because sometimes, unauthorized persons may slip through the cracks, which can cause a security risk.
The good news is, biometric authentication ensures that the right people get access to certain places. This is especially great for VIP sections and restricted areas. Biometrics also prevents unauthorized individuals from entering restricted sections.
Tracking guests’ biometrics
Since wearable biometric tech (like FitBit) already tracks specific vitals, wearables can also play a role at events. Biometrics has the potential to keep track of people reacting to stimuli in real-time. This allows companies guests’ emotional and physical responses to certain situations during an event. As a result, event runners can improve attendee experiences for future events, based on what the wearables show.
Facial recognition has been growing more and more in today’s technology. This form of biometrics involves scanning one’s face, as they request access to a place, such as an event. This adds an extra layer of security since facial recognition puts… well, a face to someone, rather than just allowing them to simply show an ID card.
Facial recognition also allows for security cameras to better identify people, should someone try to commit fraud, thievery, vandalism, and so on during the event. The technology also makes creating accurate heatmaps possible for both event organizers and marketers by looking into foot traffic. This also makes it possible for first aid personnel, law enforcement, etc. to tend to an emergency situation if need be.
Likely industries that will use biometrics
So, now that we know how biometrics can be used in an event, what types of industries are already looking into this technology? Or what industries are concerning using biometrics?
Although biometrics might still be fairly new, certain industries are already considering the prospect of authenticating people’s identity with the technology.
- Biometrics can tap into people’s preferred music playlist, and play “the right music.”
- Biometrics can monitor how people are feeling, and how they’re responding to certain songs, genres, and situations. In other words, what types of songs and music will they dance to? What songs or music make them feel good.
- Biometrics can create a heatmap of people, based on the music that’s playing. In other words, how many people are leaving because they didn’t like the song? How many people are staying because they love the music?
- Biometrics can help out during the event admission phase. When people enter a stadium or center, they’ll be asked to provide human attributes (i.e. fingerprints, facial recognition, etc.), instead of print or digital sports tickets, in order to receive entry into a game or other event.
- Biometrics can be used by consumers whenever they shop for food, merch, etc. at the event. Instead of cash or a physical card to pay for purchases, vendors can scan human attributes to verify card payments.
- Biometric badges or wearables can grant attendees access to a conference.
- Biometric heatmaps can track foot traffic, such as what events and topics are successful. This information, in turn, can be used to make events much better.
What about privacy?
It’s important to note that people are often concerned about their privacy, whether in public or online. “Nowadays, consumers live in a connected world, where the Internet and social media reign supreme when it comes to technology. So, it makes sense that users will want privacy whenever they use technology.
So, where does biometrics stand, when it comes to privacy?
In order for biometrics to legitimately work for everyone, there needs to be a stable balance between privacy and security. Overall event participation requires that not only everyone is safe, but also that a person’s privacy is respected. With that said, personal information needs to be handled responsibly to ensure both identity validation and privacy.
Using the right biometric tech
Now, it’s important to note that while some people might be skeptical about allowing biometrics to be a part of events, there are still some that desperately want better security. The good news is, when biometrics is executed properly – meaning that they won’t force people to share personal and biological information if they’re not comfortable – it can be a godsend for security in event venues.
As you can see, biometrics is still being considered, if not 100% implemented yet. As this overview has shown, there is still more ground to cover before all events take advantage of biometrics. With more improvements underway for this rising technology, security and privacy would soon go hand-in-hand, when it comes to making events safer and more attractive to guests, organizers, and marketing teams alike. | <urn:uuid:78d04aa5-ba35-4169-8b7c-ec5765e72a70> | CC-MAIN-2022-40 | https://www.bayometric.com/using-biometrics-at-events-security-over-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00070.warc.gz | en | 0.944762 | 1,920 | 2.890625 | 3 |
What Is Network Mapping?
Network mapping is a process used to visualize the physical and virtual equipment in your IT network, whether discovered through SNMP, Layer 3 scanning, or other methods.
Network mapping can provide network administrators key performance insights such as device status, physical connections, traffic metrics, and more so they can troubleshoot issues faster and maximize uptime.
A network map is simply a document that helps IT better manage the network. Maps can function as a visual topology of all your network devices, a geographic map of where devices are located, or a dashboard of network performance. Depending on how they’re created, maps can also tell you information like the IP address of specific devices, how devices are connected, or their availability statistics.
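Under the hood, each entry on such a map is just a record of device details plus performance history. The sketch below (hypothetical field names, Python standard library only) models one device with its IP address, connections, and poll history, and derives an availability percentage from it:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceRecord:
    """One node on a network map (illustrative fields only)."""
    name: str
    ip: str
    connected_to: list = field(default_factory=list)
    poll_history: list = field(default_factory=list)  # True = up, False = down

    def availability(self) -> float:
        """Percentage of polls in which the device responded."""
        if not self.poll_history:
            return 0.0
        return 100.0 * sum(self.poll_history) / len(self.poll_history)

core = DeviceRecord("core-switch", "10.0.0.1", connected_to=["fw-1", "srv-1"])
core.poll_history = [True, True, False, True]  # 3 of 4 polls succeeded
print(core.availability())  # 75.0
```

A real mapping tool stores far more (interface counters, SNMP sysDescr, and so on), but the availability statistic shown on a map dashboard reduces to this kind of calculation over poll results.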
A network diagram is just another word for a network map. It is a visual representation of the individual components of a network and how they are all connected so that information may flow between them.
A network’s topology is the actual arrangement of the components of a network. Whereas network maps and diagrams are just representations of a network’s arrangement, network topology is the real positioning of nodes, devices, and connections on a network.
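In software, a topology is simply data about which nodes link to which. A minimal sketch (standard library only, made-up device names) represents a topology as an adjacency map built from a link list, then uses a breadth-first search to confirm every device is reachable from the core switch:

```python
from collections import deque

# Physical links between devices (hypothetical topology).
links = [
    ("core-switch", "access-sw-1"),
    ("core-switch", "access-sw-2"),
    ("access-sw-1", "printer-1"),
    ("access-sw-2", "server-1"),
]

# Build an undirected adjacency map from the link list.
adjacency = {}
for a, b in links:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

def reachable_from(start, adj):
    """Breadth-first search: return all devices reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adj.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Every node reachable from the core means no orphaned devices.
print(reachable_from("core-switch", adjacency) == set(adjacency))  # True
```

The same adjacency data is what a mapping tool renders visually; a device missing from the reachable set would show up on the diagram as disconnected.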
Why Map a Network?
Mapping a network provides visibility into all the devices in your network. Leveraging a network mapping solution streamlines this process, so you don't have to waste time manually creating (or updating) network diagrams.
With network mapping software, you can:
Maintain End-to-End Device Performance
Network maps make seeing the big picture of your network easier so you can maintain availability and prevent business-critical issues.
See Live Network Data
Data updates in real time so you know how devices are performing now instead of an hour ago.
Troubleshoot Issues Faster
With a bird's-eye view of your network, you can quickly pinpoint the location of a problematic device and form a plan to mitigate the issue.
Live network mapping software automatically pulls in any device with an IP address that connects to your network, giving you a real-time view of everyone and everything in your IT environment.
Greater Network and Security Control
Live network mapping software makes it easier to identify unknown devices, problem areas, and threats in real time.
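Flagging unknown devices comes down to comparing what discovery finds against your documented inventory. A minimal sketch (simulated discovery results; a real tool would learn addresses from ping, ARP, or SNMP responses):

```python
import ipaddress

# Known inventory (hypothetical): IPs your team has documented.
inventory = {"192.168.1.1": "gateway", "192.168.1.10": "file-server"}

# Addresses that answered a discovery sweep (simulated here).
discovered = ["192.168.1.1", "192.168.1.10", "192.168.1.77"]

subnet = ipaddress.ip_network("192.168.1.0/24")
unknown = [
    ip for ip in discovered
    if ipaddress.ip_address(ip) in subnet and ip not in inventory
]
print(unknown)  # ['192.168.1.77'] -- a device nobody documented
```

Anything in the `unknown` list is exactly the kind of device a live map surfaces for investigation: a forgotten printer, a contractor's laptop, or a potential threat.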
Types of Network Maps
Physical Network Maps
A physical network map displays the arrangement of the physical components of your network such as desktops, printers, and servers and how they are connected by various wires and cables. Physical network maps are useful in identifying physical network components that are malfunctioning or just simply need updating. These maps are also crucial for efficient network planning.
Layer 2 refers to the data link layer of the network. This is how data moves across the physical links in a network. It's how switches within a network talk to one another. Installing Layer 2 on infrastructure provides high-speed connectivity between devices. It can also provide improved network performance. Layer 2 network mapping gives IT and network professionals valuable information about how devices are physically connected.
While Layer 2 is the data link layer of your network, Layer 3 is the network layer, where devices communicate using IP addresses. Layer 3 mapping scans for the IPs of devices and determines the networks and subnets they’re associated with to build out the Layer 3 map.
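As an illustration of the Layer 3 side, Python’s standard ipaddress module can group discovered interfaces into their subnets, which is the core of building a Layer 3 map. The inventory format and host names below are invented for the sketch:

```python
import ipaddress
from collections import defaultdict

def build_layer3_map(interfaces):
    """Group discovered interfaces by the subnet they belong to.

    `interfaces` holds (hostname, "ip/prefix") pairs, the sort of data
    an SNMP or ARP sweep might return (this inventory is made up).
    """
    subnets = defaultdict(list)
    for host, cidr in interfaces:
        iface = ipaddress.ip_interface(cidr)
        subnets[iface.network].append((host, str(iface.ip)))
    return dict(subnets)

inventory = [
    ("core-sw", "10.0.1.1/24"),
    ("printer-3f", "10.0.1.42/24"),
    ("web-01", "10.0.2.10/24"),
]
for net, members in build_layer3_map(inventory).items():
    print(net, [host for host, _ in members])
# 10.0.1.0/24 ['core-sw', 'printer-3f']
# 10.0.2.0/24 ['web-01']
```

A real mapping tool would learn the prefixes from routing tables or SNMP rather than a hand-written list, but the subnet grouping works the same way.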
Logical Network Maps
Logical network maps are different from physical network maps, in that their primary purpose is to demonstrate how information flows through a network. Think of logical network maps as a sort of flow chart for how data is routed throughout a network and how devices communicate with one another. These diagrams typically include elements such as subnets, routing protocols, and firewalls.
Automated vs. Manual
Many IT teams begin by creating static network maps by hand or with a tool like Microsoft Excel or Visio to help inventory their equipment. There are several disadvantages of manual network maps, which include:
- You must update them every time there’s a change on the network.
- They don’t give you any performance data.
Network mapping software eliminates these problems with automation and real-time data. It auto-discovers devices and maps them on a live diagram, displaying the status of each device in real time. Every time something changes, the map updates, so you’re operating off a live snapshot of network performance instead of stale information.
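The polling cycle behind such a live map can be sketched in a few lines. The probe callable stands in for whatever reachability check a real product uses (an ICMP ping or SNMP get); it is injected here so the sketch stays self-contained:

```python
def refresh_map(devices, probe):
    """One polling cycle of a live network map.

    `devices` is an iterable of addresses already discovered; `probe`
    is any callable returning True when a device answers. In a real
    tool this would be an ICMP ping or SNMP get.
    """
    return {addr: ("up" if probe(addr) else "down") for addr in devices}

# Fake probe for the demo: pretend only the gateway answers.
status = refresh_map(["10.0.0.1", "10.0.0.23"],
                     probe=lambda addr: addr == "10.0.0.1")
print(status)  # {'10.0.0.1': 'up', '10.0.0.23': 'down'}
```

Running this on a timer and redrawing the diagram from the returned dictionary is, in miniature, what "live" mapping means.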
How to Create a Network Map
Creating a network map isn’t as difficult as you might expect. Learn how to create your own network map in just five easy steps.
Network Map Examples
Network professionals prefer to view their networks in a certain layout. Great network mapping software lets you create a network diagram that makes sense to you without the cumbersome job of manually documenting device information. From Google Earth Integration to office floorplans, see how different users have customized their maps:
See how your network is laid out over an actual map of your college campus. Integrating your map with Google Earth gives you a literal bird’s-eye view of your network topology.
Your network map doesn’t have to be complicated. If you prefer a simpler layout, you can set up a network map in the easiest of designs.
Customize your network maps with status colors so you can tell device and network health at a glance.
Overlay your network map with an office floor plan to easily spot problems and go right to the source for fast troubleshooting.
Dive Into Network Mapping
Dive into network mapping with these resources:
Choosing the Right Network Mapping Software
There are several types of network mapping software solutions: free, open source, or enterprise.
- Free network mapping software typically provides a limited feature set and allows you to map a small number of devices or sensors. Organizations that only want to monitor a portion of their IT infrastructure often use it, as do small organizations.
- Open source network mapping software is free to use and typically requires some extra set-up and development work. These solutions are usually best for organizations with in-house developers and the time to dedicate to making open source software fit their needs.
- Enterprise network mapping software is built to meet the needs of modern organizations. Users pay for licenses and technical support. Examples include Intermapper, NetBrain, SolarWinds, and PRTG.
To determine which type you need, read Network Mapping Software: Paid vs. Free - Which is Best?
What to Consider When Choosing Network Mapping Software
The right network mapping tool should give you a comprehensive view of your network devices—and more. As you consider your options, these are some of the most important mapping-related features to keep in mind.
Live, dynamic snapshots of your network's health
Autodiscovery of devices as they join your network
Ability to export maps to Visio and other formats
Layer 3 network mapping capabilities | <urn:uuid:14d7cb51-76ca-4e64-ba62-1e91b31ce43e> | CC-MAIN-2022-40 | https://www.helpsystems.com/solutions/automation/infrastructure/network-mapping | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00070.warc.gz | en | 0.900588 | 1,477 | 3.171875 | 3 |
NIST Cybersecurity Framework
The NIST (National Institute of Standards and Technology) Cybersecurity Framework is guidance based on existing standards and practices to help organizations better manage and reduce cybersecurity risk. Fortified provides a comprehensive security assessment of an organization’s compliance within this Framework, along with direction to better communicate its cybersecurity posture.
In addition to helping organizations manage and reduce risks, the NIST Cybersecurity Framework was designed to foster risk and cybersecurity management communications among internal and external stakeholders.
Fortified counsels clients on how to better leverage the NIST Cybersecurity Framework to understand, manage and reduce cybersecurity risks – determining which activities are most important to assure critical operations and service delivery. In turn, this helps prioritize investments and maximize the impact of each dollar spent on cybersecurity. By providing a common language to address cybersecurity risk management, the NIST Cybersecurity Framework is especially helpful in communicating to stakeholders both inside and outside the organization.
Components of this assessment include improving privacy and security communications, awareness and understanding between and among IT, planning and operating units, as well as senior executives of organizations.
Interested in other Advisory Services? Fortified Health Security offers the following: | <urn:uuid:8576c80d-d776-43e2-a849-d56960aaea10> | CC-MAIN-2022-40 | https://fortifiedhealthsecurity.com/nist-cybersecurity-framework/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00070.warc.gz | en | 0.948846 | 236 | 2.59375 | 3 |
It’s an unfortunate reality in our education system that cost often outweighs improvements in education. Budget concerns too often win out over what benefits students, presenting a barrier against progressive, blended-learning techniques that schools desperately need. A 2015 survey conducted by TES Global revealed that 48 percent of teachers believe cost is the main influencer of whether an institution is equipped with educational technology, rather than student outcomes.
Many teachers understand the need to integrate technology into the classroom but lack the proper tools and training necessary to make this happen. Unfortunately, costs and anxiety about technology prohibit teachers from training in the systems that would make their classrooms more effective.
Offer Sufficient Training for Technology
A survey conducted by Samsung found that a whopping 90 percent of teachers believe integrating technology in the classroom is important to student success, yet 60 percent admitted to feeling underprepared to use technology in the classroom setting.
Other teachers don’t even attempt to embrace the latest digital technology, complaining that it is too complex (37 percent) or that their schools don’t offer proper training (63 percent). They lack role models for blended-learning classrooms, and even after workshops and other attempts at training, they find they still lack the right tools for successfully integrating technology into the classroom.
Enhance Student Learning Outcomes
Teachers acknowledge the significance of integrating technology in the classroom (91 percent say it’s important to achieving success), and they hold their students’ best interest at heart when they admit technology in the classroom encourages hands-on involvement (81 percent). In fact, the impact on students is the main reason for teachers’ concern about their lack of technological training.
Students experienced in blended learning can adapt to changing educational environments, giving them an edge over students taught in traditional ways. Invaluable skills, such as creating video and audio content, will increase students’ chances of succeeding after they graduate. Students who don’t experience a blended-learning classroom miss out on the benefits of social collaboration and multichannel navigation, and they’re liable to fall behind the 21st century learning curve.
Incite Change in Teachers’ Tech Training
School districts are widely failing in their efforts to train teachers about technological advances. Only a handful of innovative schools have created training programs, while the rest assert the need for role models to follow. District leaders aren’t checking in on teachers after workshops to ensure integration, nor are they offering help to those teachers who are still struggling.
To close the gap between the ideal and the real, measures must be taken to improve teacher training. Teacher training sessions should be more interactive, encouraging teachers to explore the functions of a technology and learn how to harness its full potential in the classroom environment. The assumption that teachers will learn on their own time or be willing to attend unpaid training seminars will only serve to reinforce current classroom downfalls.
Offering incentives to teachers who go the extra mile to create a blended classroom is another important step in promoting technology use in the classroom. Sites like Teachers Pay Teachers offer monetary compensation for teachers who create their own lessons, learning games, and resources. Teachers deserve credit for the time and energy they spend outside of the classroom and will feel more inclined to undertake the task of integrating technology if they’re receiving proper training, and periodic recognition. These are small steps to take to ensure our children’s education, and their digital future. | <urn:uuid:06dacc30-c078-4e50-97be-0e2cf33e5a82> | CC-MAIN-2022-40 | https://mytechdecisions.com/compliance/if-you-want-tech-in-the-classrooms-teach-the-teachers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00070.warc.gz | en | 0.95135 | 698 | 3.28125 | 3 |
Any time you’re dealing with sensitive business data, you need to take care to elevate security measures. But cybersecurity trends are always changing. You can’t (and shouldn’t) jump on every bandwagon that comes along. This article will give you the scoop on MFA so you know what it actually does to provide additional network security.
Multi-Factor authentication is all about making it more difficult for hackers to access your company’s sensitive data, email addresses, files, company credit card numbers, sign-in information and even personal information.
What is multi-factor authentication?
Forbes breaks down the essence of MFA this way:
“Multi-factor authentication is more complex, yet potentially more secure than two-factor, usually requiring additional verification such as biometrics to include voice, retina or fingerprint recognition, etc., which is harder for an attacker to bypass. Depending on the nature of the organization (i.e. maintains critical infrastructure), the risk could outweigh the cost and multi-factor authentication may be preferred.”
An example may help.
If you use an iPhone, there’s a good chance you’re already using MFA. When you use your fingerprint to access your phone, that’s an element of multi-factor authentication in action.
Of course, that’s multi-factor authentication on a consumer level. In business, there are all kinds of applications for advanced cybersecurity, including things like fingerprint readers.
The end result is simple—it’s significantly harder for a hacker to breach your data because the requirements for access are far more difficult to bypass.
How does multi-factor authentication boost network security?
There are multiple ways a cybercriminal can get to your company’s data. That’s why firewall protection, antivirus programs and other standard network security measures are so important.
Unfortunately, that kind of protection will only take you so far. That’s because the overwhelming majority of cybersecurity breaches happen not because of a technical breakdown, but because of something a human being did or failed to do. If you don’t secure data at the human level, a breach is simply more likely.
Not only that, but password theft is alarmingly common and constantly evolving. Phishing scams, keylogging and pharming don’t take advantage of human error, per se. But the result is the same. Data is compromised because user passwords are compromised.
A multi-factor authentication method forces anyone accessing data to use more than a password alone. Even if users’ passwords are compromised, MFA means data is still safe.
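To make the extra factor concrete: the six-digit codes produced by authenticator apps are time-based one-time passwords (TOTP), standardized in RFC 6238. A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server verifies a submitted code by computing totp() for the current time step (and usually the adjacent steps, to tolerate clock drift) and comparing with a constant-time comparison such as hmac.compare_digest, so a stolen password alone never grants access.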
Two-factor authentication vs multi-factor authentication
Two-factor authentication is more or less what it sounds like—two pieces of information are needed for access.
For instance, at an ATM you need two pieces of information to access your account—your ATM card and your PIN. But multi-factor authentication ups the ante. You’re required to provide multiple (as in, more than two) pieces of information for access, and one of those pieces of data is typically something completely unique to you. (Think retinal or fingerprint scan.)
Multi-factor authentication and your business
Multi-factor authentication helps with security, productivity, flexibility, and compliance. It gives business leaders an effective way to protect their organization’s infrastructure and adds multiple additional layers of cybersecurity. While it’s never possible to stop all data breaches, it’s well worth your time to do what you can to minimize the possibility that your data will be compromised.
If you’re interested in using MFA in your office, we recommend reaching out to your managed IT services provider. They’re already familiar with your technology and your network. They’ll be in a position to help determine exactly what kind of multi-factor authentication will work best for you and your staff. At ISG Technology, we have partnered with the very best in the industry, Aruba, to provide you with the tools to create the best mobile workplace and prevent a cyberattack. | <urn:uuid:f889904c-9808-43bd-bb0f-0e0dced839d7> | CC-MAIN-2022-40 | https://www.isgtech.com/category/security/page/3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00270.warc.gz | en | 0.913042 | 849 | 2.5625 | 3 |
IPv6 Security: 5 Things You Need to Know
Here are key facts you need to know about IPv6 and network security:
1. The IPv6 protocol suite was designed to be more secure than IPv4, but that doesn't make it automatically so.
Merike Kaeo, chief network security architect for Double Shot Security and author of multiple technology papers on IPv6 security, points out that IPv6 was architected to be more secure, but that design was based on the attacks happening in the late 1990s. For example, IPv6 routers handle fragmenting of packets differently, and the IPv6 protocol spec mandates deployment of IPsec, the protocol suite that authenticates and encrypts IP packets. Both of those things were designed to enhance security.
But threats have become more sophisticated, and deployments don't always follow the original plans. "For instance, the IPv6 protocol spec mandated that you had to implement IPsec to be compliant," Kaeo says. "But in reality, when people first started implementing IPv6, they weren't always using IPsec, and if they were using it, that doesn't mean they are implementing it properly."
Implementing IPsec properly isn’t like "flipping a switch," adds Thomas Maufer, director of technical marketing for Mu Dynamics, a testing and application validation company. It requires having a public key infrastructure (PKI), which is a repository and management system for digital certificates. Managing those certificates within an enterprise is one thing, but connecting two enterprises is a different level of challenge.
"A lot of operational things are not in place to do IPsec, and that has nothing to do with IPsec or people's best intentions," Maufer says. "Mu has found a number of vulnerabilities with key negotiation protocols; these are just software, and software is going to have bugs. If you are going to deploy something and you believe it is secure, you had better be testing it thoroughly to see that it really is."
A study using extensive nationwide registry data showed that girls born extremely preterm, earlier than 28 weeks gestational age, were three times more likely to be diagnosed with depression than peers born close to the expected date of delivery. Increased risk of depression also applied to girls and boys with poor fetal growth born full-term and post-term. The effects of poor fetal growth were more evident with increasing gestational age.
All the results were adjusted for paternal psychopathology, paternal immigrant status, maternal psychopathology, maternal depression, maternal substance abuse, number of previous births, maternal marital status, maternal socio-economic status, maternal smoking during pregnancy, and the infant’s birthplace.
Childhood depression can be addressed preventively
Depression is a common psychiatric disorder that has been reported to affect 1-2 percent of preschool and prepubertal children and 3-8 percent of adolescents. However, childhood depression is a severe disorder and its prevention can be advanced with the identification of at-risk groups.
“The study highlights the need for preventive interventions for high-risk infants and support programmes for parental mental health during pregnancy and neonatal care, especially for extremely preterm infants and growth-retarded full-term infants. Follow-up care practices should include psychosocial screening and developmental testing for children born preterm and their families, with appropriate support for sound mental health,” says researcher Subina Upadhyaya from the Research Centre for Child Psychiatry, University of Turku.
“Future studies should examine the risk associated with preterm birth and infant long-term outcomes in the present era of family centered neonatal care practices,” she continues.
The study included 37,682 children born in Finland between January 1987 and December 2007 and diagnosed with depression. They were compared with 148,795 matched controls without depression.
The study is part of a larger body of research that investigates the associations between antenatal risk factors and major psychiatric disorders.
“The results are significant both for understanding the risk factors for psychiatric disorders and for prevention,” notes the primary investigator, Professor Andre Sourander.
The study belongs to the INVEST Research Flagship funded by the Academy of Finland Flagship Programme. INVEST aims at providing a new model for the welfare states that is more equal, better targeted to at risk groups, more anticipatory as well as economically and socially sustainable. | <urn:uuid:30f3061a-9964-4df9-89b2-164e93146347> | CC-MAIN-2022-40 | https://www.businessmayor.com/children-born-extremely-preterm-are-more-likely-to-be-diagnosed-with-depression/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00270.warc.gz | en | 0.954269 | 490 | 2.859375 | 3 |
The combination of the polymorphic nature of malware, failure of signature-based security tools, and massive amounts of data and traffic flowing in and out of enterprise networks is making threat management using traditional approaches virtually impossible.
Until now, security has been based largely on the opinions of researchers who investigate attacks through reverse engineering, homegrown tools and general hacking. In contrast, the Big Data movement makes it possible to analyze an enormous volume of widely varied data to prevent and contain zero-day attacks without details of the exploits themselves. The four-step process outlined below illustrates how Big Data techniques lead to next-generation security intelligence.
Malware is transmitted between hosts (e.g. server, desktop, laptop, tablet, phone) only after an Internet connection is established. Every Internet connection begins with the three Rs: Request, Route and Resolve. The contextual details of the three Rs reveal how malware, botnets and phishing sites relate at the Internet-layer, not simply the network- or endpoint-layer.
Before users can publish a tweet or update a status, their device must resolve the IP address currently linked to a particular domain name (e.g., www.facebook.com) within a Domain Name System record. With extremely few exceptions, every application, whether benign or malicious, performs this step.
Multiple networks then route this request over the Internet, but any two hosts never connect directly. Internet Service Providers connect the hosts and route data using the Border Gateway Protocol. Once the connection is established, content is transmitted.
If researchers can continuously store, process, and query data gathered from BGP routing tables, they can identify associations for nearly every Internet host and publicly routable network. If they can do the same for data gathered from DNS traffic, they can learn both current and historical host name/IP address associations across nearly the entire Internet.
By combining these two Big Data sets, researchers can relate any host’s name, address, or network to another host’s name, address, or network. In other words, the data describes the current and historical topology of the entire Internet — regardless of device, application, protocol, or port used to transmit content.
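A toy version of such a store shows why the combined data is powerful: once name-to-IP and IP-to-name indexes exist, pivoting from one host to every host it has ever shared infrastructure with becomes a simple set lookup. The class and domain names below are invented for illustration:

```python
from collections import defaultdict

class PassiveDNSStore:
    """Minimal store of observed (host name, IP address) resolutions.

    A production system ingests billions of DNS answers with timestamps;
    this sketch keeps only the two indexes that make pivoting possible.
    """

    def __init__(self):
        self._ips_for_name = defaultdict(set)
        self._names_for_ip = defaultdict(set)

    def observe(self, name, ip):
        self._ips_for_name[name].add(ip)
        self._names_for_ip[ip].add(name)

    def co_hosted(self, name):
        """Every name that has ever shared an IP address with `name`."""
        hits = set()
        for ip in self._ips_for_name[name]:
            hits |= self._names_for_ip[ip]
        return hits - {name}

store = PassiveDNSStore()
store.observe("bad-site.example", "203.0.113.7")
store.observe("other-bad.example", "203.0.113.7")
store.observe("clean.example", "198.51.100.9")
print(store.co_hosted("bad-site.example"))  # {'other-bad.example'}
```

If "bad-site.example" is a known malicious host, every co-hosted name becomes an immediate candidate for investigation, without ever inspecting the content either host serves.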
Extracting Actionable Information
While storing contextual details on a massive volume of Internet connections in real-time is no easy task, processing this data in order to extract useful information about an ever-changing threat landscape might be nearly impossible. There is an art to querying these giant data sets in order to find the needles in the haystack.
First, start with known threats. It’s possible to learn about these from multiple sources, such as security technology partners or security community members that publicly share discoveries on a blog or other media site.
Second, form a hypothesis. Analyze known threats to develop theories on how criminals will continue to exploit the Internet’s infrastructure to get users or their infected devices to connect to malware, botnets and phishing sites. Observing patterns and statistical variances regarding the requests, routes and resolutions for malicious hosts is one of the keys to predicting the presence and behavior of malicious hosts in the future.
Spatial patterns can reveal malicious hosts, since they often share a publicly routable network (aka ASN) with other malicious websites — for example, same geographic location, same domain name, same IP address, same name server host storing the DNS record or other objects. Infected devices connect with these hosts more often than clean devices do.
Temporal patterns can be used to identify malicious hosts by showing evidence of irregular connection request volume or new domains with sudden high spikes in volume immediately after domain registration. Statistical variances, such as a domain name with abnormal entropy (gibberish), can also reveal malicious hosts.
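The entropy signal is straightforward to compute. A sketch of a Shannon-entropy scorer for domain labels (the example strings are made up; real systems combine this with many other features):

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Bits of entropy per character of a domain label."""
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in Counter(label).values())

# A dictionary word repeats letters and scores low; a DGA-style
# gibberish label uses many distinct characters and scores high.
for name in ("google", "x7f2qw9zk4"):
    print(name, round(shannon_entropy(name), 2))
```

A classifier would not flag on entropy alone, since plenty of benign hosts have random-looking names, but a high score combined with a freshly registered domain and a sudden traffic spike is a strong suspicion signal.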
Third, process the data — repeatedly. On the Internet, threats are always changing. Processing a constant flow of new data calls for a real-time adaptable machine-learning system. It needs classifiers that are based on a hypothesis. Alternatively, the data can be clustered based on general objects and elements, and training algorithms can collect a positive set of known malicious hosts as well as a negative set of known benign hosts.
Fourth, run educated queries to reveal patterns and test hypotheses. After processing, the data becomes actionable, but there may be too much information to effectively validate hypotheses. At this stage, visualization tools can help to organize the data and bring meaning to the surface.
For instance, a researcher may query one host attribute, such as its domain name, but receive multiple scored features outputted by each classifier. Each score or score combination can be categorized as malicious, suspicious or benign and then fed back into the machine-learning system to improve threat predictions.
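The scoring-and-categorization step might look like the following sketch; the plain averaging rule and the 0.4/0.8 thresholds are illustrative assumptions, not values from any production system:

```python
def categorize(feature_scores, lo=0.4, hi=0.8):
    """Collapse per-classifier scores (each in 0..1) into one verdict.

    Assumption: scores are simply averaged and cut at lo/hi. A real
    system would learn weights and thresholds from training data.
    """
    avg = sum(feature_scores.values()) / len(feature_scores)
    if avg >= hi:
        return "malicious"
    if avg >= lo:
        return "suspicious"
    return "benign"

scores = {"entropy": 0.9, "domain_age": 0.85, "asn_reputation": 0.95}
print(categorize(scores))  # malicious
```

Verdicts labeled "suspicious" are exactly the ones that get routed to human analysts and then fed back into the training loop described above.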
When a host is categorized as “suspicious,” there is a possibility of a false positive, which could result in employee downtime for customers of Internet security vendors. Therefore, continuous training and retraining of the machine-learning system is required to positively determine whether a host is malicious or benign.
The process of determining whether suspicious hosts are malicious or benign can be cost- and resource-prohibitive. To validate threats across the entire Internet would require an army of analysts. The good news is that there are thousands of potential analysts in the security community, including security-savvy customers. The bad news is that security vendors typically keep their threat intelligence to themselves and guard it as core intellectual property.
A different approach is to move from unidirectional relationships with customers to multidirectional communication and communities. Crowdsourcing threat intelligence requires an extension of trust to customers, partners and other members of a security vendor’s ecosystem, so the vendor must provide dedicated support to train and certify the crowdsourced researchers.
However, the upside potential is significant. Given an anointed team of researchers across the globe, the reach and visibility into real-time threats will expand, along with the ability to quickly and accurately respond, minute by minute, day by day, to evolving threats.
As for tactical requirements, the community needs access to query tools similar to those used by the vendor’s own expert researchers. The simpler interface would display threat predictions with all the relevant security information, related meta-scores and data visualizations, and allow the volunteer to confirm or reject a host as malicious.
Applying Threat Intelligence
Threat intelligence derived from Big Data can prevent device infections, network breaches and data loss. As advanced threats continue to proliferate at an uncontrollable rate, it becomes vital that the security industry evolve to stay one step ahead of criminals.
The marriage of Big Data analytics, science and crowdsourcing is making it possible to achieve near real-time detection and even prediction of attacks. Big Data will continue to transform Internet security, and it’s up to vendors to build products that effectively harness its power. | <urn:uuid:79aa4af9-3e68-46b6-a3a1-f5272bf8acfc> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/needle-in-a-haystack-harnessing-big-data-for-security-78961.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00270.warc.gz | en | 0.91774 | 1,406 | 3.09375 | 3 |
Recently popularized by IBM’s highly intelligent Watson supercomputer, which competed on the hit game show Jeopardy, cognitive computing refers to machines that are capable of learning concepts and patterns through advanced language processing algorithms. A system that involves incredibly advanced artificial intelligence, cognitive computing is one facet of computer science that isn’t for the faint of heart.
Consumer Uses for Cognitive Computing
Although much of the hype is centered on big business and big data processing, there are a number of consumer applications. Whereas business leaders might use the technology to increase their bottom line, streamline daily operations and achieve greater profitability, consumers can take advantage of computing to ease some of the burdens of everyday life.
In fact, many consumers are using some form of it without realizing it. Smartphone apps, in-store kiosks, and e-Commerce use cognitive computing to offer users and customers greater accessibility, increased support and cost comparison. According to Deloitte, more than half of all mobile users currently use their devices while shopping in order to browse prices and download coupons.
How Cognitive Computing Is Changing the Workplace
While cognitive computing has yet to reach its full potential, there are nearly infinite possibilities for its future implementation.
According to some sources, cognitive computing can bolster the recordkeeping and documentation process within the healthcare sector by collating patient history, recommending the appropriate diagnostic tools and even suggesting relevant articles or whitepapers. Some analysts predict that approximately 30 percent of all healthcare IT systems will use cognitive computing by the year 2018.
Our ability to manage ad-hoc projects can also benefit from it. By utilizing a system like IBM’s Watson as a personal, AI-driven secretary, project managers can obtain accurate information, monitor timelines and deliverables or even participate in the overall project planning and budgeting phases. People who actively use a project portfolio management strategy can use the technology to achieve greater resource allocation, track multiple projects and collate data from various sources.
People in the insurance industry also stand to benefit from cognitive computing and advanced AI. According to research by experts with IBM, computing systems can bolster human-computer engagement, strengthen information discovery and make important business decisions. Additional benefits include improvements in risk management, cost analysis and customer service.
Companies use cognitive computing in a myriad of other ways, too. Some use the technology as a means of supporting internal troubleshooting and third-party software, while others use it to collect, store and analyze financial data on behalf of individual clientele.
Receiving Brand Name Support
Cognitive computing is receiving support from some of the top names in the IT world. Apart from IBM and their Watson supercomputer, brands such as Microsoft, Cisco, Google and Spark have thrown their respective hats into the mix. Moreover, they all add something different to the concept of cognitive computing.
For example, Microsoft offers various software development kits and utilities in order to support the programming and implementation of advanced artificial intelligence in modern software. Conversely, Cisco’s Cognitive Threat Analytics suite is meant to identify and resolve cyber-threats as soon as possible.
The Longevity of Cognitive Computing
Despite the fact that it’s still a relatively new concept, there’s no denying cognitive computing’s impact on our daily lives. As more companies pledge resources to the development of the technology and as more consumers embrace it in their personal lives, we’ll only see the technology improve even further. Indeed, it’s definitely here to stay.
By Kayla Matthews
Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life.
Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website. | <urn:uuid:2f77085d-4d89-45b2-80a3-b36831ebcce5> | CC-MAIN-2022-40 | https://cloudtweaks.com/2016/09/future-cognitive-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00270.warc.gz | en | 0.931071 | 777 | 3.125 | 3 |
When you open a web page, all sorts of things will need to be done in the background before you get your shiny website on your screen.
Let’s now look at what happens in the networking system to make that possible.
The TCP/IP protocol suite is what makes sending and receiving most of the data across the Internet possible. But how do data packets know how to find us, and how do we know how to find the IP addresses of the web servers where these pages are stored?
Data may not even take the same route in each direction. When we send a request to a server, the packets may flow through one route, while the server’s response may take some other route back to our computer.
The Internet is the biggest computer networking system. At any moment, it knows how to find the best route to any device connected anywhere among all its nodes.
But how is this data transferred across the wires, fibres and air?
Data is divided into small packets. Every time we send a request to a server, the request must first be divided into packets, mostly of the same size. Each of those packets carries the destination IP address so that it can be routed through the network.
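As a rough illustration of that idea, here is a minimal Python sketch: a payload is split into fixed-size chunks, each tagged with a destination address and a sequence number. The address, chunk size, and field names are invented for illustration; real packetization is handled by the operating system’s network stack.

```python
# Minimal sketch of packetization: split a payload into fixed-size
# chunks, each tagged with the destination IP address and a sequence
# number so the receiver can reassemble them in order.
# (Illustrative only; 93.184.216.34 is just an example address.)

def packetize(payload: bytes, dest_ip: str, size: int = 8):
    return [
        {"dest": dest_ip, "seq": i, "data": payload[i * size:(i + 1) * size]}
        for i in range((len(payload) + size - 1) // size)
    ]

def reassemble(packets):
    # Packets may arrive out of order, so sort by sequence number first.
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"GET /index.html HTTP/1.1", "93.184.216.34")
print(len(packets))                   # 3
print(reassemble(reversed(packets)))  # b'GET /index.html HTTP/1.1'
```

Even though the packets are handed back in reverse order here, the sequence numbers let the receiver restore the original request.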
To find the destination IP address of the server (remember that we type a URL into the browser; we usually don’t type an IP address), your computer, before sending out all those packets, contacts a public DNS (Domain Name System) server, which knows the IP address to which the packets must be forwarded in order to reach the page behind your URL.
Public DNS servers form a hierarchical system that keeps track of the IP addresses for all URLs (domain names) registered on the Internet. With this database, DNS can translate our request for a web page’s URL into the IP address of the server on which the web page is stored.
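That hierarchical walk, from the top-level domain down to an authoritative record, can be sketched with a toy resolver. The zone data below is invented for illustration; a real resolver queries root, TLD, and authoritative name servers over the network.

```python
# Toy model of hierarchical DNS resolution: walk a domain name from
# the top-level domain down, with each "zone" delegating to the next,
# until an authoritative record yields an IP address.

ZONES = {
    "com": {"example": {"www": "93.184.216.34"}},
    "org": {"wikipedia": {"www": "208.80.154.224"}},
}

def resolve(hostname: str) -> str:
    node = ZONES
    for label in reversed(hostname.split(".")):  # com -> example -> www
        node = node[label]
    return node

print(resolve("www.example.com"))  # 93.184.216.34
```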
The fight against the deadly COVID-19 disease is a collective effort drawing on the resources of scientists, laboratories and high performance computing centers around the world. This is very much the case at the University of California at San Diego, where a team of researchers is leveraging the massive computational power of a supercomputer at the Texas Advanced Computing Center (TACC) to expose the secrets of the SARS-CoV-2 virus, which causes COVID-19.
This team is led by Dr. Rommie Amaro, a distinguished professor in theoretical and computational chemistry at UC San Diego, working in conjunction with scientists in national research labs who are part of this shared quest to innovate with data. Dr. Amaro explains that her team is using the simulation power of a TACC supercomputer to churn through massive amounts of data to see things that can’t be observed under even the most powerful microscopes.
“One of the main things we’re working on in my group is to try to see things that experimentalists can’t see,” Dr. Amaro says. “We’re trying to get really detailed views into living systems. In particular, one of our goals is to understand more about the structure of the SARS-CoV-2 virus and its molecular piece parts, and how they come together into a working whole. And so we’re using computational simulation to give us insights beyond where we can get with experiments.”
The secrets of a sugary shield
Much of the research conducted by Dr. Amaro and her team revolves around a sugary molecular shield on the SARS-CoV-2 virus, made up of a substance known as glycans. SARS-CoV-2 and other viruses use this spiked sugary cloak to attack the cells of a human host. In essence, the glycans trick the human immune system into seeing them as harmless.
The breakthrough simulations and modeling conducted by the UC San Diego research team have shown the world what the sugary coating actually looks like and how it tries to hide itself from human immune systems. These efforts reveal that the glycans prime the coronavirus for infection by changing the shape of their spike protein. Scientists hope this basic research will add to the arsenal of knowledge needed to defeat the COVID-19 disease.
“When we know how it’s hiding, that gives us a chance to try to understand how we can do a better job in making things like vaccines to detect the virus in the body,” Dr. Amaro says. “Hopefully we can translate those understandings into things that will be useful either in the clinic or the street — for example, if we’re trying to reduce transmission for what we know now about aerosols and wearing masks.”
Drawing on the power of HPC
For these data- and compute-intensive scientific investigations, Dr. Amaro and her colleagues draw on the computational power of the TACC Frontera supercomputer. With a peak-performance rating of 38.7 petaflops, Frontera is currently the 9th fastest supercomputer in the world.
Dell Technologies provided the primary computing system for Frontera, based on Dell EMC PowerEdge C6420 servers. The initial implementation of the system has more than 8,000 two-socket nodes, more than 16,000 Intel® Xeon® Scalable processors and 448,448 cores. In addition, Frontera incorporates several technical innovations, including Intel® Optane™ DC Persistent Memory for some large-memory nodes, CoolIT Systems high-density Direct Contact Liquid Cooling and a high performance HDR 200 Gb/s InfiniBand interconnect.
With the Frontera supercomputer, and community codes like the nanoscale molecular dynamics (NAMD) engine, Dr. Amaro and her colleagues were able to scale their compute problem across many hundreds and even thousands of nodes. And in further work, they were able to simulate the entire virus itself, which has about 300 million atoms.
“This is an enormous problem that would not be possible to even think about computing on any sort of normal desktop or laptop, or any sort of cluster that one generally has access to,” Dr. Amaro says. “It really takes the ability to scale many simulations across many hundreds and even thousands of nodes in order to make progress in this space.”
To Learn More
Find out more about Dr. Amaro’s work in the case study. For a closer look at the capabilities of the Frontera supercomputer at Texas Advanced Computing Center, see the Dell Technologies case study “A New ‘Frontera.’”
Explore more HPC Solutions from Dell Technologies and Intel.
Source: TOP500 List – November 2020. | <urn:uuid:1390aa67-36a6-41a5-a301-5af17e646a0d> | CC-MAIN-2022-40 | https://www.cio.com/article/188864/exposing-the-secrets-of-a-deadly-virus-with-the-power-of-hpc.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00270.warc.gz | en | 0.929665 | 1,003 | 3.375 | 3 |
We're in an era where the number of machine identities has already surpassed the number of human identities, which isn’t something that should be ignored from a security perspective.
Whether we talk about an IoT ecosystem containing millions of interconnected devices or application programs continuously seeking access to crucial data from devices and other apps, machine identity security is swiftly becoming the need of the hour.
What’s more worrisome is that cybercriminals are always on the hunt to exploit a loophole in the overall security mechanism in the digital world where machine-to-machine communication is the new normal.
Hence, it’s no longer enough to assume that services and devices accessing sensitive data can be trusted, since a breach of the network can go undetected for months or even years, causing losses worth millions of dollars.
Here’s where the critical role of machine-to-machine (M2M) authorization comes into play.
Let’s understand how M2M authentication works and paves the way for secure machine-to-machine and machine-to-application interactions without human intervention.
What is Machine Identity? Why Does Security Matter Now More than Ever?
Just like humans have a unique identity and characteristics that define a particular individual, machines have their identities that help govern the integrity and confidentiality of information between different systems.
Machines leverage keys and certificates to assure their unique identities while accessing information or gaining access to specific applications or devices.
Today, business systems undergo complex interactions and communicate autonomously to execute business functions. Every day, millions of devices constantly gather and report data, especially concerning the Internet of Things (IoT) ecosystem, which doesn’t even require human intervention.
However, adding stringent layers of security at such a micro-level isn’t a piece of cake, and cybercriminals are always looking for a loophole to sneak into a network and exploit crucial information.
These systems therefore need to share this data efficiently and securely in transit to the right systems, and to issue operational instructions without room for tampering.
A robust machine-to-machine (M2M) communication mechanism can be a game-changer concerning the ever-increasing security risks and challenges.
What is Machine-to-Machine Authorization?
Machine-to-machine (M2M) authorization ensures that business systems communicate autonomously, without human intervention, and obtain the information they need through granular-level access controls.
M2M Authorization is exclusively used for scenarios in which a business system authenticates and authorizes a service rather than a user.
M2M Authorization provides remote systems with secure access to information. Using M2M Authorization, business systems can communicate autonomously and execute business functions based on predefined authorization.
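In practice, M2M authorization is most often implemented with the OAuth 2.0 client-credentials pattern: a service presents its own credentials (not a user’s) and receives a short-lived, scoped access token. The Python sketch below simulates that exchange with an HMAC-signed token; all names, secrets, and scopes are invented for illustration, and this is not LoginRadius’s actual API.

```python
# Sketch of the OAuth 2.0 client-credentials pattern behind most M2M
# authorization: a service authenticates with its own client ID and
# secret (no human user involved) and receives a scoped, short-lived
# token. All names, secrets, and scopes here are invented.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"server-side-secret"         # held by the auth server
REGISTERED = {"billing-service": "s3cr3t"}  # client_id -> client_secret

def issue_token(client_id, client_secret, scope):
    if REGISTERED.get(client_id) != client_secret:
        raise PermissionError("unknown client")
    claims = {"sub": client_id, "scope": scope, "exp": time.time() + 3600}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_token(token, required_scope):
    body = json.dumps(token["claims"], sort_keys=True).encode()
    ok = hmac.compare_digest(
        token["sig"], hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest())
    return bool(ok
                and token["claims"]["exp"] > time.time()
                and required_scope in token["claims"]["scope"])

token = issue_token("billing-service", "s3cr3t", ["invoices.read"])
print(verify_token(token, "invoices.read"))   # True
print(verify_token(token, "invoices.write"))  # False
```

Because the signature covers the claims, any attempt to widen the token’s scope after issuance is detected at verification time.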
Why Do Businesses Need M2M Authorization?
Since we’re now relying on smart interconnected devices more than ever before, secure data transfer is undeniably a massive challenge for businesses and vendors offering smart devices and applications.
Moreover, as these smart devices and applications continuously request access from other devices and applications without any human involvement, the underlying risks and security threats increase.
IT leaders and information security professionals can’t keep an eye on things at this micro-level, which is perhaps the reason why there’s an immediate need for a robust mechanism that can handle machine-to-machine communication and ensure the highest level of security.
Apart from this, businesses also need to focus on improving the overall user experience, since adding stringent layers of security can easily hamper it.
Here’s where a reliable CIAM (consumer identity and access management) solution like LoginRadius comes into play.
How LoginRadius’ Cutting-Edge CIAM Offers Seamless M2M Authorization?
LoginRadius M2M helps businesses to provide flexible machine-to-machine communication while ensuring granular access, authorization, and security requirements are enforced.
LoginRadius’ M2M Authorization offers secure access to improve business efficiency and ultimately enhances customer experience. M2M provides several business benefits, including, but not limited to:
- Seamless user experience backed with robust security
- Efficient authentication and data exchange
- Grant, limit, or block access permissions at any time
- Secure data access across multiple business systems
- Granular data access with predefined scopes
With the rise of smart devices, concern about machine identity theft is growing among developers and vendors offering these services.
Organizations need to understand the complexity of the situation and put their best efforts into incorporating a smart security mechanism that makes machine-to-machine authorization a breeze.
LoginRadius’ cutting-edge CIAM offers the best-in-class M2M authorization that helps businesses grow without compromising overall security.
Originally Published at LoginRadius | <urn:uuid:1760858b-9584-4f68-b0dd-283bac0707a3> | CC-MAIN-2022-40 | https://guptadeepak.com/is-the-rise-of-machine-identity-posing-a-threat-to-enterprise-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00270.warc.gz | en | 0.90154 | 1,002 | 2.78125 | 3 |
Location data analytics has taken on new urgency in the COVID-19 era. This is most obviously true for businesses dependent on in-person activities and foot traffic such as traditional retail. But the ‘new normal’ also includes changes of much wider impact, such as likely permanent shifts in working from home and dramatic acceleration of both pure and mixed e-commerce.
These changes are complex, affecting particular groups of workers and consumers in different ways. For example, service and office workers’ experiences during and after COVID correlate more strongly to the business sector than to ‘collar color.’ These changes are also both rapid and historically unprecedented. This makes them a poor fit for many conventional statistical modeling techniques. Traditional ‘baselines’ at a minimum need to be recalibrated and refactored to consider new cohorts.
But this pandemic is not over yet, especially internationally. So rather than basing projections on quarterly, annual, or decadal lagged data, best-in-breed enterprises are finding new ways to leverage live geospatial data at scale, and pivot rapidly when new information diverges from prior expectations. They are also empowering their analysts to more easily generate, revise and share scenario-based plans with explicit assumptions.
The Importance of Granular Spatiotemporal Data
We are sometimes so accustomed to working with aggregate data that we miss important insights. Of course, some datasets are only available or reliable with a certain level of aggregation, and aggregation can be important in privacy protection. Yet the COVID pandemic is nothing if not varying in time and place, and so an important meta-question for every business must be: are we collecting and using data at appropriate scales?
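To make the trade-off concrete, the short Python sketch below (pure standard library, with invented coordinates) bins individual point events into grid cells at two resolutions. A hotspot that stands out in a fine grid disappears entirely when the data is aggregated too coarsely.

```python
# Why granularity matters: bin point events (lat, lon) into grid
# cells at two resolutions. A hotspot that stands out in a fine grid
# can disappear when the data is aggregated too coarsely.
# (Coordinates are invented for illustration.)
from collections import Counter

events = [(51.513, -0.136), (51.513, -0.137), (51.514, -0.136),
          (51.560, -0.100), (51.520, -0.190)]

def grid_counts(points, cell_size):
    # Snap each point to the corner of its containing grid cell.
    return Counter(
        (round(lat // cell_size * cell_size, 4),
         round(lon // cell_size * cell_size, 4))
        for lat, lon in points)

fine = grid_counts(events, 0.005)  # ~500 m cells: a hotspot of 3 stands out
coarse = grid_counts(events, 0.5)  # ~50 km cells: everything lumped together
print(fine.most_common(1)[0][1])    # 3
print(coarse.most_common(1)[0][1])  # 5
```

At the coarse resolution, all five events land in a single cell, and the cluster of three nearby events (the kind of pattern John Snow exploited) is no longer visible.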
The importance of locational data was actually most famously made clear all the way back in the 19th century when in 1854, a doctor named John Snow manually plotted a geotemporal sequence of cholera cases (see related Wikipedia story). Interestingly, scientific debate about the origins of COVID-19 is also using similar methods today (Bolos 2020). Of course the hand-drawn maps are no longer required, and today’s scientists are coloring their maps based on the gene sequence of the virus mutating over time. But the core method of spatial analysis still applies, and still requires disaggregated data. John Snow would not have gotten far with a map of London census blocks showing cholera cases per decade.
Quantifying New Demographic and Cultural Shifts
The US over the last several decades has seen both intrastate migration to sunbelt cities and pattern shifts within major metros. COVID-19 so far has not appreciably influenced the broader pattern but has flipped the script within metros. Working from home and the continued rise of e-commerce have had enormous implications, not only for offices but also for entire regions. The question, impossible to resolve at least before schools restart in the fall, is where the ‘new normal’ will settle.
According to Pew Research, about 20% of Americans worked from home before the pandemic and 70% currently do. What will these numbers look like in a year? Well, if employees get their way, 54% will continue to work from home. Employers were not surveyed. But safe to say, the eventual range will be something like 30-50%. That is a shift in home-based work we have not seen since before the industrial revolution. How will this affect your business? Recent data from within the pandemic is muddled, and realistically we won’t in the US have an answer until the new school year settles in. Updating your analytics capabilities now so that you and your staff are trained and prepared would be a wise move.
Which Data Are Available and at What Scales?
Geospatial location-based data is at the heart of our ability to measure and respond effectively to these shifts. The precise data needed by a business will obviously vary. But some general sources are clear.
First, the new census data will roll out late in the fall. This is a “full count” and will be useful especially in capturing large scale demographic shifts. But keep in mind the survey period landed in the midst of the pandemic. If you aren’t already doing so, consider reviewing customer data in this light. At a broad level, are your customers still where you think they are? Has their offline and online behavior shifted to match broader trends, or are your customer cohorts different?
Second, consumer behavior and footfall data is available now. This can come from a variety of commercial vendors, or at the county level or above from public sources. However, these data will be particularly valuable to look at in the 4th quarter as parents shift to new school and childcare arrangements. These data at disaggregate levels can get big quickly. So you will want to make sure your spatial analysis tools can gracefully handle the load.
Third, there has been a lot of churn in real estate over the pandemic. From an internal perspective, this means rebalancing your physical spaces may be important. Market values have shifted, and lease rates will inevitably follow. From a general data analytics point of view, this means your prior location-based datasets are due for a major refresh. Take a look especially at any business points of interest or address datasets. These are likely candidates for at least a refresh, if not a major overhaul.
Intuitive Visual Data Exploration and Collaboration
The accelerated speed with which businesses can explore geospatial data is ushering in new forms of inquiry. Previously, data analytics has been couched in the language of mathematics. But thanks to advanced CPU and GPU processing breakthroughs, analytics has become a dramatically more intuitive pursuit. The blistering speed of geospatial analytics now allows anyone—even the non-technical user—to not only visualize huge datasets, but also to dig into them according to their natural curiosity.
When query response rates are measured in milliseconds, people can focus on exploration and forget about the underlying technology. The exploration of COVID-19 data has allowed people to test hypotheses as quickly as they can be considered. Similarly, business insights can be tested against a variety of conditions to examine their viability and impact.
Think about who in your organization might benefit from improved geospatial tools or training. If you haven’t updated your geospatial IT for a while, today’s “no code” dashboarding environments should come as a pleasant surprise. In particular, an end user not necessarily familiar with spatial analysis or GIS tools can feel comfortable with these newer generation tools in hours and not months. Meanwhile, for your more technical staff, consider what the revolution in ‘notebooks’ and spatial data science means to your organization. Notebooks (such as Jupyter) are a way of mixing scripting languages and interactive graphics that transcend the limits of spreadsheets.
Collaboration is playing an increasingly large role in analytics, and for good reason. So as you are thinking about your analytics, keep in mind how and when these can best be shared among your staff, or with stakeholders or customers. Both dashboards and notebooks have become popular due to their strength in “replicability.” How easy or hard is it for you to adjust an assumption made by a colleague and regenerate a new analysis? Do your current methods work only internally and not for customers? How about on mobile?
The big idea here is that analytics insights are only useful to the extent they can influence outcomes. So it’s great if you have single-user tools which give your analysts deep insights. But much better if they can fluidly share these to accomplish business objectives. This capability supports what analysts call “social speed.” In business contexts, multiple rounds and types of analytics sharing occur. In early stages, there is an emphasis on confirmation and feedback; later, the goal is often to convince peers, stakeholders or customers that a particular approach is valid and has been well considered.
Working at social speed, users can easily create and share annotated dashboards without programming knowledge. Instead of simply sharing a static graphic or table, users can share an interactive interface that includes access to full data where appropriate. This allows the recipient to engage themselves with the idea in question—and build confidence that it is correct.
Broader, Faster, Smarter
As access to these advanced tools becomes broader and more democratic, more uses are available to more people. “No code” analytics makes it easier for large numbers of business and government users to explore critical data.
Machine learning, one of the most exciting developments on the horizon, will help these entities arrive at better answers faster. But the benefits don’t stop there. The combination of granular data collection, particularly from IoT sensors, with machine learning technology will enable computers to present findings that would be simply impossible for humans to accomplish manually.
The digitization of maps at scale has also been useful for many organizations. But until now, those maps have been heavily aggregated and static. Their presentation reflects the presence or absence of an object class over a geographic area with long production lag times. Today we’re seeing fully dynamic, granular maps that include quality information. Instead of a map showing where a forest was five years ago, it can reveal where individual trees are currently, and even if they are under stress or healthy.
This kind of map, called a “digital twin,” is much more actionable. It can provide real-time insight into people movement, environmental changes and, in the case of a public health crisis, infection mitigation efforts. “GeoML” can apply to buildings in cities, individual vehicles on roads, or customers in a retail environment. The applications are endless—and a worthy legacy for what has been one of the most definitive, and consequential, global events of our time. | <urn:uuid:69bd6484-7d89-46bc-ba7b-49bdfb978190> | CC-MAIN-2022-40 | https://coruzant.com/analytics/how-big-data-analytics-is-helping-business-in-the-age-of-covid-19/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00470.warc.gz | en | 0.94959 | 2,035 | 2.65625 | 3 |
Most Wi-Fi hotspots don't use encryption to protect your web browsing and Internet activity like private networks can. Additionally, encryption doesn't exist on most wired connections you hook up to in hotels, airports, and other public places. Securing entire public networks just isn't very feasible.
However, you can easily secure your Internet sessions to prevent other nearby Wi-Fi users from snooping on what sites you're visiting and possibly capturing your emails, passwords, and other sensitive information.
You can use a solution, called a Virtual Private Network (VPN), which was originally designed for securely accessing remote networks. In this public network security scenario, a VPN server is hosted by a company, such as the ones we're going to discuss. They also provide a VPN client application, which you install on your computer.
Once you connect to the company's VPN server, no matter where you are, all Internet browsing and traffic is routed to and from the company's network through an encrypted tunnel over the Internet.
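Conceptually, the tunnel works by encapsulation: each original packet becomes the encrypted payload of an outer packet addressed to the VPN server, so eavesdroppers on the local network see only traffic to that one server. The Python sketch below illustrates the wrapping and unwrapping; the IP addresses are documentation-range examples, and real encryption (e.g., TLS) is replaced with a trivial placeholder.

```python
# Conceptual sketch of VPN tunneling: the client's real packet is
# serialized and carried as the payload of an outer packet addressed
# to the VPN server, which unwraps it and forwards it on. A real VPN
# encrypts the inner packet (e.g., with TLS); the placeholder
# encrypt/decrypt below only marks where that would happen.
import json

VPN_SERVER = "203.0.113.10"  # documentation-range address, for illustration

def encrypt(data: bytes) -> bytes:  # placeholder for real encryption
    return data[::-1]

def decrypt(data: bytes) -> bytes:
    return data[::-1]

def tunnel_out(inner_packet: dict) -> dict:
    # Snoopers on the local Wi-Fi see only the outer destination.
    return {"dest": VPN_SERVER,
            "payload": encrypt(json.dumps(inner_packet).encode())}

def vpn_server_forward(outer_packet: dict) -> dict:
    return json.loads(decrypt(outer_packet["payload"]))

request = {"dest": "198.51.100.7", "data": "GET /mail"}
outer = tunnel_out(request)
print(outer["dest"])              # 203.0.113.10 (all an eavesdropper sees)
print(vpn_server_forward(outer))  # the original request, restored at the server
```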
VPNs offer a few benefits in addition to protecting your network traffic from eavesdroppers:
- They bypass network filtering to view blocked websites.
- They use any restricted services: like VoIP, chatting, and instant messaging.
- They hide your IP address—surf anonymously.
- They avoid a country's Internet restrictions.
There are a few varieties of VPN solutions. The most popular for public network security is an SSL-based VPN, which uses similar encryption to what we trust for our banking and government sites.
Without further ado, here are the five hotspot applications and services you can use to secure your public browsing.
This solution, UltraVPN, is based on the popular OpenVPN client/server. Fortunately, its developers follow a stricter open source approach and don't run ads or otherwise try to gain revenue; they accept donations instead. Plus, they don't impose traffic limitations, so you can use the service as much as you want.
The UltraVPN client is basically a modified version of the OpenVPN client, offered for Windows and Mac OS X. The settings are preconfigured, a system tray icon is added, and a customized GUI provides a more user-friendly experience.
Linux users can download the UltraVPN source code and build the binaries. The UltraVPN servers are hosted by Lynanda.
Although you must create an account to use the free service, the process is very simple. Just enter a desired username and password. You don't have to perform email verification or even enter an email address.
Once installed in Windows, you'll see an icon in the system tray. To connect, simply right-click the icon and select Connect.
This icon also features shortcuts to enter Proxy settings if needed. | <urn:uuid:977ef1ea-8007-4c63-aaf3-e9ab1951b76c> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=1586455&seqNum=4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00470.warc.gz | en | 0.942525 | 561 | 2.5625 | 3 |
5G, meaning the 5th Generation of wireless technology, is the latest in a set of standards describing cellular networks. Each generation (1G, 2G, 3G, etc.) is based on a set of telephone network standards that describe the technological implementation of the system.
1G was analog cellular. 2G technologies, such as CDMA, GSM, and TDMA, were the first generation of digital cellular technologies. With 3G technologies like EVDO, HSPA, and UMTS, digital performance levels increased, bringing speeds from 200 kbps (kilobits per second) to a few megabits per second (there are 8 megabits in a megabyte). With the arrival of 4G technologies such as WiMAX and LTE, speeds scaled up to hundreds of megabits per second, even approaching gigabit levels.
5G network speeds take this up another level: download speeds of around 1 Gbps (gigabit per second) have been observed, with a theoretical maximum of around 10 Gbps at the current state of the technology.
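To put those figures in perspective, here is a quick back-of-the-envelope calculation, assuming ideal, sustained throughput (which real networks rarely deliver):

```python
# Time to download a 10 GB (gigabyte) file at ideal, sustained rates.
# 1 byte = 8 bits, so 10 GB = 80 gigabits. Real-world throughput is
# lower, so treat these as best-case figures.
FILE_GIGABITS = 10 * 8  # 10 gigabytes expressed in gigabits

for label, gbps in [("4G LTE (~100 Mbps)", 0.1),
                    ("5G observed (~1 Gbps)", 1.0),
                    ("5G theoretical (~10 Gbps)", 10.0)]:
    print(f"{label}: {FILE_GIGABITS / gbps:.0f} s")
# 4G LTE (~100 Mbps): 800 s
# 5G observed (~1 Gbps): 80 s
# 5G theoretical (~10 Gbps): 8 s
```

In other words, a download that takes over thirteen minutes on a typical 4G connection could, in the best case, finish in seconds on 5G.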
There are three different methods for building a 5G network, depending on the type of assets that a wireless carrier has.
One of the major selling points of 5G is the vastly lower latency that it offers in comparison to previous generations of mobile. As a case in point, the response time of 4G LTE systems is typically around 20-30 ms (milliseconds). With 5G enhanced mobile broadband technology, this network latency drops to around 4-5 ms. And with 5G URLLC (Ultra-Reliable Low Latency Communications) technology, it reduces to a single millisecond.
Performance like this is achieved through significant advances in mobile device technology and mobile network architecture. Significant changes in both the Core Network (Core) and Radio Access Network (RAN) are required to deliver low latency.
In the redesigned 5G core network, signaling, and distributed servers, the key technology strategy is to move content closer to the end-user, and to shorten the path between devices for critical applications. For this kind of “edge computing” application, it’s possible to store a copy or cache of popular content in local servers, so the time to access it is reduced.
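A simplified model of that edge-caching idea: serve a request from the nearby edge cache when possible, and pay the round trip to the distant origin server only on a miss. The Python sketch below uses invented latency figures purely for illustration.

```python
# Toy model of edge caching: popular content is served from a nearby
# edge server at low latency, and only cache misses pay the round
# trip to the distant origin. Latency figures are invented.
EDGE_MS, ORIGIN_MS = 4, 60

class EdgeCache:
    def __init__(self):
        self.store = {}

    def fetch(self, url):
        if url in self.store:            # cache hit: short local path
            return self.store[url], EDGE_MS
        content = f"<content of {url}>"  # miss: fetch from the origin
        self.store[url] = content        # keep a copy for the next user
        return content, ORIGIN_MS

cache = EdgeCache()
_, first = cache.fetch("/popular-video")   # 60 ms: fetched from origin
_, second = cache.fetch("/popular-video")  # 4 ms: served from the edge
print(first, second)  # 60 4
```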
For the low latency of 5G, the Radio Access Network (RAN) must be re-configured so that it is both highly flexible and software configurable. This enables the RAN to support the very different characteristics of the different types of services that the 5G system must cope with.
A virtual, dynamic, and configurable RAN allow the network to perform at very low latency and high data throughput. It also allows the mobile network to adjust to changes in network traffic, network faults, and new requirements.
The answer to the question “Who invented 5G?” isn’t a straightforward matter of “Person X” or “Company Y.” Like all the previous standards governing mobile technology, 5G came about as a result of input from several quarters, together with consultation by the relevant authorities, and due reference to the generations of wireless technology that came before.
The International Telecommunication Union (ITU) is an agency of the United Nations which develops technical standards for communication technologies and sets the rules for radio spectrum usage and telecommunications interoperability. In 2012, the ITU created a program called “IMT for 2020 and beyond” (IMT-2020) to research and establish minimum requirements for the 5th Generation of wireless. After years of consultation, the ITU published a draft report with 13 minimum requirements for 5G, in 2017.
Based on these minimum requirements, the 3rd Generation Partnership Project (3GPP), a collaboration of telecommunications standards organizations, set to work on creating standards for 5G. The 3GPP completed its non-standalone (NSA) specifications for 5G in December 2017, and it’s standalone (SA) specifications in June 2018. The NSA specifications use existing LTE networks for 5G rollout, while SA will use a next-generation core network specifically architected for 5G.
In the near term, network carriers will build 5G networks on the non-standalone model: faster than today's networks, but relying on existing 4G infrastructure.
Other notable events in the history of 5G relate to the testing and trials that led to its first appearances. We can summarize key occurrences in the 5G timeline, as follows:
When will 5G come out? As you can see from the timeline, it’s been around since in-country trials began in 2017.
All of the major US carriers are working furiously to build out 5G networks, yet deployment across the entire country will probably take several years.
With its promise of vastly higher data transfer speeds, minimal latency, increased reliability of networks, and the ability to simultaneously connect enormous numbers of devices, there are serious implications for how this 5th Generation of wireless networking will affect technology, commerce, and daily life.
Higher speeds and reduced latency mean that 5G-connected devices will be able to communicate almost instantly. This implies that a real-time internet is on the way, with a real-time cloud, and the potential to create a new class of electronic devices that can exchange data and communicate quickly enough to make near-real-time decisions.
This would enable businesses and service providers to deliver a new kind of customer experience, with instantaneous access to products and platforms, and the capacity to stream live and immersive environments incorporating virtual reality (VR), augmented reality (AR), and artificial intelligence (AI).
With remote connectivity and intelligent automation powered by super-fast 5G networks, operations within various sectors – health care, manufacturing, transport, and others – will have the potential to tap into a new set of techniques and technologies.
Emerging technologies such as smart cities and infrastructure could benefit from the rapid response and massive connectivity of 5G in linking devices of the Internet of Things (IoT), and expanding their range of capabilities. And minimal latency may finally make it feasible for autonomous or self-driving vehicles to not only steer a safe path through city streets and highways but to orchestrate and coordinate their movement with each other – although it may take several years for the relevant technologies to develop to this level.
Another reason why 5G is important is from a commercial perspective. 5G and its related technologies are already generating a great deal of interest. Shipments of 5G-compatible smartphones are expected to be in the range of 175 million to 225 million this year. On the enterprise side, private 5G network services are expected to drive new business uses. 5G applications like real-time automation are predicted to generate $107 billion by 2030, and connected vehicles could produce a market worth $89 billion by 2030.
As with any developing technology, the future of 5G remains largely uncertain – a view amplified by the wide divergence in observed levels of performance from the 5G networks that are running to date.
In countries like China and South Korea, for example, while the 5G rollout has gone further than in the US, network speeds haven’t been markedly different from 4G networks – and in terms of running smartphone applications, there’s been little difference in performance.
In the US, 5G network performance has been somewhat better, with, for example, average download speeds of around 190Mbps, and a maximum speed of 690Mbps (Megabits per second) observed on Sprint’s 2.5GHz spectrum service.
Tens of billions of dollars are currently being invested for spectrum and infrastructure, with all national carriers well on their way to widespread 5G coverage. One focus for the future is likely to be the harnessing of fiber optics technology in creating converged networks for 5G. A converged fiber network of this type can simultaneously support wired and wireless mobile applications.
Early experience with 5G centers on its significant enhancement of the mobile experience. Looking ahead, 5G development is expected to focus on the true enablement of the Internet of Things (IoT), with billions of connections to the network – up to 5 billion networked devices in North America alone, by 2023. The business-to-business (B2B) opportunities enabled by 5G will represent a $700 billion market by 2030 for network operators, according to some estimates.
In addition to real-time automation, burgeoning 5G technology and the IoT are expected to give rise to applications including automated factories, smart buildings, drones, autonomous connected vehicles, public surveillance and security networks, and precision agriculture.
Greatly enhanced speed is one of the principal advantages of 5G. Data transfer rates in the region of 1-20Gbps (Gigabits per second) represent a step up from the previous generation, 4G LTE, of a factor from 10 to 100. At a practical level, these enhancements are enabling 5G consumers to download content more quickly.
Though improved speed is a major driver of the shift towards the new generation of mobile, among the benefits of 5G that’s likely to have the most significant impact is low latency. Lag or latency is the response time between the sending out of a wireless signal from one device and its reception in a usable form at its destination. For the quickest existing networks (4G LTE), this lag time can last around 20 milliseconds (20ms). With the fastest 5G networks, this latency can be reduced to as little as 1 millisecond – about the time it takes for a flash on a normal camera.
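To put these speed and latency figures side by side, here is a toy calculation. The rates and lag times are the representative values quoted above, not measurements, and the 2GB file size is an arbitrary example:

```python
# Illustrative comparison of 4G LTE vs. 5G download times and latency.
# Speeds and latencies are representative figures from the text, not measurements.

def download_seconds(file_megabytes: float, link_mbps: float) -> float:
    """Time to move a file of the given size over a link of the given speed.
    Note the units: file sizes are in megaBYTES, link rates in megaBITS/s."""
    file_megabits = file_megabytes * 8  # 8 bits per byte
    return file_megabits / link_mbps

movie_mb = 2_000  # a ~2 GB movie

for label, mbps, latency_ms in [("4G LTE", 20, 20), ("5G", 1_000, 1)]:
    t = download_seconds(movie_mb, mbps)
    print(f"{label}: {t:,.0f} s download, {latency_ms} ms round-trip lag")
```

At 4G LTE's 20Mbps ceiling, the 2GB movie takes over thirteen minutes; at a 1Gbps 5G rate it takes sixteen seconds.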
Besides speed and responsiveness, what will 5G bring to the table in terms of our digital ecosystem? Since 5G is designed to connect a far greater number of devices than a traditional cellular network, operations within the still-growing Internet of Things (IoT) will receive a boost. The 5G network is also more versatile, with the ability to adjust its performance in catering for devices with different needs.
What will 5G be used for? In the consumer market, a number of 5G-capable smartphones are already available, with more due to launch later this year, extending access to mobile telecommunications users. Buyers with access to a 5G network will be able to download videos, games, and music up to five times faster than 4G LTE and enjoy greater call and image clarity.
By enabling billions of new connections with speed, low latency, and security, 5G could make X2X (everything-to-everything) links a practical reality, acting as a critical enabler for Massive IoT, Connected Autonomous Vehicles (CAVs), remote critical control, and numerous other applications.
How does 5G work? To answer that, it’s useful to have an understanding of how the previous generations of mobile technology operated and how 5G builds on that.
Data travels along the radio portion of the electromagnetic spectrum, which includes frequencies between 3 kilohertz (kHz) and 300 gigahertz (GHz). 1G, the first generation of cellular network technology, operated on frequency bands between 850MHz (megahertz) and 1,900MHz. 2G and 3G expanded this to 850MHz–2,100MHz, and 4G occupied the range from roughly 600MHz to 2.5GHz.
At its quickest, 5G operates in the high-frequency bands of the millimeter wave (mmWave) spectrum – specifically, those in the 28GHz to 38GHz range. The mmWave spectrum covers frequencies between 30 and 300GHz, whose wavelengths are measured in millimeters – hence the name. These very high-frequency signals provide an enormous amount of bandwidth, enabling many more simultaneous connections to exist on the same network.
The 5G frequency spectrum covers three band variations: low, mid, and high. Low-band 5G offers the lowest speeds but has the greatest coverage and ability to move through solid barriers. High-band 5G (about 28GHz) has the shortest range, and may only be able to span a few blocks of any given area. With low-band 5G (which transmits at around 600MHz), a single tower could potentially serve customers for hundreds of square miles.
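The "millimeter wave" name follows directly from the physics: wavelength equals the speed of light divided by frequency. A quick check using that standard formula for the frequencies mentioned above:

```python
# Wavelength (in millimeters) for a given carrier frequency, using lambda = c / f.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1_000  # meters -> millimeters

for label, ghz in [("low-band 5G", 0.6), ("high-band 5G", 28), ("mmWave top", 300)]:
    print(f"{label} ({ghz} GHz): ~{wavelength_mm(ghz * 1e9):.1f} mm")
```

A 600MHz low-band signal has a wavelength of about half a meter, while 28GHz high-band signals come in at roughly 10.7mm – genuinely millimeter-scale, which is why they behave so differently around obstacles.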
How does 5G technology work? In many instances, it will function as an extension to existing cellular infrastructure. For the implementation of 4G services, communications companies typically built large cell phone towers to transmit signals across a geographical area. In rolling out 5G, service providers have been installing their equipment (known as small cells) on existing telephone lines and buildings. These cells typically have a range of around 820 feet (250 meters), and large numbers of them are required to maintain 5G coverage at higher speeds in densely populated areas.
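To see why so many small cells are needed, here is a back-of-the-envelope estimate. It idealizes each cell as an unobstructed circular coverage disc, so real deployments need more cells than this, not fewer:

```python
import math

def cells_per_km2(range_m: float) -> float:
    """Idealized small-cell count: 1 km^2 divided by one cell's coverage disc."""
    cell_area_km2 = math.pi * (range_m / 1_000) ** 2
    return 1 / cell_area_km2

print(f"~{cells_per_km2(250):.1f} small cells per km^2 at 250 m range")
print(f"~{cells_per_km2(2_000):.2f} macro towers per km^2 at 2 km range")
```

At a 250-meter range, blanketing a single square kilometer takes around five small cells, versus a small fraction of one conventional macro tower – which is why providers bolt small cells onto existing telephone poles and buildings rather than erecting new towers.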
In the US, Verizon has been building out capacity for its 5G Ultra Wideband service with small cells, extensive radio wave spectrum holdings, and fiber-optic cable. Small cell transmitters are roughly the size of a laptop computer and are strategically placed in locations where usage demands are highest. Fiber-optic cables may contain dozens to hundreds of optical fibers within a single casing and are used in transferring data signals from the small cells to the 5G core network at the speed of light.
Short answer: it isn’t – or at least, it hasn’t been proved dangerous to human health by any studies conducted through conventional science. Perhaps the greatest danger of 5G is the amount of hype and poorly informed rumor surrounding its potential or presumed health risks.
In particular, certain activists fear that radiofrequency or RF radiation from 5G wireless service could be dangerous to public health, and are demanding more research before carriers deploy the technology.
Elsewhere, false conspiracy theories are linking 5G wireless networks with the origins of the coronavirus (COVID-19). These include rumors that the rollout of 5G technology in Wuhan, China, caused the first outbreak, and that 5G radiation caused the SARS virus to mutate into COVID-19.
Neither claim is true – but belief in these theories has resulted in acts of extreme vandalism, such as the arson attacks against cellular infrastructure in the UK.
Action by national governments against 5G has largely been focusing on the role played by the Chinese company Huawei, which is one of the world’s principal providers of 5G network technology and expertise.
In January 2020, the US unveiled a high-profile case against Huawei, including allegations of fraud, stealing of trade secrets, and the skirting of US sanctions against Iran. Huawei’s alleged links with the Chinese government have also been cited as evidence that the smartphone and wireless technology company may be serving as a vehicle for China to spy on its rivals. These investigations, together with the ongoing trade disputes between the USA and China, have put pressure on America’s allies to take punitive action against Huawei.
The UK has banned the company from contributing core components to its rollout of 5G technology, cutting Huawei’s share in the country’s new network to 35%. In addition, companies in the UK are under orders to phase out elements of Huawei technology from their core infrastructure by 2027.
New Zealand and Australia have put total bans on Huawei technology into effect as they roll out their own 5G networks.
In Europe, Huawei is now supplying a third of telecommunication systems, so an outright ban on its technology would be problematic. Germany and France, which are considering bans, have said they will increase security measures to safeguard against backdoors into communication channels that may be part of Huawei’s technology. Denmark, Sweden, Belgium, and several other European countries are still on the fence about possible bans.
The “5G kills birds” theory can be traced back to a 2019 Facebook post by John Kuhles, who, according to the fact-checking site Snopes, “runs several anti-5G conspiracy websites and social media pages.” The post claimed that a recent mass die-off of European Starlings in the Netherlands was caused by a 5G frequency range antenna test.
Despite the fact that the 5G test actually took place months before the die-off event, other Facebook pages, and health blogs picked up the post. And after the release of the Indian sci-fi blockbuster 2.0 (which depicts scenes of electromagnetic radiation from cell towers wiping out bird populations), some Indian news organizations published stories on the movie, adding that “birds died in The Netherlands due to 5G.”
Fans of 2.0 then discovered a 2012 YouTube video in which University of Southern California professor Travis Longcore discusses his study revealing that communication towers kill 6.8 million birds annually. However, these bird deaths occurred due to the lights used on communication towers disorienting their flight path, not because of the electromagnetic radiation they give out.
This fact is backed up by hard science. “Radio wave emissions above 10MHz from radio transmission antennas (including cell telephone towers) are not known to harm birds,” as confirmed by Joe Kirschvink, a biophysicist at the California Institute of Technology who specializes in magnetics.
Kirschvink was also involved in a related study in 2014 – at the same time that a group of biologists in Germany found that low-level magnetic radiation, such as AM radio waves, could interfere with the ability of migratory birds to orient themselves using the Earth’s magnetic field.
Kirschvink issued a strong disclaimer in his own study, to the effect that: “Modern-day charlatans will undoubtedly seize on this study as an argument for banning the use of mobile phones, despite the different frequency bands involved.”
Conclusion: 5G is not dangerous to birds.
The radiation emitted by X-rays, gamma rays, and ultraviolet light (sunlight, UV lamps, etc.) is classified as ionizing radiation, which is strong enough to damage human cells and DNA. In fact, prolonged exposure to these sources has long been recognized as posing a cancer risk.
By contrast, the radio signals that mobile phones and cell towers transmit sit in lower frequency bands, which also include the radiation transmitted by AM radio, microwave ovens, and 5G network infrastructure. These are categorized as non-ionizing radiation, which isn’t able to directly damage human DNA. They are generally considered harmless, apart from their potential to heat up human tissue at very close range and with prolonged exposure.
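The ionizing/non-ionizing distinction can be made quantitative with the photon-energy formula E = h·f. The sketch below uses standard physical constants and treats roughly 10 electron-volts as a rough ionization threshold for biological molecules (the exact threshold varies by molecule):

```python
# Photon energy E = h * f, with Planck's constant expressed in eV·s.
PLANCK_EV = 4.135_667_696e-15  # eV·s

def photon_energy_ev(freq_hz: float) -> float:
    return PLANCK_EV * freq_hz

IONIZATION_THRESHOLD_EV = 10  # rough energy needed to ionize biological molecules

for label, freq in [("5G mmWave (28 GHz)", 28e9),
                    ("visible light (~500 THz)", 5e14),
                    ("X-rays (~1 EHz)", 1e18)]:
    e = photon_energy_ev(freq)
    print(f"{label}: {e:.2e} eV -> ionizing: {e >= IONIZATION_THRESHOLD_EV}")
```

A 28GHz mmWave photon carries around a ten-thousandth of an electron-volt – tens of thousands of times too weak to ionize anything – whereas an X-ray photon carries thousands of electron-volts.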
With the safety limits already in place, leading national authorities such as the Food and Drug Administration (FDA), the National Cancer Institute, and the Federal Communications Commission (FCC) maintain that there is little to no health risk from using mobile phones – be they 5G, or otherwise.
Some theories have claimed that 5G wireless may have made people more susceptible to the coronavirus by weakening their immune systems. But again, leading authorities such as the International Commission on Non-Ionizing Radiation Protection maintain that there is no credible scientific evidence for this. And the National Cancer Institute, which has an extensive web resource covering cell phone health research, observes that: “The most consistent health risk associated with cell phone use is distracted driving and vehicle accidents.”
The radiation given off by cell phones operating on 5G networks is at the low-energy end of the electromagnetic spectrum, making it much safer than high-energy ionizing radiation such as X-rays and gamma rays. Ionizing radiation has enough energy to ionize an atom or molecule and damage cell DNA potentially causing cancer, whereas the non-ionizing radiation associated with 5G radio waves (and all the previous generations of mobile technology) cannot do this.
While the radio frequency or RF radiation employed by 5G does not cause cancer by damaging the DNA in human cells, studies are still ongoing regarding the effects of non-ionizing radiation. For now, a number of leading authorities in the health and regulation sphere have made their assessments on the implications of 5G and wireless technologies.
The US Food and Drug Administration (FDA) is not only responsible for protecting public health through the control and supervision of food and medicine, but also in matters relating to electromagnetic radiation emitting devices. According to an FDA statement from 2018: “the current safety limits for cell phone radiofrequency energy exposure remain acceptable for protecting the public health.” In recent questioning concerning 5G, FDA officials have restated this view.
The US Environmental Protection Agency (EPA) and the US National Toxicology Program (NTP) have not formally classified RF radiation as cancer-causing.
Though the World Health Organization’s International Agency for Research on Cancer (IARC) classifies RF radiation as “possibly carcinogenic to humans” due to the findings of a possible link in at least one study between cell phone usage and a specific type of brain tumor in rats, the IARC considers the overall evidence to be “limited.” You should also note that the IARC puts coffee and talc-based body powders in the same “possibly carcinogenic” category.
5G isn’t a weapon in itself – but there are ways in which it could potentially be weaponized.
5G is intended not only to improve on the performance of mobile communication networks but also to link digital systems that need enormous amounts of data in order to work automatically. And though the 5G commercial network will be built and activated by private companies, military experts predict that 5G systems will play an essential role in the use of hypersonic weapons – including missiles bearing nuclear warheads that can travel at speeds greater than Mach 5 (five times the speed of sound).
By minimizing network latency or time lag, 5G technology could enable these weapons to change direction in a fraction of a second, to avoid interceptor missiles. 5G networks could also allow military strategists and command posts to gather, analyze, and transmit enormous quantities of data in a very short time. And 5G automated systems could orchestrate responses to data analysis or incoming attacks in real or near real-time.
An interesting sideline on 5G weaponization concerns a non-lethal crowd control device called the Active Denial System (ADS), which was developed by the US Department of Defense. It works by firing a high-powered beam of 95GHz waves at a target – if the target is human, anyone caught in the beam will feel as if their skin is burning. The burning sensation ceases once the target leaves the range of the beam.
The 95GHz waves that the ADS employs fall within the millimeter-wave (mmWave) frequency spectrum used by some of the higher-speed 5G networks. But the concentrations of energy required to generate an ADS beam are way in excess of anything produced by 5G cell towers or equipment.
The highest speed 5G networks operate on millimeter waves, which have extremely short wavelengths and extremely high frequencies. These waves are generally unable to penetrate solid objects such as buildings or trees, so 5G base stations and cell towers must be placed at shorter distances from each other (as “small cells”) to create an environment where radio frequency or RF signals can saturate an area.
This level of saturation with a form of radiation that’s still largely untested is causing concerns.
There are simple precautions you can take to reduce your exposure to 5G and other radio frequency emissions.
In 2008, the International Telecommunications Union (ITU) laid down its specification for the 4th Generation of mobile technology, or 4G. In terms of download speed, this stipulated a minimum specification for 4G of 100Mbps (Megabits per second). Note that there are 8 megabits (Mb) in a megabyte (MB). However, this stipulation was so high that even today, equipment manufacturers and network providers have been unable to meet it in full.
As a workaround, manufacturers and network providers came up with the concept of LTE, which is short for “Long-Term Evolution,” to describe a network standard that approaches the values specified for true 4G. 4G LTE can currently support interactive multimedia, voice, and video, with maximum speeds of up to 20Mbps.
For 5G, download speeds of around 1Gbps (Gigabit per second) have been observed, with a theoretical maximum of around 10Gbps at the current state of technology. 1Gbps equals 1,000Mbps. In practice, however, 5G speeds assume a range of values from around 300Mbps to 1Gbps, depending on the technology providing the network, the level of network congestion, the device you use, and other factors.
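Because the bits-versus-bytes distinction trips up these comparisons so often, here is the conversion spelled out (1 byte = 8 bits):

```python
# Link rates are quoted in megaBITS per second; file sizes in megaBYTES.
def mbps_to_megabytes_per_s(mbps: float) -> float:
    return mbps / 8  # 8 bits per byte

# 4G LTE at its 20 Mbps practical ceiling vs. an observed 1 Gbps 5G link:
for label, mbps in [("4G LTE (20 Mbps)", 20), ("5G (1 Gbps)", 1_000)]:
    print(f"{label}: {mbps_to_megabytes_per_s(mbps):,.1f} MB/s")
```

So a 20Mbps 4G LTE link delivers 2.5 megabytes of data per second, while a 1Gbps 5G link delivers 125.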
It’s in terms of latency (a measure of the time it takes for information sent from a device to be usable by the receiver) that 5G truly has the edge over its predecessor. With 4G networks, an average latency of around 50ms (milliseconds) is not uncommon, with the best performing networks reducing this to around 20ms. Note that it takes at least 10ms for an image seen by the human eye to be processed by the brain.
5G gives an average latency of around 10ms, reducing to around 2ms on the best performing networks. Its theoretical best is 1ms of latency – an almost real-time response.
In other considerations, 5G uses power more efficiently than 4G LTE. 5G networks also provide better connectivity, enabling potentially millions of devices in densely populated areas to connect concurrently without experiencing latency or speed issues.
Much has been said online and in the popular press about the damaging effects of 5G radiation – despite plenty of scientific evidence to the contrary. Some conspiracy theories also claim that 5G networking technology caused the SARS virus to mutate into COVID-19, and is responsible for the coronavirus pandemic, which is also untrue.
After seven years of research, the International Commission on Non-Ionizing Radiation Protection (ICNIRP) released 2020 guidelines on limiting exposure to electromagnetic fields. These guidelines include information about 5G.
According to the commission, the main effect that radio frequency or RF electromagnetic fields of the type associated with 5G and other mobile networks have on the human body is increased temperature of exposed tissue. Radio frequency exposure and increased temperature can be dangerous above a certain threshold – but that threshold is unlikely to be breached by normal cell phone usage under 5G.
The ICNIRP has also stated that there is no evidence that electromagnetic fields cause health effects such as cancer, electro-hypersensitivity, or infertility. The only two recognized health effects are nerve stimulation at ranges up to 10MHz, and heating from 100kHz.
A number of 5G providers are rolling out or preparing to offer fixed wireless services for home internet. For example, Boston-based Starry Internet uses 5G millimeter-wave bands on a fixed-wireless network to deliver internet to apartment complexes and some residential buildings in Los Angeles, Denver, New York City, Boston, and Washington DC. And Verizon currently offers its 5G Home internet service in select parts of Chicago, Sacramento, Los Angeles, Houston, and Indianapolis.
The problem is that 5G fixed wireless availability is extremely limited for now, and it’s hard to say whether 5G home internet will be better or faster than DSL, fiber, or cable. Service providers will need to build more robust networks if they’re to guarantee faster speeds and performance for 5G and the emerging WiFi 6 standards.
No – at least, not for now. Putting the necessary infrastructure and services in place for 5G is putting a considerable financial burden on the major network carriers. And for the moment, they’re passing that cost load onto their consumers.
Broadly speaking, the difference in average cost between 5G and 4G unlimited data plans varies from $5 to $72 per month.
Verizon, for example, offers 5G plans with prices on average $18 higher than 4G, but these services come with free roaming in Canada and Mexico, along with Apple Music. Sprint doesn’t charge a fee but requires subscribers to sign up for its $80 “Unlimited Premium” plan. Two of AT&T’s unlimited plans include 5G (their “Extra” plan costs $40 per line a month, and their “Elite” plan is $50 per line a month).
Using a technology called Fixed Wireless Access (FWA), Verizon’s 5G Home service offers consumers a 1Gbps (Gigabit per second) in-home connection via 5G. T-Mobile has announced that it plans to offer a 5G-based fixed wireless broadband service to more than half of US households by 2024. And AT&T has been weighing options for offering a 5G-based FWA service.
Due to the hardware requirements of the new generation of technology, you will need a 5G phone to access a 5G network.
However, 5G will not replace 4G LTE in the way that 4G superseded 3G when it launched. As 5G technology continues to develop by building on top of existing 4G networks, the two systems will be used in tandem for the foreseeable future.
In fact, users with 4G phones may see a boost in speed as 5G networks roll out, due to the effects of dynamic spectrum sharing and carrier aggregation. Dynamic spectrum sharing or DSS technology lets carriers use the same spectrum band for 4G and 5G. As more users transition from 4G to 5G, capacity on the 4G side will free up – and speeds for the remaining 4G users will improve. Carrier aggregation allows carriers to combine different 4G signals, resulting in a significant boost in performance and capacity.
With its minimal network latency and potential to support millions of devices at ultra-fast speeds, 5G is important for a number of reasons.
5G has the potential to greatly extend the reach of mobile broadband. For communications, the quality of voice calls made over any Voice over Internet Protocol (VoIP) service such as Skype, WhatsApp, or Zoom will be very much sharper and clearer than on the phone network. What’s more, Voice over 5G (Vo5G) services will enable improved video calling, telepresence, augmented reality (AR), and virtual reality (VR) applications.
Under the mMTC (massive Machine Type Communications) specification of 5G, networks should be able to accommodate up to a million connected devices per square kilometer. This kind of mass connectivity with minimal latency could, for example, make it possible for live audiences at stadiums to stream events via their phone or tablet to viewers at home.
The URLLC (Ultra-Reliable and Low Latency Communications) specification of 5G allows for reliable, instant communications between devices and the network. For autonomous or “driverless” vehicles, this allows vehicles to communicate their exact speed and position with each other in real or near real-time. This kind of communication would make it possible for autonomous vehicles to steer and react with acceptable margins of safety. It would also open the door for “platooning” – a scenario where groups of vehicles in a connected convoy will be able to drive with a one-second gap between each vehicle, automatically matching each other’s speed and braking patterns.
One of the biggest concerns about 5G is the potential cancer risk associated with the installation of many more antennas in urban areas, transmitting the radio frequency (RF) waves needed to create 5G networks.
However, studies conducted by respectable authorities including the US Food and Drug Administration (FDA), the Federal Communications Commission (FCC), the National Cancer Institute, and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) suggest that the levels of radiation emitted by 5G infrastructure and equipment will be well within safe limits.
A number of other concerns about the dangers of 5G have been voiced, notably via online conspiracy theories linking the new technology – erroneously – to the outbreak and spread of COVID-19, and the death of flocks of flying birds.
The greatest danger so far exhibited by 5G has been in the hype and reaction to these stories. Several acts of arson and vandalism have been perpetrated against 5G infrastructure in the UK and elsewhere, as a result.
South Korea, China, and the United States currently lead the world in building and deploying 5G technology. As of January 2020, commercial 5G networks have been deployed in 378 cities across 34 countries. These include:
Austria, Estonia, Finland, Germany, Hungary, Ireland, Italy, Latvia, Lithuania, Monaco, Poland, Romania, San Marino, Spain, Sweden, Switzerland, and the UK, in Europe.
Suriname, Trinidad & Tobago, the US, and Uruguay, in North and South America.
Bahrain, Kuwait, Lesotho, Oman, Qatar, Saudi Arabia, South Africa, and the United Arab Emirates (UAE), in the Middle East and Africa.
Australia, China, the Maldives, New Zealand, and South Korea, in Asia and Oceania.
Advancing on previous generations of mobile technology, 5G introduces three new aspects: bigger channels (which speed up data transfer), lower latency (which reduces lag, and makes networks more responsive), and the ability to connect a lot more devices at once (for sensors and smart devices).
Initially, 5G installations constitute a “non-standalone,” or NSA, network. They have this designation because all 5G devices in the US, for now, require the presence of a 4G network to make initial connections before trading up to 5G where it’s available. Later this year, 5G networks will become “standalone,” or SA, not needing 4G coverage to work.
5G can run on any frequency, so there are 5G variants on the low, middle, and high band frequency spectrum. 5G speeds are directly related to the width of the available channels, and how many channels are available. With current phones in the low and mid-band 5G spectrum, it’s possible to combine two 100MHz channels, for 200MHz usage, and to add three more 20MHz 4G channels on top of that. In high-band 5G, you can use up to eight 100MHz channels.
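The arithmetic behind the channel combinations described above is simple addition of channel widths (the specific combinations available vary by carrier, device, and regulator):

```python
# Aggregate bandwidth is the sum of the widths of the combined channels.
def aggregate_mhz(channels_mhz: list[float]) -> float:
    return sum(channels_mhz)

# Low/mid-band: two 100 MHz 5G channels plus three 20 MHz 4G channels.
low_mid = aggregate_mhz([100, 100, 20, 20, 20])
# High-band: up to eight 100 MHz channels.
high = aggregate_mhz([100] * 8)
print(f"low/mid-band aggregate: {low_mid:.0f} MHz")  # 260 MHz
print(f"high-band aggregate:    {high:.0f} MHz")     # 800 MHz
```

The 800MHz of aggregate channel width available in high-band 5G – roughly three times what low/mid-band combinations offer – is the main reason mmWave delivers such dramatically higher peak speeds.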
In early 5G installations piggybacking on 4G, networks need to erect frequency barriers between their 4G and 5G channels. But with the introduction of dynamic spectrum sharing or DSS technology, carriers can dynamically split channels between 4G and 5G, based on demand.
High-band 5G or millimeter-wave (mmWave) uses radio waves with a very short range, typically demanding about 800-foot distances between towers. mmWave gives very high speeds, but poor penetration of solid barriers, so large numbers of small base station antennas or “small cells” are required to maintain coverage.
Just like other cellular networks, 5G networks use a system of cell sites to divide their territory into sectors and send encoded data through radio waves. Each cell must be connected to a network backbone, either through a wired or wireless backhaul connection.
In many major cities, carriers installed these “small cells” to increase 4G capacity starting in 2017. For their 5G implementations, they only need to bolt an extra radio onto existing sites. Elsewhere, however, carriers will need to convince municipal authorities to let them add small cells to suburban neighborhoods.
If it lives up to its promise, 5G will provide the speed, low latency, and connectivity needed to facilitate a new generation of applications, services, and business opportunities.
For the growing Internet of Things, massive machine-to-machine communications under 5G will enable us to connect billions of devices without human intervention at a scale never seen before. This could revolutionize processes and applications in various sectors, including agriculture, manufacturing, health care, and business communications.
With ultra-reliable low latency communications, mission-critical and real-time operations such as the control of industrial robotics, vehicle-to-vehicle communications, and automated safety systems could become commonplace under 5G. This could also make remote medical care, procedures, and treatment a practical reality.
Enhanced mobile broadband under 5G could provide fixed wireless internet access for homes, outdoor broadcast applications without the need for external broadcast units, and greater connectivity for people on the move.
Combined with data analytics and artificial intelligence (AI), 5G, and IoT could enable organizations to gain insights into their operations in new ways. This could pave the way for innovation, cost savings, and improved customer experiences.
5G, meaning the 5th Generation of wireless technology, is the latest in a set of standards describing cellular networks. Each generation (1G, 2G, 3G, etc.) is based on a set of telephone network standards that describe the technological implementation of the system. 1G was analog cellular. 2G technologies, such as CDMA, GSM, and TDMA, were the first generation of digital cellular technologies. With 3G technologies like EVDO, HSPA, and UMTS, digital performance levels increased, bringing speeds from 200kbps (kilobits per second) to a few megabits per second (there are 8 megabits in a megabyte). With the arrival of 4G technologies such as WiMAX and LTE, speeds scaled up to hundreds of megabits, even approaching gigabit levels. 5G network speeds take this up another level, with download speeds of around 1Gbps (Gigabit per second) having been observed, and a theoretical maximum of around 10Gbps at the current state of technology.
The different subscribers require unique addresses to be able to establish communication connections. The structure of these addresses is defined in SIP. An SIP address consists of a user name and a domain name, similar to an e-mail address. An example of this type of address is "firstname.lastname@example.org". The first part of the address is the user name or, often, the telephone number. The domain specifies the respective Session Initiation Protocol network.
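The user@domain structure can be pulled apart with a few lines of Python. This is a minimal sketch: real SIP URIs can also carry a "sip:" scheme, a port, and parameters, which this simple split ignores.

```python
# Minimal sketch: split a SIP address into its user part and domain.
# Real SIP URIs ("sip:user@domain;params") can also carry parameters and
# a port; this only handles the simple user@domain form.

def parse_sip_address(address: str) -> tuple:
    if address.startswith("sip:"):
        address = address[len("sip:"):]          # drop the URI scheme
    user, _, domain = address.partition("@")     # split at the first '@'
    return user, domain

user, domain = parse_sip_address("sip:firstname.lastname@example.org")
print(user)    # the user name or telephone number
print(domain)  # identifies the SIP network responsible for the user
```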
In Voice-over-IP telephony, Session Initiation Protocol merely controls establishing and closing the connection as well as the connection modalities. The actual speech information is transmitted directly between the subscribers using a different protocol. VoIP typically uses the Realtime Transport Protocol (RTP, RFC 3550) for this purpose. SIP uses SDP (Session Description Protocol) to define details of the actual media transmission between the subscribers via RTP. The connection is established with an INVITE message, acknowledged with an OK message. The subscribers must first register their SIP addresses with a registrar server if they do not recognise each other.
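The text form of an INVITE request can be sketched as below. This is heavily simplified: real SIP requests carry more mandatory headers (Via, Max-Forwards, CSeq, Contact) plus an SDP body describing the RTP media session, and all names and values here are illustrative.

```python
# Sketch of the text form of a SIP INVITE request. Real requests carry more
# mandatory headers (Via, Max-Forwards, CSeq, Contact, ...) and an SDP body
# describing the RTP media session; the values below are illustrative.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "Content-Type: application/sdp",
        "",  # blank line separates headers from the (omitted) SDP body
    ]
    return "\r\n".join(lines)

msg = build_invite("alice@example.org", "bob@example.org", "a84b4c76e66710")
print(msg.splitlines()[0])  # request line: INVITE sip:bob@example.org SIP/2.0
```

The callee's user agent would answer such a request with an OK response, after which the media flows directly over RTP.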
Key elements of the SIP system architecture are the User Agent, the Registrar Server and the Proxy Server. The User Agent is a terminal, such as a telephone, computer or mobile, attempting to communicate via Voice-over-IP, and which has an SIP address. If they recognise each other, User Agents can communicate directly. The Registrar Server controls locating the subscriber. For this purpose the User Agents regularly report to the registrar using their SIP and IP addresses. Using this information, the server is able to address connection requests. Finally, the Proxy Server can act as both client and server, forwarding requests on behalf of a client. Other important elements in the SIP architecture are Redirect Servers, Session Border Controllers and Gateways.
Cloud deployment enables end-users to access SaaS, PaaS, or IaaS applications on demand. A cloud deployment model describes the cloud computing architecture on which you will implement the cloud solution.
Cloud computing has significantly changed how businesses store and work with data. With the help of AI, organizations can now work faster, more efficiently, and on a larger scale than ever before.
Artificial Intelligence (AI) in cloud computing allows for automated functions such as data classification and predictive analytics. There are several advantages to using AI in cloud deployments, and this article will explore them.
Enhanced IT Infrastructure
The restructuring of IT infrastructure is one of the most apparent advantages of cloud computing. AI allows for automating routine tasks for maintaining, updating, and upgrading IT infrastructure within the cloud. It can also help expedite digital transformation by automating the specific tasks the transformation requires.
Uninterrupted Data Access
AI uses data to improve performance and make smarter decisions. It provides firms with tools to manage and improve their business processes in a data-driven, ever-changing world.
Tying AI to the hybrid cloud helps organizations manage and govern data. Moreover, cloud computing AI gives organizations data insights, scalability, and the flexibility to create industry standards and modernize by increasing their assets.
Automated Cloud Security
Utilizing AI in the cloud is also beneficial for cloud security. AI systems can evaluate information about cloud infrastructure and promptly spot anomalies, either alerting humans or responding automatically. These capabilities can significantly help limit unauthorized access to cloud systems.
Improved Efficiency and Productivity
The adoption of cloud computing has eliminated the need for IT staff to perform hardware configuration, repairs, and other functions in on-premise structures. By switching to cloud computing, IT staff can focus on more important activities in an organization's daily operations.
Just as cloud computing removed repetitive hardware tasks from internal IT staff, utilizing AI in cloud computing removes repetitive tasks of its own: AI can automate routine maintenance work, freeing IT teams to focus on more productive activities.
Reduction of Costs
Arguably the most important benefit of utilizing AI with cloud deployments is the reduction of cloud spending.
One goal of combining artificial intelligence and cloud computing is to reduce expenses. The economics of cloud computing already lower conventional infrastructure costs by a large amount, and AI can reduce IT costs even further.
AI can help organizations better monitor and control their cloud spending. Through predictive analytics, AI can provide insights to an organization on expected cloud spending in a given month based on past usage. Based on these insights, organizations can look to reduce their cloud spending by switching to a cloud plan that better fits their usage habits.
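As a toy illustration of that kind of predictive analytics, the sketch below fits a straight line (least squares) to past monthly bills and extrapolates one month ahead. Real AI-driven cost tools use far richer models, and the spend figures here are invented.

```python
# Toy illustration of "predictive analytics" for cloud spend: fit a straight
# line (least squares) to past monthly bills and extrapolate one month ahead.
# Real AI-driven cost tools use far richer models; the figures are invented.
from statistics import mean

months = [1, 2, 3, 4, 5, 6]
spend_usd = [1000, 1100, 1180, 1270, 1390, 1480]   # hypothetical monthly bills

mx, my = mean(months), mean(spend_usd)
slope = (sum((x - mx) * (y - my) for x, y in zip(months, spend_usd))
         / sum((x - mx) ** 2 for x in months))
intercept = my - slope * mx

next_month = 7
forecast = slope * next_month + intercept
print(f"Projected spend for month {next_month}: ${forecast:,.0f}")
```

A forecast like this is exactly the kind of insight an organization can compare against its current cloud plan to decide whether a cheaper tier would fit its usage.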
Cloud backup services are now integrated into nearly every application, and Artificial Intelligence makes it possible to manage and store huge amounts of data in the cloud.
AI and the Cloud are changing the face of businesses. Thus, if a company wants to remain competitive, it must accept these new technologies. In the end, cloud-based solutions, such as Artificial Intelligence, can help an organization grow while remaining profitable.
Only cloud solutions can offer the agility, flexibility, and efficiency needed to be successful in the digital age. At Datacenters.com, our goal is to make your cloud journey an easy one. Our cloud control center utilizes AI to scale and manage multicloud deployments. The AI learns and adapts to the demands on cloud services to automatically reduce runtime costs. Schedule a demo today or spin up servers on demand.
The answer is very simple: Loosely coupled systems should include services that are independent of each other. That is, anyway, the gold standard.
However, loosely coupled systems are not easy to achieve. The concept is far from new, and adoption of loosely coupled systems is currently well underway. But implementation could take some time, especially at the wide scale everyone wants.
So what exactly is a loosely coupled system, and what’s stopping everyone from implementing these systems from the get-go? Let’s find out:
What Is A Loosely Coupled System:
A loosely coupled system is one where components of a system are independent of each other. So if one component breaks down, it does not affect the rest of the system. This will prevent technical issues from affecting the entire system. The problem area can be isolated and worked upon without affecting the other services.
Loosely coupled systems are intended to be more stable and easy to work with since each service would be contained in itself and not affect the bigger structure that it is intended to run. Each component can be maintained and developed by a different team, so individual focus on the components will also increase.
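The idea can be sketched in a few lines of Python. Here an order service depends only on a "send" callable, not on any concrete notifier class, so each piece can be developed, swapped, or tested by a different team. All class and function names are invented for illustration.

```python
# Sketch of loose coupling: OrderService depends only on a "send" callable,
# not on any concrete notifier class, so each piece can be developed,
# replaced, or tested independently. All names here are invented.
from typing import Callable

class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"[email] {message}")

class SmsNotifier:
    def send(self, message: str) -> None:
        print(f"[sms] {message}")

class OrderService:
    # Tight coupling would hard-code: self.notifier = EmailNotifier()
    def __init__(self, notify: Callable[[str], None]):
        self._notify = notify

    def place_order(self, item: str) -> str:
        self._notify(f"order placed: {item}")
        return item

# Swap notifiers without touching OrderService at all.
service = OrderService(SmsNotifier().send)
service.place_order("widget")
```

If the notifier breaks, only the notifier needs fixing; the order-placing component keeps working against any replacement that accepts a message string.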
Benefits Of A Loosely Coupled System:
Most organizations are now leaning toward loose coupling in their structures. A loosely coupled system has many benefits that currently make it the most sought-after system. Some of these benefits include the following:
It Can Make The System More Efficient
Loosely coupled systems contain components that can easily be reconfigured or modified without involving or compromising the core operations of the system. This means that problems can be resolved, and components can be updated in isolation, improving the system’s efficiency.
It Can Add Flexibility To The System

Loose components in a system add flexibility, which ensures that any condition that needs addressing can be responded to in various ways. Therefore, a loosely coupled system can use multiple types of responses, making the system more solution-oriented.
Makes The System Adaptable
Since multiple types of solutions can be implemented on components, the system can become adaptable to all sorts of conditions.
Increased Agility In The System
Since loosely coupled systems can allow developers to focus on individual components without worrying about the integrity of the core structure, they can increase the system’s agility.
These systems do not have to wait for implementations of different services before the whole system can come together, making for the stable deployment of the system.
It Makes The System More Innovative

Loosely coupled systems are more innovative and evolvable, since there is more room for experimentation, changes, and unique solutions to be applied to each component when necessary. Therefore, these systems can help shape environments and change accordingly as the need arises.
Challenges Faced By Loosely Coupled Systems:
Perhaps, the biggest challenge that loosely coupled systems face in the tech world today is that the implementation of services that are completely independent of each other yet work together in harmony is difficult to achieve. Developers need to work up unique solutions for each system to introduce loose-coupling.
Many systems don’t really need loose coupling. Being tightly coupled is not a disadvantage to them at all; in fact, these systems thrive on their interaction with the different components. So thinking of tight coupling as an inferior solution can also hinder progress.
In a loosely coupled system, when components need to interact, developers have to go out of their way to make the interaction happen. Therefore, making components completely independent of each other could be a disadvantage in some cases.
Another possible challenge many loosely coupled systems face is inconsistency. When entirely different teams are running each component of a system, any communication barrier can deliver inconsistent results to the client, costing the organization as a whole.
Are Tightly Coupled Systems Outdated?
Tightly coupled systems, on the other hand, are becoming outdated.
A tightly coupled system is one where each component in the system is completely dependent on the other components. If one service experiences failure, the entire system is compromised. Naturally, flexibility, growth, and innovation are much more difficult in these systems than in a loosely coupled system.
However, tight coupling in structures can be a good thing as well.
Tightly coupled structures can process large amounts of data quickly, and everything does not need to be fed to an individual system each time. Besides, tightly coupled systems are also easier to deploy and implement. Therefore, if a system is not too complex, it can still be used in many instances.
Since tightly coupled systems are easier to implement, many argue that there is no need to compromise the integrity of entire structures just to introduce loose coupling, at least not at present. So tight-coupling is not a preferred mechanism but far from being completely outdated.
There is a middle way between tightly coupled and loosely coupled systems, called mixed structures. These structures implement components of both tightly coupled and loosely coupled systems. So, the system could present as tightly coupled overall but mainly use loose-coupling principles to implement functions.
But even mixed structures have to use implementations of loose coupling. Therefore, loosely coupled systems are the better choice and should be more widely implemented. So even if complete independence of services in a system can pose some issues at present, these problems can be solved with better implementation and continuous innovation.
Conclusion to How Loose Should Loosely Coupled Be
Loosely coupled systems should include components that do not depend on each other. Yes, there is a concern regarding inconsistency, and implementation of these systems is difficult even now. However, with better implementation and evolution of the systems, loosely coupled systems should take over organizations in the near future.
For now, mixed structures and even tight coupling in some areas are doing well enough. If loose coupling cannot significantly improve a system, its implementation will only be a waste of resources and compromise the integrity of an established structure.
How loose should loosely coupled be? Completely loose and entirely independent.
Contact us for solutions and services related to loosely coupled systems. Further blogs can be found within this How Loose Should Loosely Coupled Be category.
What are VM/Host affinity rules?
These VM/Host rules are configured on a cluster object in the vCenter inventory. Essentially, what the rules do is associate one or more virtual machines with one or more hosts. On power-on, the VM should only be started on these hosts. On a failure, the VM is restarted on another host in the same VM/Host affinity group.
“Must” rules and “Should” rules
Next, let's talk about the difference between “must” rules and “should” rules in the context of VM/Host affinity rules. If we set a “must” rule, this will always tie a VM to one or more hosts. If all the hosts in that group fail, or if there are not enough resources available on the hosts in the group, the VM cannot be started. The “must” rule means it cannot run on a host that is not in the VM/Host affinity group. If we set a “should” rule, this rule will allow the VM to be started on hosts that are not in the VM/Host group, but only when there are no hosts/resources available in the VM/Host affinity group that the VM is associated with.
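The difference between the two rule types can be sketched as a toy placement function. To be clear, this is not VMware's actual DRS implementation, just a model of the decision logic described above, with invented host names.

```python
# Toy model of the placement logic described above -- not VMware's actual
# implementation. A "must" rule never places a VM outside its host group; a
# "should" rule falls back to any available host when the group has none.
from typing import Optional

def pick_host(rule: str, group_hosts: set, available_hosts: set) -> Optional[str]:
    preferred = group_hosts & available_hosts
    if preferred:
        return sorted(preferred)[0]            # prefer hosts in the affinity group
    if rule == "should":                       # fall back outside the group
        fallback = available_hosts - group_hosts
        return sorted(fallback)[0] if fallback else None
    return None                                # "must": VM cannot be started

group = {"esx-site-a-1", "esx-site-a-2"}
print(pick_host("must", group, {"esx-site-b-1"}))    # None: site A is down
print(pick_host("should", group, {"esx-site-b-1"}))  # esx-site-b-1: restarted cross-site
```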
Which type of rule for VSAN stretched cluster?
The recommendation when setting VM/Host affinity rules in a VSAN stretched cluster is to use “should” rules and not to use “must” rules. The guidance is to create two VM/Host affinity groups: one made up of VMs and hosts from one site, and the other made up of VMs and hosts from the other site. If we use a “should” rule, should a VM need to be restarted, the first attempt is always made to start the VM on the hosts that are part of the same VM/Host affinity group. However, if there is a lack of resources, or if there is a catastrophic site failure, a “should” rule will allow the VM to be restarted on the other site, in other words, on hosts that are not part of the same VM/Host affinity group. This is important behaviour when there is a complete site failure. The screenshot below shows where to set the “should” part of the VM/Host rule.
Now let’s consider DRS in a VSAN stretched cluster. The first DRS consideration relates to VM/Host affinity rules: DRS is needed for VM/Host affinity rules to work. If DRS is not enabled, the “should” rules are ignored. So if you want to use VM/Host affinity “should” rules, you will need DRS. DRS can be set up in fully automated or partially automated mode. Of course, you will need to make sure you have a vSphere edition which supports DRS, so this is also a consideration.
VM placement with VM/Host affinity rules and DRS
Next, I want to highlight something in the workflow that may not be obvious. In order to be part of an affinity rule, the VM must be created in advance. So the workflow would be to deploy your VMs, create the host groups, and then add the VMs and hosts to the VM/Host groups. You can now power on the VMs. There is no way at the current time to add the VMs to a VM/Host affinity group during the deployment. This is something we are working to improve.
This then leads to the predicament of whether or not the VM is deployed to the correct host. Not to worry, DRS will take care of that. If it is in fully automated mode, the VM will be vMotion’ed to the correct site when you attempt to power it on. If DRS is enabled in partially automated mode, you will not be able to power on the VM if it is located on the site to which it does not have affinity. You will need to manually migrate the VM to the correct site before it powers on.
If DRS is not enabled, the “should” rule is ignored and the VM can be run on any host on any site.
DRS and full site failure in VSAN stretched cluster
One final consideration is what to do when there is a full site failure, and all VMs have been restarted on the remaining site. Now the failed site recovers, and all the hosts are rebooted and online. However, the components are rebuilding/resyncing. At this point, we may want to consider waiting before bringing any virtual machines back online until the resynchronization is complete. The reason for this is read locality. If the VMs were restarted on the recovering site, the components on the recovering site are currently not available until the resync/rebuild completes. Therefore the VMs have to do I/O over the inter-site link. This will impact the performance of the VMs (once the components are synced, the VMs will stop doing I/O to the remote site and use the local copy). It is for this reason, on a full site failure, that one should consider waiting for the components to fully resync before bringing the VMs back on the recovered site. If you are using DRS in fully automated mode, you should consider placing it in partially automated mode when a full site failure has occurred, to avoid VMs moving back while resync is still in progress. When the failure is resolved, and all components are resynced, place it back into fully automated mode.
What does "Authenticate Account" mean?
In computing, authentication is the process of verifying the identity of a person or device.
A common example is entering a username and password when you log in to a website.
Entering the correct login information lets the website know 1) who you are and 2) that it is really you accessing the website.
For additional information, please contact your local ACI Learning Hub representative or reach out to us in the help chats at the bottom of the page or via https://help.acilearning.com/. | <urn:uuid:0861eb68-7609-48a1-ab72-43a59fdeda6a> | CC-MAIN-2022-40 | https://help.acilearning.com/en/articles/5413299-what-does-authenticate-account-mean | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00470.warc.gz | en | 0.889788 | 116 | 3.28125 | 3 |
A New Flaw Was Discovered in the Microsoft Windows Platform Binary Table (WPBT)
The Vulnerability Could Allow Hackers to Install Rootkits on Windows Devices.
The flaw discovered by the researchers at Eclypsium in the Microsoft Windows Platform Binary Table (WPBT) can be exploited in attacks meant to install rootkits on all Windows computers that were shipped since 2012.
Rootkits are malicious computer programs that penetrate a machine in order to gain administrator or system-level rights.
Despite their obviously secretive behavior, rootkits are primarily designed to bypass user authentication measures before a harmful payload arrives; they often work in tandem with trojans or other types of malware.
What Is WPBT?
The Windows Platform Binary Table is a fixed firmware ACPI (Advanced Configuration and Power Interface) table.
It was introduced by Microsoft in Windows 8 in order to allow its vendors to execute programs every time a device boots.
The mechanism is very important as it can enable OEMs to force install critical software that can’t be bundled with Windows installation media.
Unfortunately, the mechanism can allow attackers to deploy malicious tools.
Because this feature provides the ability to persistently execute system software in the context of Windows, it becomes critical that WPBT-based solutions are as secure as possible and do not expose Windows users to exploitable conditions. In particular, WPBT solutions must not include malware (i.e., malicious software or unwanted software installed without adequate user consent).
All Computers Running Windows 8 or Later in Danger
The attacks can use various techniques that allow writing to memory where ACPI tables (including WPBT) are located or by using a malicious bootloader, as the BootHole vulnerability can be easily abused.
The Eclypsium research team has identified a weakness in Microsoft’s WPBT capability that can allow an attacker to run malicious code with kernel privileges when a device boots up. This weakness can be potentially exploited via multiple vectors (e.g. physical access, remote, and supply chain) and by multiple techniques (e.g. malicious bootloader, DMA, etc).
The Researchers Informed Microsoft About the Vulnerability
Microsoft recommended the use of a Windows Defender Application Control policy that allows users to control what binaries can run on a Windows device.
Generally, it is recommended that customers, who are able to implement application control using WDAC rather than AppLocker, do so. WDAC is undergoing continual improvements, and will be getting added support from Microsoft management platforms. Although AppLocker will continue to receive security fixes, it will not undergo new feature improvements.
AppLocker can also be deployed as a complement to WDAC to add the user or group-specific rules for shared device scenarios, where it is important to prevent some users from running specific apps. As a best practice, you should enforce WDAC at the most restrictive level possible for your organization, and then you can use AppLocker to further fine-tune the restrictions.
If you have a system that is running any older Windows releases, you can use the AppLocker policies to control what apps are allowed to run on a Windows client.
Security professionals need to identify, verify and fortify the firmware used in their Windows systems. Organizations will need to consider these vectors, and employ a layered approach to security to ensure that all available fixes are applied and identify any potential compromises to devices. | <urn:uuid:61c062e2-5c02-463c-87e6-1d16fd5cd1ee> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/a-new-flaw-was-discovered-in-the-microsoft-windows-platform-binary-table-wpbt/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00670.warc.gz | en | 0.912413 | 701 | 2.71875 | 3 |
Prevent Phishing In The Cloud: Tips For Your Organization
The term “phishing” describes a type of cyberattack in which criminals try to trick people into providing sensitive information, such as login credentials or financial data. Phishing attacks can be very difficult to detect, because they often look like legitimate communications from trusted sources.
Phishing is a serious threat to organizations of all sizes, but it can be especially dangerous for companies that use cloud-based services. That’s because phishing attacks can exploit vulnerabilities in the way that these services are accessed and used.
Here are some tips to help your organization prevent phishing attacks in the cloud:
- Be aware of the risks.
Make sure that everyone in your organization is aware of the dangers of phishing attacks. Educate employees about the signs of a phishing email, such as misspellings, unexpected attachments, and unusual requests for personal information.
- Use strong authentication.
When possible, use two-factor authentication or other forms of strong authentication to protect sensitive data and systems. This will make it more difficult for attackers to gain access even if they are able to steal login credentials.
- Keep your software up to date.
Ensure that all software applications used by your organization are kept up to date with the latest security patches. This includes not only the operating system but also any browser plugins or extensions that are used.
- Monitor user activity.
Monitor user activity for signs of unusual or suspicious behavior. This can help you to detect a possible phishing attack in progress and take steps to stop it.
- Use a reputable cloud service provider.
Choose a cloud service provider that has a good reputation for security. Review the security measures that are in place to protect your data and make sure that they meet your organization’s needs.
- Try using the Gophish phishing simulator in the cloud.
Gophish is an open-source phishing toolkit designed for businesses and penetration testers. It makes it easy to create and track phishing campaigns against your employees.
- Use a security solution that includes anti-phishing protection.
There are many different security solutions on the market that can help to protect your organization from phishing attacks. Choose one that includes anti-phishing protection and make sure it is properly configured for your environment.
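To make the strong-authentication tip above concrete, here is a minimal sketch of the one-time-code half of two-factor authentication (TOTP, RFC 6238) using only the Python standard library. Production systems should rely on a vetted authentication library rather than hand-rolled code like this.

```python
# Minimal sketch of the one-time-code half of two-factor authentication
# (TOTP, RFC 6238), using only the standard library. Production systems
# should use a vetted library rather than hand-rolled crypto.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    counter = int(timestamp // step)                    # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published SHA-1 test secret; at t=59 the 8-digit code is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))
print(totp(b"12345678901234567890", time.time()))       # a current 6-digit code
```

Because the code changes every 30 seconds, a phished password alone is not enough to log in, which is exactly why strong authentication blunts credential theft.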
Following these tips can help to reduce the risk of a successful phishing attack against your organization. However, it’s important to remember that no security measure is perfect. Even the most well-prepared organizations can fall victim to phishing attacks, so it’s important to have a plan in place for how to respond if one does occur. | <urn:uuid:706a01cc-aa8a-4526-ae1d-c223d64dbf2d> | CC-MAIN-2022-40 | https://hailbytes.com/prevent-phishing-in-the-cloud-tips-for-your-organization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00670.warc.gz | en | 0.938747 | 555 | 2.515625 | 3 |
What is Shellshock?
The ‘Bash bug’, most commonly known as Shellshock, is a vulnerability in the command-line shell (Bash) that is used within many Mac, Linux and UNIX operating systems, and it can leave websites and devices powered by these operating systems open to attack.
How does it work?
Bash supports the export of not just shell variables but also shell functions to other bash instances, via the process environment to (indirect) child processes. The vulnerability occurs because bash does not stop after processing the function definition; it continues to parse and execute shell commands following the function definition.
An environment variable with an arbitrary name can be used as a carrier for a malicious function definition containing trailing commands.
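The classic Shellshock probe puts exactly such a function definition, with a trailing command, into an environment variable and then starts bash. The sketch below drives that test from Python; it is a harmless self-check (the trailing command is just an echo), but you should only run probes like this against systems you own. On machines without bash it simply reports that.

```python
# A sketch of the classic Shellshock probe: put a function definition with a
# trailing command into an environment variable, then start bash. A vulnerable
# bash executes the trailing command while importing the "function"; a patched
# bash does not. Run probes like this only against systems you own.
import shutil
import subprocess

def check_shellshock() -> str:
    bash = shutil.which("bash")
    if bash is None:
        return "no bash"
    env = {"x": "() { :;}; echo vulnerable"}   # trailing command after the function body
    out = subprocess.run([bash, "-c", "echo test"],
                         env=env, capture_output=True, text=True).stdout
    return "vulnerable" if "vulnerable" in out else "patched"

print(check_shellshock())   # any bash updated since late 2014 should print "patched"
```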
How does it affect me?
The security flaw itself resides on many Linux, UNIX and Mac operating systems. This leaves devices powered by these systems vulnerable, allowing malicious code to be uploaded onto your computer. In essence, this means hackers could quite easily take control of your machine.
I use Windows – is this vulnerable to Shellshock?
Windows systems usually do not come with GNU Bash installed (Bash is the vulnerable component).
However, Windows is still vulnerable as Bash can still be installed with other programs, therefore users of Windows will still need to remain vigilant to this vulnerability.
Which versions of Bash are affected?
Every version of Bash (up to and including version 4.3) is vulnerable to Shellshock; we are therefore looking at 25 years of Bash installs.
Is this in relation to Heartbleed?
No. Heartbleed allowed attackers to steal information about you. Shellshock, on the other hand, is far more serious, as it can be used to take over a host computer and gain control. So even if you're just reading an article over the internet, Shellshock attackers could make you vulnerable.
So, is Shellshock actually bigger than Heartbleed?
The impact of Shellshock has been huge and tremors have been felt across the technology world. This new vulnerability in the Bash shell has caused shockwaves, leaving no software safe on Mac, Linux and UNIX systems, at a minimum.
As identified by The Register, security experts have said that as with Heartbleed, Shellshock is a pervasive flaw that could potentially take years to fix properly, with the onus being on webmasters and system admins rather than the end user.
What can be done to resolve this?
Security experts across the world are rushing to find a fix for this.
For users who use a version of Ubuntu, a patch is available from USN-2363-1 and for those users of Debian, the patch is available from DSA-3035-1.
If your operating system releases a new software patch or update, it is important to install this, as this will reduce the chance of you becoming a victim to Shellshock.
What’s the damage?
So far, thousands of servers have been compromised via Shellshock, and some have been used to bombard web firms with irrelevant data (in effect, denial-of-service attacks).
Which companies / organisations have come out and reacted to Shellshock?
Companies that have reacted to Shellshock by releasing fixes and patches include Apple, Amazon and Google.
Additionally, the US Government have decided this flaw is serious, giving it a 10 out of 10 for severity.
What about Access Manager?
Access Manager uses Bash for Ubuntu and Debian; they have both been updated and will be available as an operating system update. For further information, please click here.
What you see as the address at the top of your web browser, like "www.google.com" or "facebook.com" is not the actual address. Instead, the real address is a number. In much the same way a phonebook (or contact list) translates a person's name to their phone number, there is a similar system that translates Internet names to Internet addresses.
There are only 4 billion Internet addresses. An address is a number between 0 and 4,294,967,295. In binary, it's 32 bits in size, which comes out to roughly 4 billion combinations.
For no good reason, early Internet pioneers split up that 32-bit number into four 8-bit numbers, each of which has 256 combinations (256 × 256 × 256 × 256 = 4,294,967,296). That is why we write Internet addresses like "192.168.38.28" or "10.0.0.1".
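The dotted-quad notation really is just a 32-bit number split into four bytes, and the standard library will do the packing and unpacking for us:

```python
# The dotted-quad notation is just a 32-bit number split into four 8-bit
# bytes. The standard library does the packing/unpacking for us.
import socket
import struct

def to_int(dotted: str) -> int:
    return struct.unpack(">I", socket.inet_aton(dotted))[0]

def to_dotted(number: int) -> str:
    return socket.inet_ntoa(struct.pack(">I", number))

print(to_int("192.168.38.28"))    # 3232245276
print(to_dotted(3232245276))      # 192.168.38.28
print(to_int("255.255.255.255"))  # 4294967295, the largest 32-bit address
```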
Yes, as you astutely point out, there are many more than 4 billion devices on the Internet (the number is closer to around 10 billion). What happens is that we use address sharing (also called "network address translation"), so that many devices can share a single Internet address. Each device in your home (laptop, iPad, Nest thermostat, WiFi-enabled Barbie, etc.) has a unique address that only works in the home. When the packets go through your home router to the Internet, they get changed so that they all come from the same Internet address.
This sharing only works when the device is what's called a "client", which consumes stuff on the Internet (like watching video, reading webpages), but which doesn't provide anything to the Internet. Your iPad reaches out to the Internet, but in general nothing on the Internet is trying to reach your iPad. Sure, I can make a Facetime video call to your iPad, but that's because both of us are clients of Apple's corporate computers.
The opposite of a client is a "server". These are the computers that provide things to the Internet. These are the things you are trying to reach. There are web server, email servers, chat servers, and so. When you hear about Apple or Facebook building a huge "data center" somewhere, it's just a big building full of servers.
A single computer can provide many services. They are distinguished by a number between 0 and 65,535 (a 16-bit number). Different services tend to run on "well known" ports. The well-known port for encrypted web servers is 443 (no, there's no good reason that number was chosen out of the 65,536 possibilities; it's not otherwise meaningful). Non-encrypted web servers are at port 80, by the way, but all servers by now should be encrypted.
Web links like "https://www.google.com:443" must contain the port number. However, if you are using the default, then you can omit it, so "https://www.google.com" is just fine. However, any other port must be specified, such as "https://www.robertgraham.com:3774/some/secret.pdf". When you visit such links within your browser, it'll translate the name into an Internet address, then send packets to the combination address:port.
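That name-plus-port behavior is easy to see with Python's standard-library URL parser (the default-port table below covers just the two schemes discussed above):

```python
from urllib.parse import urlparse

DEFAULT_PORTS = {"http": 80, "https": 443}

def address_port(url: str):
    """Return the (hostname, port) pair the browser will actually use."""
    parts = urlparse(url)
    # An explicit ":3774"-style port wins; otherwise fall back to the default.
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return parts.hostname, port

print(address_port("https://www.google.com"))
# ('www.google.com', 443)
print(address_port("https://www.robertgraham.com:3774/some/secret.pdf"))
# ('www.robertgraham.com', 3774)
```

The browser then resolves the hostname to an Internet address and sends packets to that address:port combination.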
Normally, when you look for things on the web, you use a search engine like Google to find things. Google works by "spidering" the Internet, reading pages, then following links to other pages. After I post this blog post, Google is going to add "https://www.robertgraham.com:3774/some/secret.pdf" to its index and try to read that webpage. It doesn't exist, but Google will think it does, because it reads this page and follows the link.
There is an idea called the "Dark Internet" which consists of everything Google can't find. Google finds only web pages. It doesn't find all the other services on the Internet. It doesn't find anything not already linked somewhere on the web.
And that's where my program "masscan" comes into play. It searches for "Dark Internet" services that aren't findable in Google. It does this by sending a packet to every machine on the Internet.
In other words, if I wanted to find every (encrypted) web server on the Internet, I would blast out 4 billion packets, one to each address at port 443. I would then listen for reply packets. Each valid acknowledgement means there's a computer with that address running such a service. When I do this, I get about 30 million responses, by the way. Since a single web server can host many websites, the actual number of websites is more like a billion.
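masscan itself crafts raw SYN packets asynchronously, which is what makes Internet-scale speeds possible. A far slower but conceptually similar probe can be sketched with an ordinary TCP connect() from the Python standard library (only point this at machines you own; here it checks the local machine):

```python
import socket

def connect_scan(host: str, ports, timeout: float = 0.25):
    """Try a full TCP connection to each port; return those that answered.

    This is the slow, polite version: each probe completes a handshake.
    Tools like masscan skip that and fire stateless SYN packets instead.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Scan only this machine for a few well-known ports.
print(connect_scan("127.0.0.1", [22, 80, 443]))
```

Every port in the returned list corresponds to a service listening at that address, which is exactly the signal a full-Internet scan collects 4 billion times over.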
Such a scan is possible because even though it takes 4 billion packets to do this, networks are really fast. A gigabit network connection, such as the type Google Fiber might provide you, can transmit packets at the rate of 1 million per second. That means, in order to scan the entire Internet, I'd only need 4 thousand seconds, or about an hour.
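That back-of-the-envelope arithmetic is easy to check:

```python
ADDRESSES = 2 ** 32  # every possible IPv4 address

def scan_time(packets_per_second: int) -> float:
    """Hours needed to send one probe to every IPv4 address."""
    return ADDRESSES / packets_per_second / 3600

print(f"{scan_time(1_000_000):.1f} hours at a full gigabit")  # ~1.2 hours
print(f"{scan_time(125_000):.1f} hours at a politer rate")    # ~9.5 hours
```

The same formula explains why adding ports multiplies the cost: probing all 65,536 ports on every address takes 65,536 times as long as a single-port sweep.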
People get mad when I scan this fast, especially those with large networks who see a flood of packets from me in an hour. Therefore I usually scan slower, at only 125,000 packets per second, which takes about 10 hours to complete a scan.
Two years ago a bug in encrypted web services was found, called "Heartbleed". How important a bug was it? Well, with masscan, I can easily send a packet to all 4 billion addresses, and test them to see if they are vulnerable. The last time I did this, I found about 300,000 servers still vulnerable to the bug.
Right at the moment, I'm doing a much more expansive scan. Instead of scanning for a single port, I'm scanning for all possible ports (all 65536 of them). That's a huge scan that would take 50 years at my current rate, or 5 years if I run at maximum speed on my Internet link. I don't plan on finishing the scan, but stopping it after a couple weeks, as sort of a random sample of services on the Internet.
One finding I have is a service called "SSH". It's a popular service that administrators (the computer professionals who maintain computers) use to connect to servers to control them. Normally, it uses port 22. Consider the output of my full scan below:
What you see is that I'm finding SSH on all sorts of ports. For every person who put SSH on the expected port of 22, roughly 15 have decided to change the port and put it somewhere else.
There are two reasons they might do so. The first is because of a belief in the fallacy of security through obscurity, that if they choose some random number other than 22, then hackers won't find it. That's likely the case where we see old versions of SSH in the above picture, such as version 1.5 instead of the newer 2.0. That this is a fallacy is demonstrated by the fact that I can so easily find these obscure port numbers.
The other reason, though, is simply to avoid the noise of the Internet. Hackers are constantly scanning the Internet for SSH on port 22, and once they find it, start "grinding" passwords, trying password after password until they find one that works. This fills up log files and annoys people, so they put their services on other ports.
Note in the above picture two entries where Internet addresses starting with 121.209.84.x have SSH running at port 5000. Looking on the Internet, it seems these addresses belong to Telstra. It seems they have some standard policy of putting SSH on port 5000. If you were a hacker wanting to break into Telstra, that sort of information would be useful to you. That's the reason for doing this scan. I'm not going to grab all address:port combinations, but enough that I can start finding patterns.
Another thing I've found relates to something called VNC. It allows one computer to connect to the screen of another computer, so that you can see their desktop. It normally runs at port 5900. When you masscan the entire Internet for that port, you'll find lots of cases where people have the VNC service installed on their computer and exposed to the Internet, but without a password. This article describes some of the fun things we find in these searches, from toilets, to power plants, to people's Windows desktops, to Korean advertising signs.
But this full scan finds VNC running at other ports, as shown in the following picture.
For everybody running VNC on the standard port, it appears about 5 to 10 people are running it on some other random port. A full scan of the Internet, on all ports, would find a much richer set of VNC servers.
I tweet my research stuff often, but it's often inscrutable, since you are supposed to know things like VNC, SSH, and random/standard port numbers, which even among techies isn't all that common. In this post, I tried to describe from scratch the implications of the sorts of things I'm finding.
Data Management Patterns & Architectures
Data has always been significant to organizations, even before the advent of Big Data and machine-generated data. However, with the recent exponential increase in the volume of data, its importance has grown. In that context, enterprises have come up with different data architecture patterns which help them consolidate their data and generate insights from it.
The following are some data management architectures that have been implemented in enterprises in the last two decades.
- Operational Data Store (ODS): An ODS is a central database that provides a snapshot of the latest data from multiple transactional systems for operational reporting. It enables organizations to combine data in its original format from various sources into a single destination to make it available for business reporting. An ODS is typically associated with OLTP (Online Transaction Processing) systems.
- Data Warehouse: A data warehouse is a type of data management system that is designed to enable and support business intelligence (BI) activities, especially analytics. Data warehouses are solely intended to perform queries and analyses and often contain large amounts of historical data. The data within a data warehouse is usually derived from a wide range of sources such as application log files and transaction applications including ODS.
- Data Mart: It is a subset of a data warehouse concentrated on a specific line of the business, department, or subject area. Data marts make specific data available to a defined group of users, which allows those users to quickly access critical insights without wasting time searching through an entire data warehouse.
- Data Catalog: As disparate sets of data come from multiple source systems, possibly spread across different geographies, it is important to store information about the data itself. A data catalog contains details of all data assets in an organization, designed to help data professionals quickly look up the most appropriate data for any analytical or business purpose. It uses metadata to create an informative and searchable inventory of all data assets in an organization. Metadata can be simply defined as data about data. It is a description and context of the data, which helps to organize, find, and understand data.
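To make the idea concrete, a data catalog is at heart a searchable inventory of metadata. The toy sketch below (the asset names and fields are invented for illustration, not any vendor's API) shows how metadata makes assets findable without touching the data itself:

```python
# A catalog entry describes an asset; it does not contain the asset's data.
catalog = [
    {"name": "sales_orders", "source": "ODS", "region": "EU",
     "owner": "finance", "description": "Raw order transactions"},
    {"name": "customer_dim", "source": "CRM", "region": "US",
     "owner": "marketing", "description": "Customer master data"},
]

def search(term: str):
    """Find assets whose metadata mentions the term, in any field."""
    term = term.lower()
    return [asset["name"] for asset in catalog
            if any(term in str(value).lower() for value in asset.values())]

print(search("customer"))  # ['customer_dim']
print(search("eu"))        # ['sales_orders']
```

Real catalogs add lineage, classifications, and access policies on top, but the core mechanism is the same: search over descriptions of data, not the data.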
Unstructured Data & Streaming Data
With unstructured and streaming data, the syntax and semantics of the incoming data are often not clearly defined during ingestion, but rather determined when the data is retrieved. This has made enterprises shift to new kinds of data architectures.
Data Lake: A data lake is a central storage repository that holds big data from many sources in a raw, granular format. It can store structured, semi-structured, or unstructured data, which means data can be kept in a more flexible format for future use.
A data lake works on a principle called schema-on-read – there is no predefined schema into which data needs to be fitted before storage. Only when the data is read during processing is it parsed and adapted into a schema as needed. This saves a lot of time that’s usually spent on defining a schema. This also enables data to be stored as-is, in any format.
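A minimal sketch of schema-on-read, using only the Python standard library: the raw records land as-is, inconsistencies and all, and a schema is imposed only at the moment they are read (the field names here are illustrative):

```python
import json

# Raw, as-landed records. Note the inconsistent types and fields.
raw_records = [
    '{"id": "1", "amount": "19.99", "country": "DE"}',
    '{"id": 2, "amount": 5, "extra_field": "ignored"}',
]

def read_with_schema(lines, schema):
    """Parse and coerce each raw record into the schema at read time."""
    for line in lines:
        record = json.loads(line)
        yield {col: cast(record[col]) if col in record else None
               for col, cast in schema.items()}

schema = {"id": int, "amount": float, "country": str}
print(list(read_with_schema(raw_records, schema)))
# [{'id': 1, 'amount': 19.99, 'country': 'DE'},
#  {'id': 2, 'amount': 5.0, 'country': None}]
```

Nothing stopped the second record from being stored; its missing field only surfaces, as a None, when this particular schema is applied. A different consumer could read the same raw data with a different schema.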
Is Data Lake the answer for all Data Management Issues?
The data lake has brought many advantages to the enterprise data landscape. However, it has its limitations too. The following are some of the advantages and disadvantages of the data lake.
Clearly, data lakes bring several new capabilities to enterprise data management architecture. However, they miss certain well-defined features of traditional data warehouses and operational data stores. This brings us to a newer architecture known as the Data Lakehouse.
Emerging Pattern of the Data Lakehouse
A data lakehouse is a data solution concept that combines elements of the data warehouse with those of the data lake. Data lakehouses implement data warehouses' data structures and management features for data lakes, which are typically more cost-effective for storage.
Data lakehouses are enabled by a new, open system design – implementing similar data structures and data management features to those in a data warehouse, directly on the kind of low-cost storage used for data lakes. Merging them together into a single system enables data teams to work faster without needing to access multiple systems. Data lakehouses ensure the availability of the most complete and up-to-date data for data science, machine learning, and business analytics projects.
Technologies Behind Data Lakehouse
While the Data Lakehouse retains most of the underlying technologies of existing data lake platforms, it has to bring in new technologies to support ACID transactions, schema enforcement and governance, and other traditional enterprise features.
One such technology framework is known as Delta Lake.
Delta Lake is an open-source storage layer that brings reliability to data lakes. It enables building a Lakehouse architecture on top of data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing on top of existing data lakes, such as S3, ADLS, GCS, and HDFS.
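Delta Lake's ACID behavior rests on an ordered transaction log of atomic commit files stored alongside the data. The sketch below imitates that idea with nothing but the Python standard library; it is a toy illustration of the mechanism, not Delta Lake's actual on-disk protocol:

```python
import json
import os
import tempfile

class TinyLog:
    """Toy commit log: a write becomes visible only when its numbered
    commit file appears, and os.replace makes that final step atomic."""

    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def _versions(self):
        return sorted(f for f in os.listdir(self.path) if f.endswith(".json"))

    def commit(self, action: dict):
        version = len(self._versions())
        fd, tmp = tempfile.mkstemp(dir=self.path, suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            json.dump(action, f)
        # Readers never see half a commit: the rename either happens or it doesn't.
        os.replace(tmp, os.path.join(self.path, f"{version:08d}.json"))

    def snapshot(self):
        """Replay the log to get a consistent view of the live data files."""
        files = set()
        for name in self._versions():
            with open(os.path.join(self.path, name)) as f:
                action = json.load(f)
            if action["op"] == "add":
                files.add(action["file"])
            elif action["op"] == "remove":
                files.discard(action["file"])
        return files

log = TinyLog(tempfile.mkdtemp())
log.commit({"op": "add", "file": "part-0001.parquet"})
log.commit({"op": "add", "file": "part-0002.parquet"})
log.commit({"op": "remove", "file": "part-0001.parquet"})
print(log.snapshot())  # {'part-0002.parquet'}
```

Because every reader replays the same ordered log, everyone sees the same committed state, which is what gives a lakehouse warehouse-style consistency on top of cheap object storage.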
In their latest trends on data management, analysts have come up with several new concepts, such as:
- Data Lakehouse
- Data Fabric
- Big Data to Small & Wide Data
- Distributed SQL
None of these concepts and architectures is entirely new; rather, they emerge from existing architectures by refining them, especially by combining the best of traditional data management with new-age data management.
The Data Lakehouse will further help enterprises bring more data into their analytics scope, along with the data governance and quality typical of the data warehousing era.
Here is a pictorial representation of the Data Lakehouse by Databricks:
On October 16th, 2017, US-CERT publicly disclosed a vulnerability at the core of the WPA2 encryption protocol. This vulnerability affects nearly every modern Wi-Fi configuration used for transmitting encrypted information, with Linux and Android devices especially exposed. The KRACK exploit was discovered by security researcher Mathy Vanhoef before it could be implemented for widespread misuse; however, now that this issue is public knowledge, it is extremely important for businesses to update their systems to protect against it.
How Serious is this Vulnerability?
In terms of how harmful this exploit can be, it is extremely serious:
- It can be used to steal any encrypted information that is transmitted from or received by your computer or mobile devices.
- It can be used to inject various forms of malware into local networks and websites.
- It affects all kinds of internet enabled devices; however, the most serious threats of injection are specific to Linux and Android.
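The reason a key-reinstallation flaw is so damaging is that forcing a device to reuse an already-used key and nonce makes a stream cipher emit the same keystream twice. The classic consequence can be demonstrated with a toy cipher in plain Python (this illustrates the general principle of keystream reuse, not WPA2's actual ciphers or the KRACK handshake itself):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream generator for illustration only, not a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"secret-key", b"nonce-0"
c1 = encrypt(key, nonce, b"attack at dawn!")
c2 = encrypt(key, nonce, b"meet me at noon")  # same key+nonce: "reinstalled"

# XORing the two ciphertexts cancels the keystream, leaving the XOR of
# the two plaintexts. Information leaks without the attacker knowing the key.
leak = bytes(a ^ b for a, b in zip(c1, c2))
expected = bytes(a ^ b for a, b in zip(b"attack at dawn!", b"meet me at noon"))
print(leak == expected)  # True
```

An eavesdropper who captures both ciphertexts learns the XOR of the messages for free, which is often enough to recover both, and that is why patching against key reinstallation matters.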
The good news here is that a hacker needs to be within range of someone's Wi-Fi network to implement it, so the likelihood of it being used against your home computer is fairly low. The most likely candidates for this attack are big businesses and smaller businesses that handle secure information.
Due to the potential damage that this exploit could cause, we strongly urge our clients to review their local networks to ensure that all of their connected devices are properly patched. | <urn:uuid:44b9268b-3f61-4166-862a-6c8e0ec72ed8> | CC-MAIN-2022-40 | https://comsolutionsusa.com/krack-wifi-vulnerability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00670.warc.gz | en | 0.951692 | 281 | 2.734375 | 3 |
If only more things in life came with training wheels; a child’s first smartphone could certainly use some.
Like taking off the training wheels and riding out into the neighborhood for the first time, a smartphone opens an entirely new world for children. There are apps, social media, group chats with friends, TikTok stars, and the joy of simply being “in” with their classmates and friends through the shared experience of the internet.
For parents, the similarities between first bike rides and first phones continue. You love the growing independence that this moment brings, yet you also wonder what your child will encounter out there when you’re not around. The good and the bad. How have you prepared them for this? Are they really ready?
When is my child ready for a smartphone?
That’s the question, isn’t it—when is my child ready for that first smartphone?
For years, your child has dabbled on the internet, whether that was playing on your phone while they were little, letting them spend time on a tablet, or using a computer for school. Along the way, there have been teaching moments, little lessons you’ve imparted about staying safe, how to treat others online, and so forth. In other words, you’ve introduced the internet to your child in steps. Giving them their own phone is yet another step, but a big one.
Yet those teaching moments and little lessons are things that they’ll lean on when they’re on their own phone—whether those were about “stranger dangers” online, proper online etiquette, and the difference between safe and unsafe websites. Understanding if your child has a firm foundation for navigating all the highs and lows of the internet is a strong indication of their readiness. After all, safely entering the always-online world of having a smartphone demands a level of intellectual and emotional maturity.
Is there a right age for a first smartphone?
Good question. We do know that smartphone usage by children is on the rise. For example, research from Common Sense Media indicates that 53% of 11-year-olds have a smartphone, a number that jumps to 69% at age 12. That's quite a bit of smartphone use by tweens, use that may be lightly monitored or not monitored at all. Note the percentage of ownership by age and the volume of screen time that follows in the infographic below:
Source: Common Sense Media
Whatever the reasons for the rise, particularly among very young owners, does that mean 26% of nine-year-olds should have unfettered, all-day access to the internet in the palm of their hands? That's a topic for you to decide for yourself and for the good of your family. However, if the notion of a third grader with a smartphone seems a little on the young side to you, there are alternatives to smartphones.
Smartphone alternatives for young children
If keeping in touch is the primary reason for considering a smartphone, you have internet-free options that you can consider:
- Flip phones: Often sturdy and low cost, these are great devices for keeping in touch without the added worry and care of internet access. Likewise, it’s a good way to help younger children learn to care for a device—because it may get dropped, kicked, wet, maybe even lost. You name it.
- Smart watches for kids: A quick internet search will turn up a range of wearables like these. Many include calling features, an SOS button, and location tracking. Do your research, though. Some models are more fully featured than others.
- First phones for kids: Designed to include just the basics, these limited-feature smartphones offer a great intermediary step toward full smartphone ownership. In the U.S., brands such as Pinwheel and Gabb may be worth a look if you find this route of interest.
In all, for a younger child, one of these options may be your best bet. They’ll help you and your child keep in touch, develop good habits, and simply learn the basic responsibilities and behaviors that come with using a device to communicate with others.
Preparing you and your family for the first smartphone
Now’s a perfect time to prepare yourself for the day when your child indeed gets that first proper smartphone. That entails a little research and a little conversation on your part. Topics such as cyberbullying, digital literacy, social media etiquette, and so on will be important to get an understanding on. And those are just the first few.
A good place to start is your circle of family and friends. There, you can find out how they handled smartphone ownership with their children. You’ll likely hear a range of strategies and approaches, along with a few stories too, all of which can prepare you and your child.
I also suggest carving out a few minutes a week to read up on our McAfee blog safety topics so that you can have all the knowledge and tools you need. We blog on topics related to parenting and children quite regularly, and you can get a quick view of them here:
- Stranger Danger
- Keeping Your Kids Safe from Predators Online
- Building Digital Literacy
- The Who, What, and How of Cyberbullying
- Screen Time and Sleep Deprivation in Kids
- Lessons Learned: A Decade of Digital Parenting
- Social Influencers and Your Kids
- Getting Kids to Care About Their Safety Online
Time for the first smartphone
Having a smartphone will change not only their life, but yours as well. Relationships will evolve as your child navigates their new online life with their middle school and high school peers. (Remember those days? They weren’t always easy. Now throw smartphones into the mix.)
With that, give you and your child one last checkpoint. The following family talking points for owning a smartphone offer a solid framework for conversation and a way to assess if your child, and you, are truly ready for what’s ahead.
Once smartphone day arrives, it’s time to put two things in place—mobile security and parental controls:
- Get mobile security for your child’s Android phone or mobile security for iPhones. This will provide your child with basic protection, like system scans, along with further protection that steers your child clear of suspicious websites and links.
- Use parental controls for your child’s phone. I also suggest being open and honest with them about using these parental controls. In effect, it’s a tool that extends your parental rules to the internet, so be clear about what those rules are. A good set of controls will let you monitor their activity on their phone, limit their screen time, plus block apps and filter websites.
Beyond those tools, there's plenty more you can do. As a mom myself, I rely heavily on the parental controls I put into place, but I also stay close to what my kids are doing online. It's a bit of a mix. I simply ask them what's going on and do a little monitoring too. That could be asking them what their favorite games and apps are right now or talking about what playlists they're listening to. This keeps communication open and normalizes talking about the phone, their internet usage, and what's happening on it. Communication like this can come in handy later on should they need your help with something that's occurred online. By talking now, the both of you will have an established place to start.
In all, take children’s smartphone ownership in steps and prepare them for the day those training wheels come off so the both of you can fully enjoy that newfound independence of life with a smartphone.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:b413341e-1acc-4d37-975d-629dc7644e96> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/family-safety/how-to-prepare-for-your-childs-first-smartphone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00670.warc.gz | en | 0.950874 | 1,626 | 2.65625 | 3 |
Team TechX won second place in the DARPA Cyber Grand Challenge (CGC) last week. TechX is a collaboration between GrammaTech and the University of Virginia, and we are all very proud. Over the next few weeks, we will be writing about the technologies we leveraged and analyzing some of the results.
This Whole Capture-the-Flag Concept
To kick off, it seems useful to outline what people actually mean when they say that the competition is a "Capture The Flag contest." I, for one, never played Capture The Flag (CTF) as a child. Probably this is because I grew up in New Zealand and spent most of my formative years fleeing orcs and zombie sheep instead. I'm sure many others are in similar positions, and many more did play in childhood but have since forgotten how it worked. So let's start with a brief overview of that game.
A classic CTF game out here in the physical world involves two teams. Each team has a 'territory' containing a 'base' in which they fly a flag; the winning team is the one that is first to obtain the other team's flag and bring it back to base. Team members that are tagged in the other team's territory are 'jailed' -- though they may be subsequently freed by a mechanism that depends on local customs (for example, by being tagged by one of their teammates, or by waiting until an agreed time limit has elapsed). The game thus necessarily involves a heavy strategy component, including decisions about how many team members should be allocated to each activity (obtaining other flag, protecting own flag, freeing teammates from jail where applicable) and how those allocations should evolve in response to the other team's behavior.
There are lots of variations of this game, evolving in response to factors like the local terrain and the interests of the competitors. If someone says "a CTF game with pumpkins instead of flags" or "a CTF game underwater," that's easy enough to picture. It's not necessarily so obvious what "a CTF game with computer network services" could mean.
Capture-the-Flag... at a Laundromat?
Let's think first about a physical-world service: say, a laundromat. Simplifying away issues like detergent, and scientists with freeze rays and frozen yogurt, a laundromat is a place where a person can turn up with some dirty clothes and a pocket full of quarters and leave an hour or so later with the same clothes in a clean state. A laundromat is not useful if your clothes don't get clean, or if the process takes years, or if they impose too many additional requirements. Nobody wants to turn up to the laundromat and stand in a TSA-style line for the fun and personal fulfilment to be found inside, or to find out once in the door that the machines only take $2 bills.
What would a CTF-type game look like, transferred to a laundromat? It turns out that there are quite a lot of things that might be considered analogous to capturing a flag. You could break into an opponent's machines and steal all their quarters. You could sneakily post a fake "closed" sign so all their customers go to your laundromat instead. If they live above the store, you might even be able to find a way through the connecting door to their home, whereupon you could break their chair and eat their porridge. Meanwhile, you would have to protect your own quarters/signs/porridge from reciprocal incursions.
Teams, Strategy, and a Scoring Rubric for Laundromats (but really also for CGC)
Choosing a team for this kind of contest is an art in itself. You might want a structural engineer to analyze the current state of the building and design improvements. A builder who can implement these plans. A bit of muscle to repel incursions. Someone with lots of experience in finding security holes: maybe from a criminal background, maybe just a puzzle fan, maybe both. Someone to hold everything together. The total? Could be one person, could be dozens.
For standardized scoring, a governing body might set up two identical laundromats and assign you one each. In fact, there's no particular requirement that there be two: there can be N laundromats and N teams. These standard laundromats might include some deliberate security weaknesses, so that a team's score can depend or partly depend on how many of those weaknesses they manage to find and leverage. Standardized laundromats have an interesting consequence: if you find a weakness in your own laundromat, you know that the other teams started out with that weakness too, and if they haven't found it and fixed it yet you can use it to attack them.
The governing body can - of course - set up whatever scoring rules they want. Technically, they can set up whatever scoring rules I want, and since what I want is to illustrate what went on in CGC, we'll look at the laundromat equivalents of those scoring metrics. There are three:
- Security: Finding and eliminating the vulnerabilities in your own laundromat.
- Evaluation: Finding and exploiting vulnerabilities in competing laundromats.
- Availability: Keeping your laundromat open and your customers satisfied.
A scoring rubric for laundromat Security might have questions like, "What percentage of attacks from competitors have been successful?" and "What percentage of the 'reference attacks' designed by the competition organizers would be successful?" The fewer successful attacks there have been against your laundromat, the higher your Security score.
Similarly, a scoring rubric for Evaluation might ask "How many other laundromats has this team successfully attacked?" For Evaluation, attacks against a larger number of opposition laundromats will give a higher score.
Meanwhile, a scoring rubric for laundromat Availability might have rather more questions. Could I get in the door? Was I in there for a reasonable amount of time? Has my net worth been adjusted by the correct number of quarters? Do I have clean clothes? Are they MY clean clothes? Are they ALL of my clothes??
Answering "no" to one or more of these questions would indicate that something is wrong with laundromat operations, either because of a successful attack by another team or because the laundromat's owners accidentally messed something up while upgrading their security.
"Could I get in the door?" is an especially interesting question because in order to prevent other teams from exploiting security breaches you have to fix those breaches, but frequently this entails closing the shop. Teams must carefully plan their security upgrades: too late and they may be successfully attacked (lowering their Security score); too early and they will lose Availability points even though other teams may never have even noticed the problem.
Following the CGC rules, the overall score is computed as a product: Availability × Security × Evaluation.
This scoring system eliminates certain strategies altogether. You can't win by just pulling down the security gate and eliminating access to your laundromat while you carry out attacks against everyone else, even if those attacks are wildly successful, because then Availability is always zero and so the overall score is also zero.
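A few lines of Python make the consequence of the multiplicative formula obvious (the metric values here are made-up fractions between 0 and 1, not CGC's actual units):

```python
def overall(availability: float, security: float, evaluation: float) -> float:
    """CGC-style total: any metric at zero zeroes the whole score."""
    return availability * security * evaluation

balanced = overall(availability=0.9, security=0.8, evaluation=0.7)
gate_down = overall(availability=0.0, security=1.0, evaluation=1.0)

print(f"{balanced:.3f}")   # 0.504
print(f"{gate_down:.3f}")  # 0.000: perfect attacks, but the shop was shut
```

A sum-based score would reward the pulled-down security gate; the product forces every team to stay open for business while attacking and defending.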
Making More Work for the Organizers
If this isn't a challenging enough competition, we could graduate to assigning a whole standardized town to each competitor. This means more targets to attack and defend, and nonidentical targets at that. A strategy designed for a laundromat will not apply perfectly to a carwash (although it's quite possibly better than nothing). A town isn't just a jumble of services sitting in the countryside, either. It has infrastructure. Setting up an entire standardized town will involve building roads as well as laundromats; water lines as well as coffee shops; street lights as well as gas stations. A more exciting contest for competitors means more work (a lot more work) for the organizers.
And finally, we can transform this into a computer-based CTF game. The town becomes a computer; the town services become network services, such as web servers and email servers; the team members become experts in software analysis, security, and so forth. They might also take on names that include numbers as well as letters, but this is not strictly compulsory.
There is a long tradition of such games in the hacker community, with a number of variations, especially with respect to exactly what constitutes a 'flag' and a 'capture.' The annual DEF CON convention has a particularly famous CTF contest.
The DARPA Cyber Grand Challenge was about taking the next step along this path: a computerized CTF whose competitors were not - or not directly - teams of humans, but pieces of software. Each of the Cyber Reasoning Systems played all of the roles that the members of a human team would play: analyzing software components, designing and implementing security fixes for its own system, designing and implementing attacks against other systems, and strategizing.
DARPA and their collaborators had the enormous task of setting up the competition environment (town) and challenge problems (services), building in multiple layers of redundancy and some super-snazzy tools for monitoring what was going on without interfering.
And they did it with style.
Stay tuned to read about the relation of formal methods to CGC. | <urn:uuid:f9bd8b52-a9f1-4ca7-a167-9de246d792be> | CC-MAIN-2022-40 | https://resources.grammatech.com/government/understanding-darpas-cyber-grand-challenge-laundromat-edition-3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00670.warc.gz | en | 0.965719 | 1,913 | 2.703125 | 3 |
Security breaches are a problem of our times. Lately we talk a lot about the security of the digital sector which, even though it has existed for decades, never concerned every one of us on such a scale before. It’s only since cameras and computers entered our houses and became widespread that they started to raise fears. What precisely does “a security breach” mean? According to specialists, it’s an incident that results in unauthorized access to computer data, applications, networks, or devices. Typically, it occurs when an intruder is able to bypass security mechanisms.
Security Breach: What You Need to Know
Security breaches are announced almost every day, affecting millions of individuals. The media report thousands, if not millions, of people victimized by data breaches: Target, eBay, Jimmy John’s, Neiman Marcus, and Home Depot were all hit by hackers. Between May 25, 2018, and January 27, 2019, there were 160,000 breach notifications across the EU-28 plus Norway, Liechtenstein, and Iceland, with €114 million in fines imposed.
Verizon’s annual Data Breach Investigations Report for 2020 reviewed 32,002 security incidents and confirmed 3,950 data breaches across 16 industries. Approximately 62 percent of them impact smaller businesses. These security breaches are more than just data loss. The biggest issue is the influence they have on the overall availability of services, the reliability of products, and the public trust in a brand.
Five top things to know about the cyber threat
– It’s getting personal. Email addresses, names, phone numbers, etc. Personal data was involved in 58% of breaches, twice last year’s percentage. Improved reporting may account for some of that rise, though.
– It’s all about the money. Corporate espionage accounts for 10% of breaches. 86% are financially motivated. Those headline-grabbing advanced persistent threats? 4%.
– It’s coming from inside the enterprise. Internal error-related breaches doubled to their highest level yet. Again, some of this may be due to improved reporting thanks to laws like the GDPR.
– It’s because of humans. More than 67% of breaches resulted from credential theft, social attacks such as phishing, or just plain human error. Increased reporting or no, the percentage that’s our fault stayed steady.
– It’s moving to the web. Attacks on web apps doubled to be part of 43% of breaches, which makes sense: as we move to web apps, the attackers follow. Less than 20% of the breaches were because of vulnerabilities. The majority involved credentials that were stolen or brute-forced.
The countries with the most GDPR data breaches
The EU’s General Data Protection Regulation, or GDPR, provides individuals with control over their personal data and unifies regulation across the EU. DLA Piper released data showing the level of GDPR data breaches across the EU and EEA between May 25, 2018, and January 27, 2019. According to that data, the Netherlands had the highest number of breaches during the period examined, followed by Germany and the United Kingdom.
The Netherlands also had the highest number per 100,000 inhabitants, with 147.20, followed by Ireland’s 132.52. Interestingly, despite having only 3.2 breaches per 100,000 people, France had the highest value of imposed fines at €51.1 million, followed by Germany, which imposed €18.1 million.
Commenting on the newest, 2020 report, Ross McKean, a partner at DLA Piper specializing in cyber and data protection, says: “GDPR has driven the issue of data breach well and truly into the open. The rate of breach notification has increased by over 12% compared to last year’s report, and regulators have been busy road-testing their new powers to sanction and fine organizations”.
What do we fear?
Cameras and devices that we use daily need security tools. This branch of protection is known as cybersecurity. A security breach is a successful attempt by an attacker to gain unauthorized access to an organization’s computer systems. A data breach is when someone illegally obtains information about your customers, which can cause them to suffer identity theft and fraudulent credit card charges.
Breaches may involve the theft of sensitive data, corruption or sabotage of data or IT systems, or actions intended to deface websites or cause damage to reputation. Today’s consumers use personal computers and mobile devices to conduct online transactions.
As these transactions became more and more common, criminals learned to attack large networks and steal data records, committing identity fraud on a massive scale. It is no surprise that with so much at stake (customers, employees, revenue, and a good name and reputation), the fear of being attacked raises concerns among companies and private users alike. We have good reason to stay aware of this threat, because it is real and common.
Types of a security breach
Common attack methods include the following:
– Hacking – Major corporations are prime targets for attackers attempting to cause data breaches because they offer such a large payload (millions of users’ personal and financial information, such as login credentials and credit card numbers). This data can all be resold on the black market.
– Lost or stolen credentials – The simplest way to view private data online is by using someone else’s login credentials to sign into a service. To that end, attackers employ a litany of strategies to get their hands on people’s logins and passwords. These include brute force attacks and man-in-the-middle attacks.
– User error (lost media containing data, computer left unlocked, etc.)
– Social engineering attacks – social engineering involves using psychological manipulation to trick people into handing over sensitive information. For example, an attacker may pose as an IRS agent and call victims on the phone in an attempt to convince them to share their bank account information.
– Malware – Cybercriminals often use malicious software to break into protected networks. Viruses, spyware, and other types of malware often arrive by email or from downloads from the internet. You might receive an email with an attached text, image, or audio file, and opening it could infect your computer. Otherwise, you might download an infected program.
– Credential fraud – After someone’s login credentials are exposed, an attacker may try re-using those same credentials on dozens of other platforms. If that user logs in with the same username and password on multiple services, the attacker may gain access to the victim’s email, social media, and online banking accounts.
– Physical attacks (e.g., skimming hardware on ATMs, physical point-of-sale attacks) – These attacks target credit and debit card information and most often involve the devices that scan and read these cards. For example, someone could set up a fake ATM or even install a scanner onto a legitimate ATM in hopes of gathering cards and PINs.
– Insider attacks (“misuse” of privilege, unapproved hardware/software) – These involve people who have access to protected information deliberately exposing that data, often for personal gain. Examples include a restaurant server copying customers’ credit card numbers as well as high-level government employees selling secrets to foreign states.
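To see why the brute-force attacks mentioned above make long, unique passwords such an effective defense, a back-of-the-envelope sketch helps. The guess rate below is an illustrative assumption, not a benchmark:

```python
# Rough brute-force cost model: keyspace = charset_size ** length.
# The guess rate is an illustrative assumption, not a measurement.

def keyspace(charset_size: int, length: int) -> int:
    """Total number of candidate passwords."""
    return charset_size ** length

def years_to_exhaust(charset_size: int, length: int,
                     guesses_per_second: float = 1e10) -> float:
    """Worst-case time to try every candidate at a fixed guess rate."""
    seconds = keyspace(charset_size, length) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# 8 lowercase letters vs. 12 characters drawn from the full printable set
print(f"26^8:  {years_to_exhaust(26, 8):.8f} years")   # roughly 20 seconds
print(f"94^12: {years_to_exhaust(94, 12):,.0f} years")  # over a million years
```

The exact numbers do not matter; the point is that keyspace grows exponentially with length, which is why password length and uniqueness dominate every other precaution.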
What do you do if you are under attack?
Data breaches come in so many forms that, not surprisingly, there is no single solution to stop them; what’s needed is a holistic approach. Let’s begin with a common-sense approach to data security. This involves practices such as using unique passwords for each online service, not using credit cards with suspicious vendors, keeping software up to date with security patches, and using security software (antivirus and malware blockers) that will help mitigate data breaches.
Employers should ensure that their employees only have the minimum amount of access and permissions necessary to do their jobs. Companies should also prepare a response plan to be executed in the case of a data breach, to be able to successfully minimize or contain the leak of information.
Be aware of bad habits
If your business practices any of these habits, you could be at risk of a data breach:
– Using old, familiar technology. This is an easy way in for attackers – it was Target’s issue.
– Using the same POS system at all of your locations. Once someone figures out the system, they have access to all your stores; that one was Jimmy John’s issue.
– Not updating your information encryption. Even if info is stolen, it’s useless if it’s encrypted; this was Home Depot’s issue.
– Insecure employee login credentials. Be sure to protect yourself by using strong, different passwords on all devices and software, changing them often, and sharing them with no one; this was eBay’s issue.
– Not monitoring computer systems. Monitoring not only helps keep breaches from occurring, but if one does occur, you’ll catch it quickly before greater damage is done; this was Neiman Marcus’s case.
How can you plan and protect?
Data protection is about planning and protecting yourself from a disastrous data breach, guarding against the attack methods described above. It may include consulting professionals who specialize in preventing identity theft and offer data breach services. You can do more to shield your employees and business partners by understanding the risks of identity fraud, determining how serious a threat can be to your company, and preparing a data breach protection plan before a compromise happens to you.
A data breach response plan is a strategy put in place to combat breaches after they occur to diminish their impact. A well thought out plan ensures every person in a company knows their role during a breach to discover, respond, and contain it promptly. These plans provide peace of mind during a crisis since the steps are already tested and laid out, as opposed to formulating a plan amid a breach. Three factors that crucially impact data breach response time are preparation, technology, and privacy laws. | <urn:uuid:fba82de2-c731-4735-bf28-9446d2e1abe2> | CC-MAIN-2022-40 | https://hummingbirds.ai/what-is-a-security-breach-and-what-you-should-do-about-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00670.warc.gz | en | 0.950939 | 2,109 | 2.96875 | 3 |
Command Prompt vs. PowerShell
Microsoft introduced PowerShell, a robust set of commands for operating system instructions, in 2006, and began shipping it with Windows by default from Windows 7 onward. PowerShell scripts are more like program files than batch files. On the other hand, Command Prompt, also known as CMD, is a default Windows application that interacts directly with objects in the Windows OS. Users can use it to run simple utilities and execute batch files.
In comparison, PowerShell is a more advanced edition of Command Prompt, and it’s not just an interface but a scripting language that helps users carry out administrative tasks more effortlessly. Most of the commands that CMD executes can also run on PowerShell. What, then, is the difference between PowerShell and Command Prompt?
What Is PowerShell?
The scripting framework of PowerShell allows for task automation. It incorporates a scripting language, command-line shell, and a .NET framework, all of which work together to deliver a tool that enables administrators to automate most of their regular daily tasks. Besides, the platform provides developers with a comprehensive library of functions.
To make .NET processes more accessible, PowerShell uses a combination of cmdlets, which serve as utilities in PowerShell scripts. While the platform provides a large set of built-in cmdlets, developers can also create their own.
PowerShell is easy to integrate with the Component Object Model (COM) to develop more complex scripts. These new creations can call on many other packages on the Windows platform to issue commands, exchange data, and receive back statuses. The service comes in handy for developers of applications that have PowerShell as their functional framework.
The Windows Management Instrumentation interface integrated into PowerShell provides an exciting feature to administrators. WMI is made available as a cmdlet allowing users to probe the status of a device or service running on Windows. They can then incorporate the findings into a PowerShell script. This comes in handy when checking the status for conditional processing and branching. This way, users can generate reports on the success or failure of every execution step.
Windows Command Prompt
CMD, or Command Prompt, descends from COMMAND.COM, the shell of the Microsoft DOS operating system, and it remained the default until Windows 10 build 14971, when Microsoft made PowerShell the default option. CMD remains one of the last remnants of the MS-DOS era still shipping in Windows.
PowerShell vs. CMD
The two platforms are entirely different, despite one being the successor of the other. However, there is a general perception that the ‘dir’ command works in the same way in both interfaces.
PowerShell relies on cmdlets to function, as they expose the underlying administration options inside of Windows. Before developing these programming objects, system admins would navigate the GUI to look for the options manually. The interface provided no easy way to reuse the workflow to change options on a large scale. The only option was to click through a series of menus to perform the desired actions.
PowerShell uses pipes, just like other shells, to share inputs and output data and chain cmdlets. The functions are not very different from what happens with bash in Linux. Pipes serve to help users create complex scripts that transfer data and parameters from one cmdlet to another. Besides, users can also create reusable automated scripts that make mass changes with variable data, for example, a list of servers.
One particular function of PowerShell is its ability to create aliases for various cmdlets. These aliases let a user set up their own names for different scripts and cmdlets, which makes it more straightforward to switch between different shells. In Linux bash, the ‘ls’ command lists directory contents, as the ‘dir’ command does in CMD. Both of these names function as aliases for the cmdlet ‘Get-ChildItem’ in PowerShell.
PowerShell vs. Command Prompt Examples
For a clearer picture of how the two interfaces work differently, here are some basic operations you can do with both of them and their correct syntax.
1. To change the location of a directory
- PowerShell cmdlet: Set-Location “D:\testfolder”
- CMD command: cd /d "D:\testfolder"
2. Renaming a file
- PowerShell cmdlet: Rename-Item “c:\file.txt” -NewName “new.txt”
- CMD Command: rename c:\file.txt new.txt
3. To list files in a directory
- PowerShell cmdlet: Get-ChildItem
- CMD command: dir
4. Stop a process
- PowerShell cmdlet: Stop-Process -Name "ApplicationName"
- CMD command: taskkill /IM "processname.exe"
5. Accessing the help command
- PowerShell cmdlet: Get-Help "cmdlet name"
- CMD command: help [commandname] (or [commandname] /?)
Differences between PowerShell and Command Prompt
The first notable difference between PowerShell and Command Prompt is the year of release. The CMD lineage came earlier, having been introduced with MS-DOS in 1981, while PowerShell came into the picture in 2006. Other differences are:
- You can open either interface from the Run dialog by typing powershell or cmd, respectively
- PowerShell operates with PowerShell cmdlets and batch commands, while CMD only works with batch commands
- PowerShell allows you to create aliases for scripts and cmdlets for easier navigation, a function that’s not possible with CMD
- You can pass the output of one cmdlet to another in PowerShell but can’t do so in CMD
- The output in PowerShell is in object form, while in CMD, it’s in the form of text.
- PowerShell can execute a sequence of cmdlets combined in a script, while a CMD command must first process to the end before another one runs.
- PowerShell easily integrates with Microsoft cloud products, while CMD doesn’t offer that compatibility.
- PowerShell supports Linux systems and can run all types of programs. On the contrary, CMD only runs console-type programs and doesn’t support Linux systems.
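The pipeline and alias behavior described in the list above can be sketched in a few lines of PowerShell. The process filter and the alias name here are illustrative choices, not required conventions:

```powershell
# Chain cmdlets with pipes: objects, not text, flow between stages.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Name, Id, WorkingSet64 -First 5

# Create an alias so a familiar name maps to a cmdlet.
Set-Alias -Name ll -Value Get-ChildItem
ll   # now behaves like Get-ChildItem (the PowerShell equivalent of dir/ls)
```

Because each stage receives full .NET objects, later stages can sort and select on properties like WorkingSet64 without any text parsing, which is the practical payoff of the object-based output noted above.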
When to Use PowerShell
The PowerShell interface is the way to go for IT functions and system administrators. All the commands you could previously run on CMD are now available on PowerShell and have better functionality. Besides, PowerShell comes with all the cmdlets you will ever need for administrative functions.
Having PowerShell knowledge is a differentiator that could change the way you conduct administrative and system functions. If you would like to get started with the PowerShell interface but don’t know where to begin, 4BIS is here to help. We work with internal IT teams to help them with specialized IT projects and helpdesk services. Call us today and let us become the resource your internal IT department needs.
4BIS.COM, Inc is a complete IT Support and Managed IT Services Provider, Computer Reseller, Network Integrator & IT Consultant located in Cincinnati, Ohio focusing on customer satisfaction and corporate productivity. Our mission is to develop long-term partnerships with our customers and ensure they stay up-to-date with the evolution of business processes and information technology. | <urn:uuid:9c6ea72a-ff7d-4874-abd1-6e5c2b9b609e> | CC-MAIN-2022-40 | https://www.4bis.com/command-prompt-vs-powershell/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00670.warc.gz | en | 0.897871 | 1,495 | 2.90625 | 3 |
Understanding static multicast routes
In this short article we'll take a look at Cisco IOS static multicast routes (mroutes) and the way they are used for RPF information selection. Multicast routing using PIM is not based on the propagation of any type of multicast routes - the process that was used, say, in DVMRP. Instead, the router performs RPF checks based on the contents of the unicast routing table, populated by regular routing protocols.

RPF checks can be classified as either data-plane or control-plane. A data-plane RPF check applies when the router receives a multicast packet, to validate that the interface and upstream neighbor sending the packet match the RPF information. For data-plane multicast, the packet must be received from an active PIM neighbor on the interface that is on the shortest path to the packet's source IP address, or the RPF check fails. A control-plane RPF check is performed when originating or receiving control-plane messages, such as sending a PIM Join or receiving an MSDP SA message. For example, PIM needs to know where to send the Join message for a particular (S,G) or (*,G) tree, and this is done based on an RPF lookup for the source IP or RP address. Effectively, for PIM the RPF check influences the actual multicast path selection in the "reversed" way: it carves the route that the PIM Join message will take and thus affects the tree construction.

In both the control-plane and data-plane cases, the RPF check process is similar, and is based on looking through all available RPF sources.
The following is the list of possible RPF sources:
- Unicast routes, static/dynamic (e.g. via OSPF). This the normal source of RPF information, and the only one you need in properly configured multicast network, where a single routing protocol is used and multicast is enabled on all links.
- Static mroutes, which are "hints" for RPF check. Those could be used in situations where you need to engineer multicast traffic flow over the links that don't run IGP, such as tunnels, or fix RPF failure in situations where multicast routing is not enabled on all links or you have route redistribution configured.
- Multicast extension routes, such as those learned via M-BGP. While those belong mainly to the SP domain, M-BGP could be used within the scope of CCIE RS exam to creatively influence path selection and perform RPF fixups without resorting to static m-routes.
You may find out which source is used for your particular address by using the command show ip rpf [Address]. The process of finding the RPF information is different from a simple unicast routing table lookup. It is not based solely on the longest-match rule across all RPF sources; rather, the best match is selected within every group and then the winner is elected based on administrative distance. The router selects the best matching prefix from both the unicast table (based on longest match) and the static multicast routing table and compares their ADs to select the best one. For the mroute table, the order you create static mroutes in is important - the first matching route is selected, not the longest-matching one.
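The lookup just described can be modeled with a short sketch. This is a simplified illustration, not IOS behavior verbatim: the route representation and field names are invented, the classic ordered-match behavior for mroutes is assumed, and the directly-connected and default-route tie-break exceptions are omitted:

```python
import ipaddress

def best_unicast(routes, src):
    """Longest-prefix match across the unicast routing table."""
    matches = [r for r in routes if src in ipaddress.ip_network(r["prefix"])]
    return max(matches,
               key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen,
               default=None)

def best_mroute(mroutes, src):
    """Static mroutes: first match in configuration order wins."""
    for r in mroutes:
        if src in ipaddress.ip_network(r["prefix"]):
            return r
    return None

def rpf_source(unicast, mroutes, src_str):
    """Pick a winner per group, then compare ADs; ties go to the mroute."""
    src = ipaddress.ip_address(src_str)
    u, m = best_unicast(unicast, src), best_mroute(mroutes, src)
    if m is None or (u is not None and u["ad"] < m["ad"]):
        return u
    return m

# The scenario from the text: a /24 mroute with AD 255 always loses to
# the unicast route, while all other sources hit the default mroute.
unicast = [{"prefix": "192.168.1.0/24", "ad": 110, "via": "OSPF"}]
mroutes = [
    {"prefix": "192.168.1.0/24", "ad": 255, "via": "Null0"},
    {"prefix": "0.0.0.0/0",      "ad": 0,   "via": "Tunnel0"},
]
print(rpf_source(unicast, mroutes, "192.168.1.100")["via"])  # OSPF
print(rpf_source(unicast, mroutes, "10.0.0.1")["via"])       # Tunnel0
```

The two print statements reproduce the configuration example discussed below: sources in 192.168.1.0/24 are RPF-checked against the unicast table, everything else against the default mroute.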
By default, when you configure a static mroute, its admin distance is zero. For example, if you have a static default mroute ip mroute 0.0.0.0 0.0.0.0 it will always be used over any longer-matching unicast prefix, since it matches everything and has the AD of zero. As another example, assume that you want prefix 192.168.1.0/24 to be RPF checked again unicast table while the rest against addresses matched against the default mroute. You may configure something like this:
ip mroute 192.168.1.0 255.255.255.0 null 0 255
ip mroute 0.0.0.0 0.0.0.0 Tunnel 0
Like we mentioned before, the order of mroute statements is important here, and for sources in the range 192.168.1.0/24 the first matching static mroute has the AD of 255 and thus would be always less preferred as compared to unicast table routes (but not ignored or black-holed!). However, for all other sources, the default mroute will be selected over any unicast information. Notice that if you put the static default mroute ahead of the specific mroute the trick will not work - the default mroute will always match everything and prevent further search through mroute table. What if mroute and unicast route both have the same admin distance? In this case, the static mroute wins, unless it is compared against directly attached route or default route. In the latter case, unicast direct or unicast default route would ace the mroute for RPF check.
It seems that in all recent IOS version the linearly ordered match has been replaced with longest-match lookup across the mroute table. CCO documentation and examples still state that ordered match is in use, but actual testing shows it is, in fact, longest match. Thanks to David Serra for pointing this out.
Finally, what about M-BGP, which is another common source for RPF information? M-BGP routes are treated the same way as static mroutes, but having distance of BGP process - 200 or 20 for iBGP and eBGP respectively. They don't show up in the unicast routing table, but they are used as RPF information source. However, when looking up for the best matching M-BGP prefix, a longest match is performed and selected for comparison, unlike linear ordering used for mroutes. Think of the following scenario: your router receives a unicast default route via OSPF and prefix 192.168.1.0/24 via M-iBGP session. A packet with the source address 192.168.1.100 arrives - what would be used for RPF check? Somewhat counter-intuitively, it would be the OSPF default route, because of OSPF's admin distance 110 and BGP's distance 200 for iBGP. You can solve this problem by lowering BGP's distance or increasing OSPF's distance or resorting to use a static mroute for the source prefix. Keep in mind, though, that in case of equal AD - e.g. when the same prefix is received via unicast and multicast BGP address families - the multicast would take precedence, per the general comparison rule.
In the end, let's briefly talk about what happens if a router has multiple equal-cost paths to the source. Firstly, only those routes that point to active PIM neighbors will be used. Secondly, the router will use the entry with the highest PIM neighbor IP address. This effectively eliminates uncertainty in the RPF decision. It is possible to use equal-cost multicast splitting, but this is a separate IOS feature:
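The feature is enabled with a single global configuration command; a minimal sketch (classic IOS - verify the exact syntax and optional hash keywords for your release):

```
ip multicast multipath
```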
This feature allows splitting (not load-balancing) multicast trees among different RPF paths and accepting packets from multiple RPF sources. However, for the classic multicast, there is only one RPF interface. | <urn:uuid:e87e2d61-8990-429c-9487-b35fac43d44d> | CC-MAIN-2022-40 | https://ine.com/blog/2011-07-31-understanding-static-multicast-routes | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00670.warc.gz | en | 0.907375 | 1,529 | 2.65625 | 3 |
What is a Zettabyte?
A zettabyte is a unit of digital information that is 1,000,000,000,000,000,000,000 bytes or a trillion gigabytes. Each byte is made up of eight bits, each bit being a 1 or a 0.
When to use Zettabytes?
Zettabytes are used to quantify data at very large scales, such as global data creation or total internet traffic.
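A quick sanity check of the scale, assuming decimal SI units (not the binary zebibyte):

```python
# Decimal (SI) units: each step up is a factor of 1,000.
KB, MB, GB, TB, PB, EB, ZB = (10 ** (3 * n) for n in range(1, 8))

assert ZB == 10 ** 21       # a zettabyte in bytes
assert ZB == 10 ** 12 * GB  # "a trillion gigabytes", as defined above
print(f"1 ZB = {ZB:,} bytes")
```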
Latest Zettabyte Insights
In today’s world, our life experiences are being curated at ever-increasing speed. Data-centricity has changed our lives in various aspects – sometimes even in unexpected ways.
Interested in data analytics topics such as how to execute Tensorflow inside a data analytics platform, or would just like to get some compelling tech book recommendations? Look no further.
Explore the technical underpinnings of the fastest analytics database. Dive into the specifics, such as Exasol’s general architecture, query processing, query compilation, query execution, and data access.
Interested in learning more?
Whether you’re looking for more information about our fast, in-memory database or want to discover our latest insights, case studies, video content, and blogs, we can help guide you into the future of data.
New Training: Design and Configure OSPF on Junos
In this 7-video skill, CBT Nuggets trainer Knox Hutchinson explores the depths of OSPF and how to deploy the IGP on Juniper devices. Watch this new training.
Learn with one of these courses:
This training includes:
55 minutes of training
You’ll learn these topics in this skill:
Introducing OSPF on Junos
The OSPF State Machine
Designated Routers and Backup Designated Routers
Link State Advertisements and LSA Flooding
OSPF Area Types
Summarizing OSPF on Junos
What is Open Shortest Path First (OSPF)?
Open Shortest Path First (OSPF) is an Interior Gateway Protocol (IGP) used within a single Autonomous System (AS), such as an enterprise network. A link-state routing protocol, OSPF uses Dijkstra's Shortest Path First (SPF) algorithm to find the shortest route through an IP network based on the cost of each route. It is widely used in enterprise networks because of its ability to reliably find routes in complex networks and is supported by every major routing vendor as an open standard.
In a network using OSPF, each router exchanges information about the routes that they know (and the costs of these routes) with the adjacent routers, which are known as neighbors. Eventually, all routers in a defined area have information about all routes and costs and know the shortest path between them.
OSPF can further detect any change in network topology and find a new shortest path in a matter of seconds. | <urn:uuid:0a40ba57-3ec4-4cba-867e-ee619d987dc3> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/new-skills/new-training-design-and-configure-ospf-on-junos | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00070.warc.gz | en | 0.915032 | 349 | 2.765625 | 3 |
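The SPF computation at the heart of OSPF can be sketched with a small Dijkstra implementation. The topology and link costs below are invented for illustration:

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest path first: lowest-cost distance from the
    root router to every reachable router."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist[node]:
            continue  # stale queue entry
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Tiny made-up topology: each value is the OSPF cost of a link.
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
# From R1, the best path to R2 costs 7 (via R3 and R4), not the
# direct link's cost of 10 - exactly the decision SPF makes.
print(spf(topology, "R1"))
```

Each OSPF router runs this computation over the shared link-state database, which is why every router in an area independently arrives at the same loop-free shortest paths.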
What is Data Visualization?
Data visualization is the process of translating information into a graphical representation. Using charts, graphs, and infographics, complex relationships can be conveyed to a user in an easily digestible format. Data visualization is also an art and science, and it lies at the intersection of communication, information science, and design. Effective visualizations help a user to understand implications and make good decisions.
Simple forms of data visualization have been around for a long time, but it has most recently been identified as a key 21st-century research skill. These forms include graphs, pie charts, maps, and even tables. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
The advantage of utilizing this solution is that it provides insights into complicated data sets by communicating their key aspects in more intuitive and meaningful ways. In today’s world there are numerous benefits to data visualization.
Let’s see some of them below:
• Enables people to make fast decisions, as graphical information is faster for the human brain to process
• Communicates key findings in constructive and more meaningful ways
• Helps us understand the nuances between operations and results
• Makes it easy to identify trends and predict outcomes
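Even a dependency-free sketch shows the core idea of translating numbers into a picture the eye can scan at a glance. The quarterly figures here are invented:

```python
def bar_chart(data, width=40):
    """Render label/value pairs as horizontal text bars."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<10} {bar} {value}")
    return "\n".join(lines)

sales = {"Q1": 120, "Q2": 210, "Q3": 160, "Q4": 280}
print(bar_chart(sales))
```

The relative bar lengths make the Q4 peak and Q1 trough obvious without reading a single number, which is precisely the faster-processing benefit listed above; real tools like Power BI apply the same principle with far richer visuals.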
How Do Businesses Use This to Their Advantage?
Nowadays companies generate and consume a lot of data, and making sense of that data promptly to support key business decisions is of huge importance. Businesses that do so are better equipped to make strategic decisions and mitigate risk. A Wharton School of Business study found that the use of data visualizations could shorten business meetings by 24%.
As per Bain & Company, companies with the most advanced analytics capabilities are:
• 2x more likely to be in the top quartile of financial performance within their industries
• 2x more likely to use data very frequently when making decisions
• 3x more likely to execute decisions as intended
• 5x more likely to make decisions much faster than market peers
Microsoft Power BI is one of the most popular and easy to use data visualization platforms. It connects dissimilar data sets, cleans them up and transforms them into a cohesive data model. It aims to provide interactive graphics and business intelligence capabilities with an interface simple enough for end-users to create their reports and dashboards.
iLink’s Data Analytics practice consists of experts in Data Strategy, Data Lakes, Data Warehouse, Business Intelligence, Machine Learning, and Data Science. Our deep functional expertise enables us to understand the underlying business challenges, customer needs and industry dynamics to connect the dots and deliver real-world insights. To learn more about how we can empower you with the right tools and knowledge required to gain a competitive edge, talk to an expert now | <urn:uuid:3bb230bb-08b4-4688-945a-c1861f6f81d3> | CC-MAIN-2022-40 | https://www.ilink-digital.com/insights/blog/what-is-data-visualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00070.warc.gz | en | 0.929225 | 580 | 2.953125 | 3 |
How to Reduce Risk from the Log4j Vulnerability in Your Environment
December 15, 2021
Editor’s Note: Log4j is an unfolding situation that we are closely monitoring. We will be updating our blog on a regular basis so check back for updates.
The vulnerability in the Log4j logging library has been keeping IT teams up at night since late last week. We know that when a zero-day event like this occurs, it creates understandable panic, and we’re here to help. Naturally, when an issue like this comes up, your sole focus should be on responding to it. You need to reduce the risk to your organization as quickly as possible.
To that end, here’s a cheat sheet to help you plan that work and to help make sure that nothing has fallen through the cracks.
Was ist Log44?
Log4j is an open source library for applications written in Java. It helps developers record what’s happening in their applications. These recorded logs help make sure that things are working smoothly and they provide critical information when things aren’t going as planned.
Where is Log4j used?
Log4j has been around for years and is currently in its second major version. This library is so popular in the Java community that it is basically the standard for logging.
This means that it’s used in most applications that are written in Java. The problem is that most users are unaware and don’t need to be aware of what language their applications were written in.That’s why you’re seeing reports from various services and vendors about updates being required. Log4j is part of the plumbing that runs most Java applications.
On Thursday, 09-Dec-2021, the team of volunteers behind Log4j announced a serious vulnerability in the library. There are two CVE (common vulnerabilities and exposures) identifiers assigned to this issue, CVE-2021-44228 and CVE-2021-45046.
This vulnerability had the two things that we never want to see in a security issue, let alone see in the same security issues because it’s easy to take advantage of and gives attackers the ability to run their code on your systems.
Officially, the severity of this issue is rated 10 out of 10. “That’s not good” is a massive understatement. In simple terms, this means it’s worth stopping normal operational and security work and addressing this issue immediately.
What do I need to do?
Responding to incidents like this is always a challenge. Because log4j is part of the plumbing of some applications, getting a handle on the scope of your exposure is very difficult.
Here’s the general process you’ll should be following;
- Monitor production for exploit attempts
- Look for all installations of log4j in your environment
- Prioritize that list of affected systems
- Working through that list, either turn off the affected feature or upgrade to the latest log4j version
While that list seems straightforward, it presents a number of challenges.
A number of researchers and vendors in the community have reported widespread attacks related to this issue. Making matters worse, attackers are using a number of different techniques. This means you’re not looking for one but several different types of attacks on your network. This post from Lacework Labs highlights some of the techniques we’ve already seen in the wild.
Cybercriminals know they have a window of opportunity here as teams rush to mitigate the issue. That means that you and your teams have to balance fixing the problem while monitoring production for active attacks.
This is a really hard balance to strike, but it’s a critical one. You don’t want to close one gap just as attackers gain a foothold through another. In production, you should be focusing your efforts on looking for anomalies in your activity data. That will help spot the new techniques as they are deployed by cybercriminals. In the end, security controls will only buy you so much time to fix the root cause, the version of log4j in use. Finding the vulnerable systems is the biggest hurdle. You can’t fix what you don’t know about.
Vendors with affected products will be notifying customers. That will happen either through direct contact or their blogs so make sure to check in with your vendors to understand how to mitigate the issue for those products. The community has also published several scanners to help determine if specific systems are affected that might help you out. If you’re thinking, “This is a lot of manual effort.” You are correct. Sometimes there’s no way around it. Each set of systems is going to need to be checked for vulnerable applications and a specific plan made to address each occurrence detected.
Once you know the scope of the work, it’s time to walk through the dependencies in your network. Some systems will be simple fixes. You can turn off the impacted feature of log4j or quickly patch while others will require coordination between teams and testing of the remediation plan. Hopefully you’ll get to the point where you have a grasp of what needs to be done. The next step is to order that work to reduce the risk to your organization as quickly as possible. High value systems should be done first, working your way through the long tail of the list. It would be nice if there was a faster way, but there isn’t.
The key thing is to remember the ease with which attackers are taking advantage of this vulnerability. We’ve seen multiple reports of attackers taking an opportunistic approach with this vulnerability. They are scanning systems at will and attacking any they find vulnerable.
Remember that risk is the combination of the likelihood of an event occurring and the possible impact from that event. We do know that attackers are actively hunting for victims which means there’s a strong likelihood of your systems being attacked. If that attack is successful, the attacker gains access to your environment and can run their code. That could have a significant impact on your business. This is one of those times when downtime and other disruptions are usually an acceptable trade off in order to reduce the risk to the business.
I’m a Lacework customer, what do I need to know?
The Lacework Customer Success team has documentation available to help you understand how to use the Lacework platform to help uncover any vulnerable systems and aid with any investigations related to log4j. With the Lacework Polygraph technology, you can identify anomalous behavior that may be indicative of an attacker trying to exploit vulnerable systems. This is important as you actively patch your environment and need to pay extra attention to any vulnerable services running in production. Additionally, you can leverage the Polygraph visualizations to look at New Connection Events from Java applications. As with any major incident, this documentation will be updated as new information becomes available.
Lacework Labs will be monitoring for post-exploit activity, including historical data. We will provide specific recommendations to customers if a compromise is detected.
The situation is likely to change quickly over the next few days. Please continue to monitor the updates from your vendors and work through the list of impacted systems.
It’s exhausting work but the critical nature of this vulnerability means it can’t wait.
Stay tuned to the Lacework blog and social media handles (we’re @Lacework on Twitter) for more as this situation develops.
Copyright 2021 Lacework Inc. All rights reserved. | <urn:uuid:8df03f68-0a5a-41c4-85d0-c0f3e41a6798> | CC-MAIN-2022-40 | https://www.lacework.com/de/blog/how-to-get-a-handle-on-the-log4j-issue-in-your-environment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00070.warc.gz | en | 0.950817 | 1,590 | 2.609375 | 3 |
Apr 30 2019
The intelligent edge has gained much buzz recently, including interesting comments from Microsoft CEO Satya Nadella during a keynote at Mobile World Congress. But as IoT devices and applications become more advanced, producing more data and demanding greater speed and power, the need arises for a more efficient edge computing approach.
Consider a common IoT product, such as a connected refrigerator. It can link with your phone or laptop to let you see temperature, adjust ice settings and even stream music or video. That’s pretty awesome, but a truly intelligent refrigerator can do far more, telling you how much milk is left, estimate calories or gauge spoilage. Early edge computing platforms leveraged common server and storage technology as sufficient for supporting the traditional type of IoT devices and apps. However, intelligent edge platforms are needed to support more advanced functions, which involve far more data. To become intelligent, edge platforms require major innovations to introduce new efficiencies and help manage how to analyze and manage the date in the most effective way.
The example above and other next-generation IoT use cases involve moving large quantities of data, which is always challenging. Even with the advent of 5G, moving big data sets to and from edge networks creates major bottlenecks, since these datasets are approaching petabyte scale. This challenges existing edge platforms as they utilize a typical Von Neumann computing architecture. Due to the size and power constraints of edge platforms, adding core CPU processing to an edge platform hasn’t been feasible. Simply increasing the number of systems in a deployment is impossible because of the same size and power limits.
For edge platforms to bridge this gap and become intelligent they need innovative architectures. Vendors are attempting to address the challenge of data movement by delivering disaggregated solutions, such as NVMe-oF fabrics, composable architectures and GPU and FPGA accelerators. While these can speed up the process to some degree, they don’t move the needle enough to make next-generation IoT use cases work smoothly. All these solutions involve space and power needs that may not exist, and they are not innovating the way to move and manage the stored data itself.
But what if you didn’t need to move all that data?
“Computational storage” is a new approach that minimizes data movement and creates intelligent edge computing with intelligent storage. In-situ processing is the key to computational storage. It creates data processing capabilities within storage devices, such as NVMe SSDs, eliminating the need for total data movement. In-situ processing solves the problem outlined above by bringing compute capabilities to the where the data resides. This allows you to pre-process data rather than move all the data for host CPU processing, which will be faster and more efficient. Overall, computational storage has the capability to reduce the time to process a petabyte of data for high capacity-driven, read-intensive analytics applications.
And it’s not hard to deploy. Computational storage does not require a true ground-up approach and instead can be implemented by modifying existing edge platforms, making adoption easier and more scalable. In essence, the concept is to take a host-driven and memory-limited application and execute that workload in each device installed on the storage bus. In one case, where 4 cores are present, if you have a system with 10 drives, you have effectively added 40 cores of parallel processing to the system with no net physical changes or adds, save using computational storage SSDs instead of the traditional ones. The ability to move compute into storage, where the data resides, saves host CPU and memory from the traditional round robin (data from storage into memory, analyze, dump, repeat) data management. Instead the host CPU simply has to aggregate the results from all the parallel paths.
By eliminating most data movement, computational storage and in-situ processing remove a major bottleneck that has prevented more advanced IoT applications from taking off. This method ensures that the data gathered by these platforms can deliver on its promise of improving analytics and enabling important new use cases.
Imagine a commercial jet that can determine in seconds rather than hours what its maintenance needs are as it sits outside the gate before its next takeoff. Another great example of an edge implementation is object tracking in surveillance. Consider a remote camera platform that can analyze and track a single person in a stadium in real time by running the AI-based search algorithm while the data is being stored on cameras. No need to ‘look back’ over the data. We can even take this to the Autonomous “anything,” in which telemetry, statistics and use parameters are all stored locally to the machine and only the truly valuable bits are sent “over the air” via 5G, saving bandwidth and allowing for faster aggregation of data from all the inputs. The next generation of advanced IoT will rely on an intelligent edge infrastructure that’s powered by computational storage.
Scott Shadley is principal technologist, NGD Systems. | <urn:uuid:3d3c2a5a-f6ad-49da-9028-c8cf40412249> | CC-MAIN-2022-40 | https://ngdsystems.com/nvme-computational-storage-edge-intelligence-for-next-gen-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00271.warc.gz | en | 0.918636 | 1,020 | 2.6875 | 3 |
Since the first successful phishing attack, we have trained our email users to read every URL before clicking.
Microsoft’s Advanced Threat Protection (ATP) included a feature called Safe Links that worked against this. Previously, Safe Links obscured the original URL with a rewritten link, belying decades of user education efforts by hiding the visual clues end-users need to identify phishing and other exploits.
A year ago, Microsoft improved Safe Links by adding Native Link Rendering in the Office Web Application (OWA). Now, end users can read the original link. This allows them to make a more informed decision to click. But, is this Safe Links update safe?
What Is Safe Links in Office 365 Advanced Threat Protection?
When someone clicks on a URL in an email, Safe Links immediately checks the URL to see if it malicious or safe before rendering the webpage in the user’s browser.
Safe Links checks if that destination domain is not on either Microsoft's Block List or a custom Block List created by the organization. If the URL leads to an attachment, the attachment will be scanned by Microsoft for malware.
If the URL is identified as insecure, the user is taken to a page displaying a warning message asking them if they wish to continue to the unsafe destination.
What Was Wrong with Safe Links?
Formerly, Safe Links replaced the URLs in an incoming email with URLs (*.outlook.com) that allow Microsoft to scan the original link for anything suspicious and redirect the user only after it is cleared.
For example, an email containing a link to www.avanan.com, was replaced with: na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fwww.avanan.com
Safe Links made it impossible for the end user to know where the link was going. The link is rewritten as an extremely dense redirect, making it difficult to parse.
Here's an example from real life—look at the two links below and attempt to discern which leads to the real UPS site and which is from a fake phishing attack.
The second link points to a malicious site at webtracking.email.
Additionally, end users were more likely to login to fake Office 365 pages if the domain reads outlook.com. Diligent users who checked where the link led to would see a URL in "*.outlook.com", a Microsoft registered domain name. End users are more likely to enter their credentials into a page that appears to be hosted on a known Microsoft domain.
How Did Microsoft Update Safe Links in Office 365?
Previously, SafeLinks cluttered email appearance with rewritten URLs that were illegible. Customers also argued that it is easier to recognize an original bad link than deal with the aftermath of a failed SafeLink.
With this landmark update, the end user can now see the original URL in a window when they hover over the hyperlink. The rewritten URL only appears at the bottom, confirming that Microsoft has still wrapped the link in the back end for analysis.
Enhancing the SafeLinks experience with Native Link Rendering supports efforts to educate end users, and improves overall security posture by giving individuals more information to make decisions.
Why Microsoft Safe Links Are Still Unsafe
Although Safe Links is a seemingly logical method of combating phishing, it has major shortcomings that end up making your email less secured from phishing attacks.
1. Safe Links Still Rewrites URLs in Outlook Clients
Native Link Rendering is unavailable in the Outlook client, which is installed on desktop and mobile devices. This update only runs in Outlook on the Web (OWA) For the large number of organizations using both OWA and the Outlook client, this might cause some confusion among end users.
2. Safe Links Does Not Dynamically Scan URLs
Safe Links does not offer dynamic URL scanning to evaluate the link for threats on a case-by-case basis. At time-of-click, Safe Links only verifies if the URL is on known Block Lists of malicious sites. This means that ATP struggles to detect zero-day, unknown URLs.
3. Safe Links Can't Act on Detections Across Mailboxes
When Safe Links identifies a malicious URL, it does not generate an alert to notify admin of instances of the same link in other user mailboxes. In order to purge malicious URLs from a phishing campaign affecting the organization, admin must run a query and remove the threats via PowerShell.
4. Safe Links Bypassed with IP Traffic Misdirection
As mentioned above, Microsoft follows links to determine their risk before allowing the user to navigate to them.
Microsoft follows the Safe Links from special IP addresses that are easily distinguished from end user requests. The hackers created and shared their own Microsoft IP's Block List with those IP addresses here.
So, when the request is coming from a Microsoft IP, it is redirected to a benign page and Microsoft's ATP clears it. But then it redirects the user straight to the malicious URL.
5. Safe Links Bypassed Using Obfuscated URLs
Another weakness of the Safe Links scan is that it doesn’t apply Safe Links to domains that are whitelisted by Microsoft. Popular sites like Google.com are given a pass.
This might sound reasonable, but it opens the door for another common trick named "Open Redirect". For example, this link will not be changed by Office 365 Safe Link since Google search is whitelisted.
Google will also not check this link for malicious content — they never claim to — and the end-user will be redirected to the malicious site.
Here's a recent phishing attack that used this trick: SiteCloak: Hackers Take Phish Obfuscation to the Next Level.
And here's another example, with the TattleToken Script:
Safe Links Is Safer, But It's Not Your Savior
When Safe Links used to rewrite URLs, it created a false sense of security that misled users, and undermined efforts to encourage people to inspect URLs for misspellings or other suspicious indicators. Now that Safe Links leverages Native Link Rendering to preserve the original URL for the end user, Safe Links deserves the name. However, there are still some obscure workarounds that hackers can employ to interfere with the protection available in Microsoft ATP. | <urn:uuid:2dbb15cd-8441-4ae4-86f7-84f5417a6575> | CC-MAIN-2022-40 | https://www.avanan.com/blog/microsoft-atp-safe-links | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00271.warc.gz | en | 0.900645 | 1,321 | 2.515625 | 3 |
Spammers are known to use a variety of methods to gain access to an organization’s email addresses. Directory harvest attacks are one common approach, and you are especially susceptible if your organization uses a standard, predictable format for internal emails. So, what exactly is a directory harvest attack, and how can you prevent them?
What is a Directory Harvest Attack?
A directory harvest attack (DHA) is a common method spammers use to collect email addresses without the knowledge of the users of those email addresses. Spammers typically use this technique to flood the Exchange Servers of organizations with unwanted emails. They send out bulk emails to mailboxes that may or may not exist in order to discover valid or existent email addresses at a domain.
How is a Directory Harvest Attack Carried Out?
Directory Harvest Attacks are conducted by spammers who know that you are likely to have certain common names among employees at your organization. For example, chances are there are employees named John, Sarah, Peter, and so forth in companies with high numbers of staff. The spammers compile a list of possible names in a company and attach them to a known domain name. For example, if the spammer aims to harvest email addresses from example.com and the above names are in their list, they will send spam emails to email@example.com, firstname.lastname@example.org, and so forth.
They also use different permutations of common names. For example, if the target user is John Smith, they can send spam emails to email@example.com, firstname.lastname@example.org, or email@example.com. Basically, they try any possible combination, and with the help of email-generating programs, they can produce different permutations of any name they can think of. Moreover, they also send spam emails to email addresses common in most companies, such as firstname.lastname@example.org, email@example.com, and firstname.lastname@example.org.
Figuring Out Which Email Addresses Are Valid
Once spammers send a directory harvest attack, they rely on the responses from the server to purge invalid email addresses from their list. This includes those email addresses not delivered for any reason, and those that returned verbiage indicating the email address does not exist. Finally, they end up with a list of valid email addresses that they can use to attain their goals.
To determine if an email is valid or not, spammers use non-delivery reports and recipient filtering during the early phase of SMTP conversation. With these two approaches, it becomes easier for them to end up with a list containing valid email addresses.
Why Businesses Should Be Concerned About Directory Harvest Attack Prevention
Countering directory harvest attacks needs to be a top priority to avoid inconveniences that can significantly impact your day-to-day operations. Here are some of the reasons why you need to focus on directory harvest attack prevention:
The Potential to Miss Important Emails
When your mailboxes are flooded with garbage, there is a high possibility you may miss very crucial emails. Spammers may send spam often, sometimes every day, which will make it difficult to go through all your emails to identify the legitimate messages.
The Bandwidth Needed to Clean Them Up
Spammers send spam in bulk. These are not the usual emails you receive every day. If they find their way to your mailboxes, you may need a couple of hours to delete them. Keep in mind that you can’t just select and delete en masse, as some resemble genuine email addresses.
4 Steps to Directory Harvest Attack Prevention
The good news is there are steps you can take to prevent directory harvest attacks.
1. Use Atypical Address Formats
Using standard email formats makes it easier for spammers to succeed in sending DHAs. Counter this by using atypical formats where spammers can’t easily decipher a combination of characters. For example, you can include the year an employee joined your company in their email address. So, if John Smith was hired in 2019, you might have something like email@example.com.
2. Send False Non-Delivery Reports (NDRs)
The goal behind sending NDRs is to make spammers believe the email address doesn’t exist so they will stop sending spam. You need an anti-spam application to achieve this. Anti-spam software uses keywords to filter out emails that look like spam. While this method is effective, it is good to note that it uses a lot of resources.
3. Disable NDRs
The other option is to disable NDRs. This is a good option if you are not ready to invest in anti-spam, but you need to be very cautious when implementing it. First, when you disable NDRs, anyone sending genuine emails to you won’t know if the emails were delivered even when the email address was incorrect. So, even when a delivery fails, senders might end up thinking you are ignoring them, which is not the case.
The other reason you need to be cautious is that when no NDRs are generated, spammers may automatically assume your email address exists and proceed to send more spam. Keep in mind spammers rely on NDRs to prepare a reliable list of valid email addresses. So, by disabling your NDRs, you may be making the work of spammers easier.
4. Disable Delivery Receipts
Disabling delivery receipts can go a long way in helping you save on bandwidth and other resources. However, if you choose to implement this approach, it is important to note that legitimate senders will not receive delivery receipts and may think their messages weren’t delivered.
Minimize the Risk of Directory Harvest Attacks
While not the most malicious form of cyber attack, directory harvest attacks can still seriously hinder the day-to-day performance of your business. Thankfully, there are various steps you can take to counter the inconvenience of DHAs. At Electric, we have a team of experts ready to guide your company on cybersecurity issues such as how to prevent DHAs. For more information, book a meeting to speak with one of our IT specialists. | <urn:uuid:b690db35-6d63-4e03-aaf8-bcdd16e949d3> | CC-MAIN-2022-40 | https://www.electric.ai/blog/directory-harvest-attack-prevention-4-ways-to-protect-your-company | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00271.warc.gz | en | 0.935467 | 1,273 | 2.65625 | 3 |
Cybersecurity experts strive to enhance the security and privacy of computer systems. Quietly observing threat actors in action can help them understand what they have to defend against. A honeypot is one such tool that enables security professionals to catch bad actors in the act and gather data on their techniques. Ultimately, this information allows them to learn and improve security measures against future attacks.
Definition of a honeypot
What does “honeypot” mean in cybersecurity? In layman’s terms, a honeypot is a computer system intended as bait for cyberattacks. | <urn:uuid:103c96a4-b1b0-4f13-a4be-dc82d9acd959> | CC-MAIN-2022-40 | https://dataprotectioncenter.com/hacker/what-is-a-honeypot-how-they-are-used-in-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00271.warc.gz | en | 0.942915 | 116 | 3.15625 | 3 |
History is a subject that possesses the potentialities of both a science and an art. It does the inquiry after truth. History is a science and is on a scientific basis. Also, it is based on the narrative account of the past; thus it is an art or a piece of literature. Physical and natural sciences are impersonal, impartial, and capable of experimentation. Whereas, absolute impartiality is impossible in history because the historian is a narrator, and he looks at the past from a certain point of view. History cannot remain at the level of knowledge only. History is a social science and art. In that lie, its flexibility, its variety, and excitement.
Let’s discuss a few major Historical events in Today’s History.
1609: Henry Hudson discovers the Hudson River and Manhattan Island, where New York City now stands.
Henry Hudson was an English explorer of the 1500s. He was the first European to sail up what is now known as the Hudson River, New York. In 1607 he was hired by the English Muscovy Company to lead an expedition from England to discover a northeastern sea passage to Asia and the spice islands of the South Pacific. Making his way as far as Greenland and Spitzbergen, he found his route was blocked by ice. He attempted a second voyage a year later, sailing farther to the east along the northern coast of Norway, but was again blocked by ice.
In 1609, he was hired by another company, the Dutch East India Company, to attempt yet another voyage to find a northeastern passage. After being thwarted by ice again at Spitzbergen, Hudson sailed in the opposite direction, to North America. He explored along the coast of Nova Scotia and down to what is now New York Harbor, sailing up the Hudson River on 11 September 1609 as far north as the site where Albany now stands.
Because Hudson had been hired by the Dutch East India Company, the Dutch later claimed the area and established a colony, naming it New Amsterdam. Peter Minuit of the Dutch West Indies Company bought the island in the year 1626 from the Manhattan Indians for $24 worth of merchandise. However, it was renamed New York when the English took control in 1664.
1863: Bushranger Captain Thunderbolt escapes from the supposedly escape-proof Cockatoo Island Jail.
Bushranger Captain Thunderbolt was born Frederick Ward at Wilberforce near Windsor, NSW, in the year 1836. Being an excellent horseman, his specialty was horse stealing. For this, he was sentenced in the year 1856 for ten years on Cockatoo Island in Sydney Harbour. In July 1860, Ward was released on a ticket-of-leave to work on a farm at Mudgee. While he was on ticket-of-leave, he returned to horse-stealing and once again sentenced to Cockatoo Island. Conditions in the jail were harsh, and he endured solitary confinement several times. On the night of 11th September 1863, he and another inmate escaped from the supposedly escape-proof prison by swimming to the mainland.
After his escape, Ward embarked on a life of bushranging, under the name of Captain Thunderbolt. Much of his bushranging was done around the small NSW country town of Uralla. A rock originally known as “Split Rock” became known as “Thunderbolt’s Rock”. After a six-year reign as a “gentleman bushranger”, Thunderbolt was shot dead by Constable Alexander Walker in May in the year 1870. | <urn:uuid:47771d52-f8fc-4b65-9b76-1d438c6a9ed6> | CC-MAIN-2022-40 | https://areflect.com/2020/09/11/today-in-history-september-11/?amp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00271.warc.gz | en | 0.976554 | 747 | 3.203125 | 3 |
Talk about mind-boggling. This week the United States launched a cyberattack on Iran. Now there’s a report that the computer systems used at the Department of Homeland Security and other federal agencies are frighteningly out of date and open to data breaches.
The information comes from a report released this week by the Permanent Subcommittee on Investigations of the Senate Homeland Security Committee. The report reviewed 10 years of inspector general reports.
So, no matter how well you protect your personal information, the report found that several government agencies responsible for shielding millions of Americans' personal data don't even have the basic tools to defend their computer systems from a cyberattack. Even more worrisome, the department tasked with our nation's security is in the same boat.
Windows XP and security patches
The report accuses eight agencies, including the Department of Homeland Security, the State Department, the Social Security Administration and the Department of Education, of failing to take even the most rudimentary steps to protect themselves from malicious hackers.
The report found that the agencies were using outdated systems, including one that was almost 50 years old, failing to apply mandatory security patches and neglecting to keep track of hardware and software. For example, Homeland Security still uses Windows XP and Windows Server 2003 on many of its systems. Four years ago, Komando.com was sounding the alarm about the federal government’s continued use of Windows XP.
Microsoft hasn’t provided support for XP since 2014 and Server 2003 since 2015.
The Department of Education hasn’t been able to stop unauthorized devices from connecting to its network since 2011. According to the report, the department announced last year that it had managed to limit this unauthorized access to 90 seconds.
For hackers, however, 90 seconds is more than enough time to, as the report states, “launch an attack or gain intermittent access to internal network resources,” which include the personal data of millions of Americans. Don’t forget that the agency stores sensitive financial data from students and their parents applying for college loans.
Perhaps the most head-spinning information found in the report comes from the Social Security Administration. The agency that stores retirement and disability information on tens of millions of Americans uses a system that relies on a programming language developed in the 1950s.
The number of people at the agency who know the language is dwindling rapidly.
At the Transportation Department, the report found that a system tasked with cataloging hazardous material data had, until last month, been in use for 48 years. It was replaced because almost no one knew how to operate it.
Cyber attacks and changes
According to the report, the number of cyber incidents reported by federal agencies went from 5,500 in 2006 to an astounding 77,000 in 2015. Reported incidents dropped by 56% in 2017. But the Senate report states that the drop is due to rule changes allowing agencies to report fewer kinds of attacks, including hostile network scans.
“The federal government remains unprepared to confront the dynamic cyber threats of today,” the report stated. Solving the problem will take making sweeping changes to the government’s cybersecurity infrastructure. The report recommends new budgeting procedures that address the most critical threats and making cybersecurity expertise a priority in hiring.
Although the federal government doesn’t seem to have a handle on its own cybersecurity, that doesn’t mean you can’t defend your private information from ransomware attacks, viruses and data breaches.
Here are some ways to stay protected:
- Do not follow web links in unsolicited email messages; they could be part of a phishing attack. If you need to contact a business or website, type the web address directly into your browser to avoid a spoofed site.
- Set up two-factor authentication when available. That means in order to log in to your account, you need two ways to prove you are who you say you are.
- Use unique passwords instead of the same one over multiple websites. If your credentials are stolen from one site, it’s easy for the cybercriminal to get into other accounts.
- Back up your critical files and store them offline so ransomware and other viruses won’t capture those files as well. | <urn:uuid:d9f5d93f-be1e-43a6-ac1e-fdb350028411> | CC-MAIN-2022-40 | https://www.komando.com/security-privacy/federal-governments-computer-systems-are-out-of-date-and-waiting-to-be-breached/576552/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00271.warc.gz | en | 0.943172 | 874 | 2.640625 | 3 |
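Two-factor authentication, mentioned above, usually relies on one-time codes. As a rough illustration of how an authenticator app derives those codes, here is a minimal HOTP (RFC 4226) sketch in Python; the secret shown is the RFC's published test key, not a real credential.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: the algorithm behind the one-time codes used
    as a second factor by most authenticator apps."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret; TOTP apps derive the counter from the current time.
print(hotp(b"12345678901234567890", 0))  # 755224
```

In TOTP, the counter is simply the current Unix time divided into 30-second steps, so the server and your phone agree on the code without ever transmitting it.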
Prevent Social Engineering
Social engineering is often called the art of deception. It is, simply, capturing information by deceiving or manipulating the target person. There are several ways to do this. For example, a fake message sent to someone in order to hack their email is a simple form of social engineering, as is an email or SMS claiming "You have won $500," which is designed to deceive you.
News about celebrities, for instance, attracts more attention than other news. It appeals to wider masses, mostly fans and followers, and attracts media coverage. Media organizations often rely on sensational and exaggerated news to attract public attention: the more incredible the title, the more readers gather to read the story. When someone clicks on news links with intriguing titles that promise incredible and scandalous disclosures, these links often lead to specially designed malicious sites that exploit fake news about the celebrity. Like many other scams, these sites contain malware, or victims are directed to survey or advertising sites.
Millions of people surf social networking sites every day, so it is not surprising that social media phishing, a form of phishing that exploits specific features of social media platforms, has become widespread. Attacks can abuse new social media features or applications installed on your systems to take over your account, steal personal information, or direct you to malicious pages. In these cases, be careful about links that ask you to download a feature or application.
Trust is a great source of motivation. That’s why social engineers sometimes use a language that creates a sense of trust to lead you to fulfill their request, such as to give away your personal information or money. You will not see anything suspicious, as messages (through email, SMS, or phone calls) can seem to come from government officials or legitimate business managers in the form of urgent warnings, usually requiring immediate action on the system or financial security.
This kind of message creates a sense of obligation: you feel you must do what the officials demand in order to escape the menacing situation. Keep in mind, however, that a legitimate official never demands personal information over the phone or via email. Therefore, no matter how frightening their tactics are, they cannot do much damage unless you surrender. Be careful with scary email subjects and contents that demand you do something or face awful consequences.
New Year's Day and other holidays are celebrated by many people around the world and will always be a favorite bait of social engineers. During holidays you may see suspicious spam and social media posts promoting incredible offers. The links in them lead not to free products or great discounts but to websites that host malware. Keep in mind that online offers that seem too good to be true are, in all likelihood, fake.
Social engineers, using spoofing software, can call from any phone number. For example, they can make a call appear to come from a bank you know, or even imitate services such as 911. Social engineers exploit individuals' trusting tendencies and easily trick people into giving away their money. The most reasonable defense to keep in mind is that official authorities will never call to request a password or credit card information.
As we explained, social engineers use various tools and techniques to manipulate target individuals. Phishing, vishing, smishing, and the other kinds of attack tools are grouped in various forms, and social engineers use different scenarios and new attack techniques day by day, because this method provides great financial gain and higher success rates.
Phishing attacks are the most common and most dangerous security problems that both people and companies encounter during the course of information security.
Measures can be taken against different kinds of social engineering attacks in light of the following suggestions:
- Phishing attacks do not just happen by email! Cybercriminals can initiate phishing attacks via phone calls, text messages, or other online applications. If you do not know the sender or the caller, or if the message content seems too good to be true, this is probably a social engineering scheme.
- Be aware of the signs. If an email contains spelling or grammar mistakes, an urgent request, or an offer that looks too good to be true, delete the message immediately.
- Confirm the sender. Do the necessary checks to make sure the sender's email address is legitimate. If you receive a call from a supposedly legitimate enterprise demanding personal information, hang up and contact the organization directly yourself to verify the call.
- Do not be fooled by message content that seems real! Phishing emails often have convincing logos, real links, legitimate phone numbers, and email signatures of real employees. But if the message urges you to act (especially actions such as sending sensitive information, clicking on a link, or downloading a response), be careful and look for other signs of phishing. Do not hesitate to communicate directly with the company the message comes from because these companies can verify the authenticity of the message and at the same time they may not even be aware that their company name is being used for fraud.
- Never share your passwords. Your passwords are the key to your identity, your data, and even to your friends and colleagues. Never share your password with anybody. Corporations and company IT departments you work with never demand your password from you.
- Avoid opening links and attachments from unknown senders. Avoid clicking unauthenticated email links or attachments; suspicious links can carry ransomware (such as CryptoLocker) or a Trojan. Get into the habit of typing URLs into your browser. Do not open attachments unless you are expecting a file. If a suspicious message arrives, call the sender and verify the email.
- Do not talk to strangers: If you receive a call from someone you do not know, and you are asked to provide information, turn off your phone and notify the authorities.
- Watch out for abandoned flash drives. Cybercriminals drop flash drives as bait so that whoever finds one installs harmful software without knowing it. If you find a derelict flash drive, do not plug it into a computer, even to find its real owner, because it could be a trap.
- Delete the suspicious email. Incoming messages from unverified sources that are difficult to verify are likely to be malicious. If you are in doubt, conduct activities such as reaching the alleged source by telephone or communicating using a known and generic email address to verify the authenticity of the message.
- Use email filtering options when possible. Email or spam filtering can prevent a malicious message from reaching your inbox.
- Install and update antivirus software. Scan your operating system with the latest antivirus software to take the necessary measures against malicious software.
- Update all devices, software, and add-ons regularly. To reduce risks to your computer, check your operating system, software, and plug-in updates frequently, or set up automatic updates if possible.
- Back up your files. Frequently back up files on your computer, laptop, or mobile device so that you can easily restore your files when your files are compromised by malicious software. This way, you will not have to give a ransom to a cybercriminal who locks your files and asks for money to open them.
- Train your employees. More than 90% of system breaches have been caused by a phishing attack. Therefore, training employees on cybersecurity best practices is the most effective way to prevent phishing attacks.
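To make the "confirm the sender" advice concrete, here is a rough Python sketch of one automated check a mail filter might run. The allowlisted domain is a made-up placeholder, and real filters combine many more signals (SPF, DKIM, sender reputation), so treat this as an illustration only.

```python
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-bank.com"}  # hypothetical allowlist for this sketch

def looks_suspicious(from_header: str) -> bool:
    """Flag a message whose sender address is not on the allowlist.
    Display names are trivial to fake, so only the domain is checked."""
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in TRUSTED_DOMAINS

# A look-alike domain (the digit 1 instead of the letter l) is flagged:
print(looks_suspicious('"Example Bank" <alerts@examp1e-bank.com>'))  # True
print(looks_suspicious("Support <help@example-bank.com>"))           # False
```

The same domain comparison is why the advice above says to type URLs yourself: a convincing display name or link text tells you nothing about where the message actually came from.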
Best Social Engineering Books to Read
Kevin Mitnick recommends my book:
You can order the book from Amazon or any other book retailer of your choice.
The American presidential election debates have made political correctness an issue.
Still, I have never liked the term "political correctness." It implies that whatever you are saying is not really correct, and that you are only saying it to look good. Donald Trump says we should stop being politically correct, but unfortunately he means we should start saying all those things that are not politically correct. What we need to understand is that being respectful is just plain correct.
We used to have mandatory classes called "sensitivity training." Perhaps many offices still do. The classes are based on the presumption that if people understand why certain comments are hurtful, they won't make them anymore. I can't say whether it actually changed anybody's thinking, but it did make clear that they should not speak their disrespectful thoughts, and that those of us who find them offensive should speak up and respectfully challenge misconceptions.
In elections it is important to be respectful of everyone’s voice and ensure they all get a fair chance. The Democrats seemed to think that they could say one thing in public and then have different discussions behind their firewall.
I sincerely hope that as a result of the election leak people will become more security conscious and invest in proper precautions but not so they can have disrespectful conversations. There is information that must remain secure for safety and privacy reasons, and it is a fact of our connected world that someone will be trying to get to that data. As Information Technology leaders we must make them understand that only very extreme measures can come close to keeping information entirely secure. It has become harder and harder to find a space where things can stay hidden.
In this sense, as we get more connected the world has become more like a small town. Our mothers always used to say “if you can’t say anything nice, don’t say anything at all.” This was just a fact of life in a small town.
Now that our technology has connected everyone, we need to work out how we can give each other space and still keep our information factual. My previous blog discusses my concerns about these information leaks; I consider it a case of "two wrongs don't make a right" when we encourage the hackers who do this. I also do not want people to have to use false names on their accounts just to keep their privacy. But I don't want to make space for people to continue to be comfortable being racist, chauvinistic or bigoted.
As IT professionals, we want to be ethical and truthful, but we don’t have to say things that are offensive to others. And maybe I’ve stepped over that line with some of what I’ve said about our American neighbours.
Being Canadian, I’ll just apologise for that now. | <urn:uuid:84b94c0e-825e-4274-b370-011c1653ee74> | CC-MAIN-2022-40 | https://www.itworldcanada.com/blog/learning-about-being-politically-correct-from-the-u-s-election/385369 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00271.warc.gz | en | 0.977442 | 587 | 2.765625 | 3 |
Digital Signatures are the Cybersecurity Vulnerability You Need to Stop Ignoring
Enterprises' employees and third parties exchange an abundance of confidential documents, both internally and externally, through a variety of means, including email and file-sharing platforms. These employees and third parties are often advised to secure these documents by attaching a digital signature. A digital signature is a trace attached by the author of a document to attest to its authenticity: the document is hashed, the hash is encrypted with the signer's private key, and the receiving party decrypts it with the signer's public key to verify it.
How digital signatures work
Let's quickly cover how digital signatures actually work. The encryption processes behind digital signatures fall into two categories: asymmetric (public-key) and symmetric encryption. The first uses different keys for encryption and decryption, while the second uses the same key for both. Since it is the most widespread, we are going to break down the asymmetric system. The process can be divided into two steps: signing and verification.
When a file is signed, the software applies a hash function to create a hash of the initial data. This hash, in turn, is encrypted with the signer's private key. After the encryption, the signer attaches the developer's certificate to affirm the authenticity of the signature. The signed document therefore contains both the signature of its creator and that of the software (or certification authority).
It is important to note that certificates are distributed to software developers by a certification authority. An authority may verify only the developer's possession of a domain name, without necessarily making sure that the developer is a trusted source.
Upon receipt of a document, your software splits the signed data into two blocks: the raw data and the signature itself. Both parameters are then processed: the raw data is hashed again, and the signature is decrypted using the signer's public key.
As a result, two types of hashes are produced (see image below). If these hashes end up being equal, the signature becomes verified and thus considered valid.
Source: Center for Advanced Studies, Research and Development in Sardinia (CRS4)
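To make the sign/verify round trip concrete, here is a toy Python sketch. The RSA numbers are deliberately tiny illustrative parameters, not anything a real certification authority would use; production systems use 2048-bit-plus keys with padded signature schemes. The point is only the flow: encrypt the hash with the private key, decrypt with the public key, compare the two hashes.

```python
import hashlib

# Toy RSA keypair with tiny primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q    # modulus: 3233
e = 17       # public exponent
d = 413      # private exponent: (e * d) mod lcm(p-1, q-1) == 1

def toy_hash(data: bytes) -> int:
    # Squeeze the SHA-256 digest into the toy modulus range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Signing: "encrypt" the hash with the private key.
    return pow(toy_hash(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Verification: "decrypt" the signature with the public key,
    # re-hash the raw data, and compare the two hashes.
    return pow(signature, e, n) == toy_hash(data)

document = b"contract.pdf contents"
signature = sign(document)
print(verify(document, signature))  # True: the two hashes match
```

Tampering with the document changes its hash, so verification of altered data fails (barring a hash collision, which this toy modulus makes far more likely than a real one would).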
Digital signature cybersecurity vulnerabilities
Data integrity is the main purpose for digital signatures. They make it possible for users to ensure the safety and authenticity of the data that they are dealing with. Moreover, they make sure that any request sender can be verified in order to avoid sending information to an untrusted party. Their final goal is to ensure that any party to a communication can be held liable for accepting the authenticity of the signature they apply on any document.
However, digital signatures introduce several security vulnerabilities. One classic strategy employed by threat actors involves stealing trusted private keys in order to sign fake documents and make them appear trustworthy. The methods range from simple theft through network infiltration to exhaustive-search attacks that try different key combinations until the correct one is guessed.
Taking Advantage of Vulnerabilities
A less conventional method to subvert digital signatures is to exploit vulnerabilities that occur during their execution. When verifying a signed file, algorithms tend to overlook the header storage size. This leaves extra space for software developers to add links to updates and new content without having to sign the file again. However, this storage space (of roughly 8 bits) can be exploited by hackers to plant extra data that can be dangerous to the user without changing the outcome of the signature check. Algorithms could therefore end up executing dangerous content.
Although hackers can use illegal methods in order to exploit digital certificates, there is a gray area that allows them to remain in the realm of legality while being malicious. This can be demonstrated through the process of attribution of digital certificates by certification authorities. In fact, most of these authorities attribute a certificate to organisations on the basis that they have a domain name, along with other minor criteria.
This means that not all certificates can be trusted, which explains a surge in untrusted certificate attribution in the last decade. In addition to this, certification authorities tend to be relatively slow on certificate revoking processes. To put it succinctly, it is possible that people open files that are signed with untrusted certificates and that could potentially infect their systems.
Verisign, an internet infrastructure company, was the target of a serious attack involving the signature-faking malware Troj/BHO-QP (Browser Helper Object). The malware was disguised as a Flash Player extension from Microsoft, installed alongside the game automation software QQ. It was used to install a fake "VeriSign Class 3 Code Signing 2009 CA" trusted root certificate, which allows Troj/BHO to avoid being flagged as "not verified."
This malware can pose several types of threats, from phishing and adware up to collecting data through installing undesirable extensions (web browsers being easy access installers). Although the attack was complex, the hackers actually overlooked several details. A closer look from an individual with basic cybersecurity knowledge would notice that the nomenclature on the rogue certificate is filled with mistakes. However, the backdoor installation of malware could not have been easily spotted. | <urn:uuid:0b8cbf74-7a43-4ef6-8548-689304105b2a> | CC-MAIN-2022-40 | https://cybelangel.com/digital-signatures-are-the-cybersecurity-vulnerability-you-need-to-stop-ignoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00471.warc.gz | en | 0.935995 | 1,083 | 3.671875 | 4 |
“If I had asked people what they wanted, they would have said faster horses” – Henry Ford.
In about 13 years (the target set by NASA), or perhaps 7 (as set by Elon Musk's SpaceX), we humans are going to set foot on Mars and become a truly spacefaring race.
We live in pretty exciting times, on the threshold of continuous imagination empowering continuous innovation. Every product in every domain is undergoing sea change: adding new features, releasing them faster than competitors, adapting to an ever-increasing rate of technological substitution. But most of these feature improvements and product launches are not guided by new requirements from customers. In the face of competition that gets stiffer by the day, evolution and adaptation is the only natural process of survival and winning. As Charles Darwin would put it, "survival of the fittest."
But as history suggests, there are and always will be skeptics among us who doubt every action that deviates from convention: doubting climate change, the need to explore the unexplored, the need to change.
“The path of sound credence is through the thick forest of skepticism” – George Jean Nathan.
A human mind exposed to scientific education exhibits skepticism and pragmatism over dogmatism and remains largely technology agnostic. It validates everything with knowledge and reasoning before accepting new ideas. But human progress has always come from philosophical insights and imaginings that led to the discovery or invention of new things. Technological progress has only turned science fiction (read: philosophy) into scientific fact.
“The future is already here – it is just not evenly distributed yet” – William Gibson.
With the above premises in mind, in this article we intend to explore the realm of IOT, its implications on our lives and our own limitation in foreseeing the imminent future as Companies and Customers.
We understand the Internet, but what is the IoT (Internet of Things)?
IoT, the Internet of Things (or Objects), denotes the entire network of Internet-connected devices: vehicles, home and office appliances, and machinery embedded with electronics, software, sensors, and actuators, together with the wired, Wi-Fi, and RFID connectivity that enables these objects to connect and exchange data. The benefits of this new ubiquitous connectivity will be reaped by everyone, and for the first time we will be able to hear and feel the heartbeat of the Earth.
Source: The Economist, 2010.
For example, as cows, pigs, water pipes, people, and even shoes, trees, and animals become connected to IoT, farmers will have greater control over diseases affecting milk and meat production through availability of real time data and analytics. It is estimated on an average each connected cow will generate around 200 MB of data every month.
According to Cisco, back in 2003 the penetration of the internet and of connected devices per person was really low, but it grew at an exponential rate, doubling every 5.32 years, much like Moore's Law. Between 2008 and 2009, with the advent of the smartphone, these figures rocketed, and it was predicted that 50 billion connected devices would be in use by 2020. Thus IoT was born, and it is already in its adolescent phase.
Source: Cisco IBSG, April 2011
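Cisco's doubling-period observation can be expressed as a one-line exponential model. This is only a sketch, and the baseline figure below is illustrative rather than one of Cisco's actual data points.

```python
def projected_devices(baseline: float, years_elapsed: float,
                      doubling_period: float = 5.32) -> float:
    """Exponential growth in the style of Moore's Law:
    the device count doubles every `doubling_period` years."""
    return baseline * 2 ** (years_elapsed / doubling_period)

# Illustrative: whatever the count is today, one doubling period later it has doubled.
print(projected_devices(12.5e9, 5.32) / 1e9)  # 25.0 (billion)
```

Compounding is what makes the 50-billion prediction plausible: a few doubling periods turn a modest baseline into tens of billions of devices.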
Today, IoT is well under way, as initiatives such as Cisco's Planetary Skin, the smart grid, intelligent vehicles, HP's Central Nervous System for the Earth (CeNSE), and smart dust have the potential to add millions, even billions, of sensors to the Internet [1].
But just as during the social media explosion, the new age of IoT (connected devices, connected machines, connected cars, connected patients, connected consumers, a connected network of things) will require new collaboration tools, new software, and new database technologies and infrastructure to accommodate, store, and analyze the huge amounts of data that will be generated, drawing on a host of emerging technologies such as graph databases, big data, and microservices.
IoT, the Internet of Things, will also require IoE, the Integration of Everything, for meaningful interaction between devices, and this provides a huge opportunity for middleware and integration technology providers like Kovair.
And as Kai Wähner of TIBCO argues in his presentation "Microservices: Death of the Enterprise Service Bus" [2], microservices and API-led connectivity are ideally matched to meet integration challenges in the foreseeable future. MuleSoft's Anypoint Platform for APIs, backed by Cisco, Bosch's IoT platform, and the upcoming API management suite from Kovair all point to this and will empower the IoT revolution.
The explosion of connected devices, each requiring its own IP address, exhausted the IPv4 address space in 2010 and makes IPv6 implementation urgent; IPv6 will also suffice for interplanetary communication for a much longer period. Governments and the World Wide Web Consortium have remained laggards, skeptical of IPv6 implementation, and allowed the exhaustion of IP addresses.
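The gap between the exhausted IPv4 space and IPv6 is easy to quantify with Python's standard ipaddress module:

```python
import ipaddress

# Total addresses in each protocol's entire address space.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6_total = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(ipv4_total)               # 4294967296
print(ipv6_total == 2 ** 128)   # True: roughly 3.4e38 addresses
```

At 2**128 addresses, IPv6 offers enough room to give every sensor, cow tag, and spacecraft its own address many times over, which is why it is seen as the prerequisite for IoT at planetary scale.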
But not just governments: bureaucratic, technology-driven organizations like Amazon, Google, and Facebook can remain skeptics in disguise and continue to block movements like net neutrality and ZeroNet, blockchain technology, and IPFS (the InterPlanetary File System) over cumbersome HTTP, as they fear their monopolies will be challenged [3].
We humans, engaged in different capacities as company executives or consumers, government officials or technology evangelists, switch roles between skeptic and futurist, and will continue to doubt IoT even while embracing it, until we actually and truly benefit from it.
After remaining skeptical for exactly 120 years about the pioneering work done in Kolkata by the Indian physicist Prof. J.C. Bose during colonial rule, the IEEE finally recognized him and conferred on him the designation "Father of Telecommunication." The mm-wavelength frequencies he used in his 1895 experiment in Kolkata are the foundation of 5G (mobile and Wi-Fi networks), which scientists and technologists across the world are now trying to reinvent and which will provide the backbone for IoT [4].
Finally, we leave it to the reader's imagination to picture the not-so-distant future when the connected devices of the IoT begin to pass the Turing Test.
A NASA astronaut aboard the International Space Station conducted a power beaming test with a light-emitting rectifying antenna developed by the Naval Research Laboratory.
ISS crew member Jessica Meir demonstrated the potential of NRL’s LEctenna technology to transform electromagnetic waves into an electric current, NRL said Monday.
The technology primarily served as a STEM project that sought to encourage student innovation through the Department of Defense’s Space Test Program.
NRL researchers are studying the space-based solar power beaming concept as a potential clean energy source in various military and civilian systems.
The method seeks to collect and bring energy from the sun down to Earth where it can be turned into usable energy through a LEctenna-like approach.
Meir noted that NRL is exploring approaches to remotely power drones or charge mobile devices wirelessly. | <urn:uuid:5dd94d00-bd90-4396-a383-59c499df965b> | CC-MAIN-2022-40 | https://executivegov.com/2020/04/international-space-station-demos-naval-research-labs-power-beaming-concept/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00471.warc.gz | en | 0.929415 | 169 | 3.5625 | 4 |
Malek Murison and Chris Middleton report on a brace of new humanoid robotics studies that reveal just how easily human beings can be influenced by cleverly designed machines.
A series of infamous experiments in the 1960s by social psychologist Stanley Milgram suggested that the majority of people are obedient to authority figures – sometimes to an extreme.
His experiments apparently showed that all it took to coerce one person into harming another was a man in a lab coat issuing instructions: the ‘agentic state’ theory, in which human beings subsume their personal responsibility and consciences in the will of an authority figure.
In recent years Milgram’s experiments have been discredited to a degree, but they remain a fascinating study of how authority figures can either push a majority of participants into causing harm to others, or make people feel obligated to please them, depending on how one interprets the results.
But what if that same type of staged process was applied to exploring human-robot interactions? Would human beings harm an emotional robot? Or is the harm, in reality, of a different and more serious nature?
How we treat robots that have social skills
That was the question that researchers from the University of Duisburg-Essen wanted to answer.
To test empathy between humanoids and humans – and the extent to which a robot’s social skills determined how interactions would play out – they recruited 89 volunteers to sit down, separately, with a NAO machine, the toddler-sized humanoid from SoftBank Robotics.
The interactions were split into two distinct styles: social, in which the robot mimicked emotional human behaviour with some participants, and purely functional, in which it acted more like a simple machine with others.
The study, published in the journal PLOS One, explains how participants thought they were taking part in a learning exercise to test and improve the robot’s abilities. But the real purpose of the experiment centred on how the interactions – whether social or functional – ended: once the exercises had finished, scientists asked the participants to switch the robot off.
In around half of these staged interactions, the robot was programmed to object, regardless of whether it had previously behaved in an emotional or functional style. On top of pleading – with empathy-triggering statements like “I’m afraid of the dark” – it would beg, “No! Please do not switch me off!”
Out of the 89 volunteers, 43 were faced with these objections from the NAO machine. Hearing the robot plead not to be switched off, 13 refused point blank to do so, while on average, the remaining 30 took twice as long to comply with the researchers’ instructions than those who didn’t experience the pleas for mercy.
There are further observations to be taken from the study. For example, volunteers faced with a robot apparently begging for its life following a purely functional interaction hesitated the longest out of all the participants. Intriguingly, it seems, the sociable robot was easier to switch off, even when it objected.
Though unexpected, this result indicates the role of dissonance in human reactions: when a monotonous, machine-like interaction suddenly gains (apparent) sentience and/or the robot speaks in emotional terms, we take more notice.
Children easily influenced by robots
Another research study, carried out at the University of Plymouth in the UK, found that young children are significantly more likely than adults to have their actions and opinions influenced by robots.
The research compared how adults and children respond to an identical task when in the presence of both their peers and humanoid machines. It showed that while adults regularly have their opinions influenced by peers, they are largely able to resist being persuaded by robots – a finding contradicted by the German results, perhaps.
However, children aged between seven and nine were more likely to give the same responses as the robots, even if these were obviously incorrect.
Writing on the university’s website, the university’s Alan Williams explains how the study used the Asch paradigm, first developed in the 1950s, which asks people to look at a screen showing four lines and say which two match in length. When alone, people almost never make a mistake, but when doing the experiment with others, they tend to follow what others are saying (Milgram’s experiment rears its head once again).
When children were alone in the room in this research, they scored 87 percent on the test, but when the robots joined in, the children’s score dropped to 75 percent. Of the wrong answers, nearly three-quarters (74 percent) matched those of the robot.
Like the emotional robot study, the Plymouth research reveals concerns about the potential for robots to have a negative or manipulative influence on people – in this case, on vulnerable young children.
The research was led by Anna Vollmer, a postdoctoral researcher at the University of Bielefeld, and professor in Robotics Tony Belpaeme, from the University of Plymouth and Ghent University.
Professor Belpaeme said, “It shows that children can perhaps have more of an affinity with robots than adults, which does pose the question: what if robots were to suggest, for example, what products to buy, or what to think?”
The Plymouth study concludes: “A future in which autonomous social robots are used as aids for education professionals or child therapists is not distant.
“In these applications, the robot is in a position in which the information provided can significantly affect the individuals they interact with.
“A discussion is required about whether protective measures, such as a regulatory framework, should be in place that minimise the risk to children during social child-robot interaction, and what form they might take, so as not to adversely affect the promising development of the field.”
Research with one eye on the future
Studies like these confirm the findings of previous research in this space: humans are inclined to treat robots and other devices as living beings, particularly if they are able to express – or rather, mimic – sentience in some way.
And that’s significant because, moving forward, how we treat robots, and how they behave with us, will become increasingly important.
As they become more lifelike and ingrained in society in either software or hardware form, robots need to be designed in a way that makes them affable, predictable, and easy to cooperate with.
But the research findings indicate that machines can easily be programmed with behaviours that are highly manipulative in terms of human responses.
This suggests that we may need one of two things in the medium term: either a middle ground where robots are designed to be clearly distinct from people in terms of how they handle interactions – in order to avoid confusion; or acceptance from humans that, despite their apparent sentience, humanoid machines do not deserve our empathy.
In short, vulnerable people may need to be protected from manipulative machines, rather than the other way around. At least until true artificial intelligence – sentient, self-aware machines – emerge years in the future, at which point we may enter a very different age of robot rights.
There are also fears that designing lifelike robots for the sole purpose of objectification – such as those developed for sexual gratification – could normalise predatory and abusive behaviour.
Internet of Business says
The NAO (pronounced ‘Now’) robot – like its larger ‘emotion-sensing’ cousin, Pepper – presents a fascinating anomaly in humanoid robot development. Now commercially available from SoftBank, NAOs were originally designed by France’s Aldebaran Robotics as research platforms for universities and robotics labs.
Aldebaran – acquired by SoftBank five years ago – set out with the goal of creating robots that could be ‘friends’ with humans, rather than presenting a clear, practical application of humanoid robotics.
The NAO machines are small, almost childlike, amusing, speak with light, friendly voices, and are programmed with a range of expressive behaviours. They also sing, dance, and tell stories. As a result, they’re popular in education, including in specialist areas, such as teaching children who are on the autism spectrum.
However, despite their fun design, entertaining behaviour, and sophisticated engineering, they are simply computers: an Intel Atom processor, to be exact, combined with a secondary ARM 9 chip, along with a collection of servos, sensors, microphones, and cameras, all packaged in a tough plastic casing with a cartoon-like face. Everything else is software programmed by human beings.
NAO machines have no AI as most people would recognise it, and merely perform pre-programmed routines, which can either be downloaded from the SoftBank community, or created by owners using the Choregraphe application, developed by Aldebaran in 2008.
However, a recent tie-up between SoftBank and IBM means that NAO and Pepper machines can run as front ends to Watson in the cloud, which has opened up broader applications for the robots in some sectors, such as leisure and retail, when linked with industry-specific data sets.
Nevertheless, NAO machines’ much-publicised autonomy is largely limited to a mode in which they can explore their environment and cycle through other pre-programmed functions randomly.
In short, NAO robots have zero sentience or awareness of human beings; they are clever simulations of life.
As such, they can be viewed as either brilliant design and engineering achievements, or as highly manipulative, deceptive devices that encourage humans to treat machines as having feelings, where none exist. A computer programmed to make people feel they should take care of it is, in some ways, a dangerous – even sinister – concept, outside of the world of toys, at least.
The German university’s research perhaps reveals this fact more than any other.
Disclosure: Internet of Business editor Chris Middleton, author of this commentary, owns the well-known NAO robot, ‘Stanley Qubit’. He has no relationship with SoftBank Robotics. | <urn:uuid:9c922012-3bfa-4f4a-aa73-06af21e4a011> | CC-MAIN-2022-40 | https://internetofbusiness.com/begging-robot-study-social-interactions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00471.warc.gz | en | 0.955266 | 2,076 | 2.96875 | 3 |
Most of the spam we see today comes in with a variety of hooks, the most dangerous
being those looking to steal data and account credentials. Because criminals know that
the weakest link is the human, it should be no surprise that spam continues to be one of
the biggest issues facing many enterprises.
IT shops have thrown everything but the kitchen sink at the issue and, more often than
not, come up empty on long-term solutions. Lately we’re hearing a good deal about Sender
Policy Framework (SPF) as the answer to our spam woes. Is it?
Nearly all abusive e-mail messages carry fake sender addresses. The victims whose
addresses are being abused often suffer from the consequences because their reputation
gets diminished and they have to disclaim liability for the abuse, or they waste their
time sorting out misdirected bounce messages. Worse, a financial loss can be devastating
to an organization should its domain end up on a black list because of spam runs done via
a successful phish of a user account or the discovery of an open relay.
You probably have experienced one kind of abuse or another of your e-mail address in
the past – e.g., an error message saying a message allegedly sent by you could not be
delivered to the recipient, although you never sent a message to that address. SPF sets
out to solve this problem and significantly mitigate spam.
So How Does It Work?
SPF is an open standard specifying a technical method to prevent sender address forgery.
SPF works on the envelope sender address: the protocol-level identification of the
delivering party, which is usually invisible to recipients. It is mirrored in the
Return-Path header, the address to
which mail delivery errors (or bounces) are sent. For individual e-mail addresses or
small domains, it may sometimes be set to the user’s e-mail address. But for larger and
more professionally managed domains, it is usually a domain related to the mail server
that sent the message.
SPF protects the envelope sender address, which is used for the delivery of messages.
This allows the owner of a domain to specify its mail-sending policy by specifying which
mail servers are used to send mail from the domain. The technology requires two sides to
participate: The domain owner publishes this information in an SPF record in the domain’s
DNS zone, and when someone else’s mail server receives a message claiming to come from
that domain, the receiving server can check whether the message complies with the
domain’s stated policy. If the message comes from an unknown server, it can be considered a fake.
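To make the two sides concrete, here is a simplified sketch of a receiving server’s check. The domain, IP ranges, and record are hypothetical, and a real receiver would use a full SPF implementation (handling `include:`, `mx`, `a`, and other mechanisms defined by the standard) rather than this toy check, which covers only `ip4:` and `-all`.

```python
import ipaddress

# Hypothetical SPF policy for example.com. In real life this is
# published as a DNS TXT record, e.g.:
#   example.com.  IN TXT  "v=spf1 ip4:203.0.113.0/24 -all"
SPF_RECORD = "v=spf1 ip4:203.0.113.0/24 -all"

def check_spf(sender_ip: str, record: str) -> str:
    """Toy SPF check: handles only ip4: mechanisms and -all."""
    for mechanism in record.split()[1:]:      # skip the "v=spf1" version tag
        if mechanism.startswith("ip4:"):
            network = ipaddress.ip_network(mechanism[4:])
            if ipaddress.ip_address(sender_ip) in network:
                return "pass"                 # an authorized server sent it
        elif mechanism == "-all":
            return "fail"                     # nothing matched: treat as forged
    return "neutral"

print(check_spf("203.0.113.25", SPF_RECORD))  # pass  (inside the /24)
print(check_spf("198.51.100.7", SPF_RECORD))  # fail  (unknown server)
```

The point of the design is that only the domain owner can publish the record, so a receiver needs nothing but an ordinary DNS lookup to verify the sending server.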
Is This Really Going to Do Much?
According to one study by SpamTitan, many organizations think SPF alone can
effectively take care of spam. Approximately 52 percent of organizations surveyed were
not aware that SPF protects only the envelope sender, not the “From” field displayed in
the e-mail, and that SPF does not stop spammers from sending e-mails from a domain of
which they are a member. Spammers already know that sending from domains they control
gets around SPF, so right from the
start, SPF had its challenges. This means that for now, IP-based reputation systems, such
as SpamCop and SpamHaus, will be needed in addition to SPF and any additional spam
filtering/e-mail reputation solutions.
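For context on how those IP-based reputation systems are queried: a receiving server reverses the connecting IP’s octets and looks the result up as a hostname under the blocklist’s DNS zone. The sketch below only builds such a query name; the zone shown is Spamhaus’s well-known combined list, and the IP is a documentation address used for illustration.

```python
def dnsbl_query_name(sender_ip: str, zone: str) -> str:
    """Build the DNS name used for a blocklist lookup: the IPv4
    octets are reversed and the blocklist zone is appended."""
    reversed_octets = ".".join(reversed(sender_ip.split(".")))
    return f"{reversed_octets}.{zone}"

# An A-record answer in 127.0.0.0/8 for this name would mean "listed".
print(dnsbl_query_name("203.0.113.25", "zen.spamhaus.org"))
# 25.113.0.203.zen.spamhaus.org
```

Because the lookup is just DNS, mail servers can consult a reputation list on every incoming connection with very little overhead.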
Be that as it may, SPF does help some and has already become a commodity offering by
hosting companies. Typically it is offered in a bundle with antivirus and antispam. True,
it is not perfect by any means, but it is a step in the right direction of reducing the
huge percentage of spam e-mail coming to the front door. We already see CAPTCHA solutions
being added to the mix of tools used to fight spam, and in the future we look to public
key frameworks gaining wide adoption to further narrow the avenues spammers can use to
deliver their messages.
But while organizations are busy catching up on traditional communication systems such
as e-mail, spammers are already making headway into the new era of communication – mobile
devices and Web 2.0. You can be certain that if and when spam is ever effectively
mitigated on traditional communication platforms, spammers will simply focus all their
efforts on whatever communication systems don’t yet have adequate protections.
Article courtesy of Enterprise IT Planet | <urn:uuid:cc46398b-2a37-4076-ade1-bb911719ae32> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/is-spfs-spam-fighting-prowess-overestimated/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00471.warc.gz | en | 0.926786 | 945 | 2.609375 | 3 |
By thinking before printing and picking the right paper for the job, you can decrease your office’s environmental footprint. Here are four simple tips for smart paper use.
- Use both sides of the page.
It’s called duplex printing and it’s the best way to reduce paper use. Choosing copiers, digital printers, and multifunction devices that can print on both sides of the paper is always a good decision. Adding duplex as your default mode will also save even more paper.
- Be selective: Print only what you need, when you need it.
Preview your print to avoid printing drafts that will be discarded. Print on demand instead of stockpiling forms, letterhead, or instructions that will go out of date.
- Use the right paper.
There are multiple options that promote sound environmental practices. Go with papers that are certified through global organizations, like the Forest Stewardship Council or the Program for the Endorsement of Forest Certification. Both of these organizations have strict international standards for sustainable forestry.
- Collect used paper so the fiber can be used again.
Recycling the fiber saves trees, reduces energy and water use, requires fewer chemicals, and keeps paper out of landfills.
Even Xerox, one of the world’s largest suppliers of paper for office printers and copiers, is on board with this idea. “It may be a surprise that Xerox is concerned about excessive paper use. But, the hallmark of our business has always been operating in an environmentally responsible way. That means holding our suppliers to tough standards on how they make paper, improving forest management and protecting endangered forests. Through Xerox innovation, we help our customers minimize their impact on the environment while meeting their business needs.” | <urn:uuid:08c3a344-0bf9-40f4-8f6b-74a0fa818e02> | CC-MAIN-2022-40 | https://www.jdyoung.com/resource-center/posts/view/118/four-paper-tips-to-use-less-and-use-wisely-jd-young | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00471.warc.gz | en | 0.906224 | 365 | 2.78125 | 3 |
Cybersecurity is at the heart of protecting our digital economy and society. But only 10 percent of the profession is female, which is a striking statistic to consider when we celebrate International Women’s Day on 8th March.
Closing the Gender Gap: Women in Cybersecurity: March 8 was International Women’s Day. A lot of folks will say, “why do we need a special day for women? Women have equality now – don’t they?” To an extent, this is true. Compared to the… Go on to the… https://t.co/ISPPrz81rA pic.twitter.com/FAQW0yMGPZ
— CS Threat Intel (@cipherstorm) May 25, 2018
One of the women in cybersecurity is Sivan Nir, the Threat Intelligence Team Leader at Skybox Research Lab, part of Skybox Security. She talked about the diversity gap in the industry.
Sivan Nir, the Threat Intelligence Team Leader at Skybox Research Lab:
“Quite clearly, the number of women in cybersecurity is far too low. This is such a waste because it’s a field that’s longing for more skilled people to join up so that the challenging skills gap can be closed. It’s a perfect industry for women to work in: because cybersecurity is so new and dynamic, it’s a field that welcomes and thrives on diversity. My own team comes from all walks of life, which is one of the main reasons why we’re so successful at researching and understanding the context of cyber threats”.
“Personally, I never felt held back from choosing science and engineering as my career. My father is an engineer and I grabbed the opportunities given to me to follow a technology path at school, choosing options in physics and computer science.
“To get more women and girls into my profession, you need to start young. Girls need to be encouraged to make more tech-oriented education choices when they’re still at school: working in technological fields should be seen as exciting, not intimidating. Cybersecurity, in particular, is never boring – we tackle real-world challenges at a fast pace every day.
“Women need to feel that they are encouraged to take chances with their STEM career choices. I’ve benefitted from studying bio-technology engineering at university – it led to me becoming a data analyst and finally to leading a cyber threat research team for a global organisation. There are many rewarding opportunities out there for women in cybersecurity and I’m excited to see more join our ranks in the future.” | <urn:uuid:b6164035-5c60-4e6b-ac57-89109f8053e0> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/why-does-a-diversity-gap-persist-in-cybersecurity-this-international-womens-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00671.warc.gz | en | 0.943781 | 545 | 2.5625 | 3 |
The more we learn, the more it becomes clear that there is no "universally optimal" brain. We all have our own unique strengths and weaknesses. Things we do to help people with different neurotypes aren't just accommodations for rare individuals. Being considerate of each other's mental operating systems can improve everyone's functionality.
Each year brings more reports that document the challenges of hiring in cybersecurity, with an alarming number of unfilled positions. But this may ring hollow to those struggling to find work in the industry. There are many factors that cause this discrepancy, and today let's look into one such area: inclusive hiring practices for neurodiversity.
Most of us have a clear mental stereotype of a "typical engineer." This may include personal issues and quirks as well as traits that help people succeed in intellectually demanding jobs. The positive qualities include things like intense specialized interests, laser-like focus, creative and vivid imagination, or the ability to find signals within noisy data sets.
From a neurological perspective, many of these traits — both positive and more challenging ones — frequently intersect with signs of "mental operating system" differences such as autism and attention deficit hyperactivity disorder. As a result, popular tech-hiring practices can sometimes put off the very people who have always been an important part of science and technology.
Neurodiversity also includes a wide variety of neurological differences related to developmental and learning disorders, mental health conditions, and mental perception variances such as amusia and aphantasia. Individuals are referred to as "neurodivergent" while groups of people are referred to as "neurodiverse." While many people define these variations as "disabilities," the traits can and do bring benefits to individuals as well as potential employers.
Hiring Benefits of Neurodiversity
Part of the benefit of having diversity is that it improves the breadth of knowledge within your organization. People with different brains — as well as genders and ethnicities — will have different backgrounds as well as strengths. And naturally, they'll have different security and privacy concerns, most of which will not be obvious to people outside of those groups.
Paying extra attention to hiring practices can help you root out ways you might be generating "false negatives" that exclude neurodiverse job candidates for reasons that have nothing to do with their ability. In an environment where talent is scarce, it's imperative to remove artificial barriers to entry.
It's also important to understand that women and minority communities tend to have high rates of under-diagnosis, so they may not be identified as neurodivergent. And because the constellations of qualities that lead to someone being identified as neurodivergent are not traits absent in "neurotypical" people, being inclusive will help everyone. Here are five neurodiversity hiring practices to keep in mind:
Set Expectations Early and Often
Hiring is seldom a straightforward process because there are many variables that can affect timing. But it's important to tell people what your process is and to give them a window of time in which steps should occur, including notifying applicants if they were not chosen for the position. If you need to deviate from that schedule due to unforeseen circumstances, it's best to notify candidates as early as possible rather than leave them guessing. Once someone has been hired, set them up to succeed by continuing to set goals and schedule dates for deliverables, including discussion about deferred activities.
Err on the Side of Clarity
Not everyone processes information the same way. Some people prefer text to verbal instructions, or they may understand diagrams better than written words. Some may misunderstand idioms or interpret things very literally. It's better to cover all your bases, and stick to simple and clear descriptions. If the option is available, ask people their preferred communication method and double-check that your words are interpreted as you intended them. When you're not able to ask, err on the side of providing as many options as are appropriate.
Consider Your Job Ad Wording
It can be difficult to communicate the level and types of skills a prospective employee is expected to have. The way this is most commonly done is with numbers — for example, such as "five years of experience" associated with a certain technology or position. But there's nothing intrinsically magical about five years of experience. You can express the same idea more clearly by rewording it as "experience with" or "fluent in," or other phrases that more clearly express the problems you're trying to solve or level of familiarity with a technology that you require.
Stick to Criteria that Pertain to the Position
Coders don't necessarily need to maintain a lot of eye contact to be effective. Being a social butterfly doesn't indicate someone is a better reverse engineer. Make sure that the criteria on which you're judging candidates are decided by a group of interested parties in advance, that they pertain to the job at hand, and that they are the deciding factors that employees are graded on.
By Milica D. Djekic
Encryption can play a crucially significant role in communication and information-exchange activities. It matters not only to defense forces; it is also significant to terrorist organizations, which use cryptography to plan and prepare their attacks. Some terrorist groups use software tools to protect their sensitive data, while many rely on hardware solutions that offer a somewhat better level of protection. As is well known, many software solutions can be hacked, but is that the case with encryption hardware?
Through this effort, we want to analyze how a skillfully prepared state-sponsored attack could damage such hardware remotely by taking advantage of its physical imperfections.
Why terrorist groups use the encryption
Many terrorist groups use cryptography to protect their sensitive information or communication channels from being breached from the outside. Sometimes encryption software is enough to protect a certain amount of data, but would it resist skillfully organized hacking attacks? Our answer to this question is no. Once the intelligence community discovers a machine using such cryptographic tools, it becomes feasible to disable that computer along with its entire network.

Also, current software cryptography has many weaknesses, such as encryption keys that must be delivered through special channels. Terrorists may use email, the web, or even mobile devices to carry out such confidential information transfers. The intelligence community has its methods for discovering terrorist organizations and their members, while state-sponsored hackers have plenty of skill and expertise to attack their information-sharing systems.
The situation is somewhat better with hardware-based encryption solutions. These may take the form of small boxes or even computer sticks that perform data encryption and transmission. Such solutions are much harder to affect, but the question is: would it be possible at all?

Experience suggests that it is not at all easy to break into such a cryptographic system. But we would suggest that it is not necessary to attack that protection from the inside, in other words, to send an intelligence agent on a highly risky task. It is simply sufficient to have well-trained staff with the appropriate technology operating from the outside.
The challenges of hardware encryption
Hardware cryptography, as we said, involves physical devices based on digital technology. Those devices may be built on printed circuit boards or other micro-packaging technologies. Commonly, in a skillfully prepared hacking attack, external devices can be found connected to the target computer or its network.

Hackers often try to disable those gadgets, but is that possible with encryption equipment that may have some form of access control? In practice, hardware-based encryption includes interface software capable of communicating with the cryptographic hardware. It sounds like a challenge.

Well, basically, it is. Before we explain how complex it can be to threaten well-protected encryption hardware from the outside, we want to give a brief overview of how digital technologies work in reality.
The majority of digital systems use logic gates and supporting electronic components for their operations. Those logic elements include transistors, diodes, resistors, capacitors, integrated circuits (ICs), and so on. All of them are real components made from real materials, with real physical characteristics. We will not go deeply into the analysis, but even when used on a terrorist group's printed boards, those elements have their physical limitations.
For instance, consider a low-voltage logic circuit in which a binary 0 corresponds roughly to the range between 0 and 2 V, while a binary 1 covers the range from 3 to 5 V. Clearly, if we apply mains voltage, between 220 and 230 V in Europe or 110 V in the US, we will certainly damage such equipment.
Computers and their supporting equipment draw power from standard electrical networks through voltage converters that allow them to work within the intended range.
Finally, consider how diodes work: it is quite simple to burn them out using an amplified voltage.
The physical disadvantages of hardware solutions
We have seen above that hardware solutions come with physical limitations that make them sensitive to higher voltages. That is precisely the point: whether or not a terrorist group uses hardware-based encryption, once its equipment is located, it is possible to burn that equipment with an amplified signal.

For instance, if a circuit's diode suffers a current breakthrough, in other words, if it burns out from the heat produced according to Joule's law, it acts as a short circuit for the entire solution and destroys it.
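As a rough back-of-envelope illustration of Joule heating at work, the sketch below compares the heat dissipated in a small resistive signal path at a 5 V logic level versus at European mains voltage. The 50-ohm effective resistance is an assumed figure chosen only to make the comparison concrete, not a measured value.

```python
# Joule heating in a resistive path: P = V^2 / R.
def dissipated_power(voltage_v: float, resistance_ohm: float) -> float:
    """Power dissipated as heat, in watts."""
    return voltage_v ** 2 / resistance_ohm

R_OHMS = 50.0                               # assumed effective resistance
logic_w = dissipated_power(5.0, R_OHMS)     # normal 5 V logic level
mains_w = dissipated_power(230.0, R_OHMS)   # European mains voltage

print(f"5 V logic level: {logic_w:.2f} W")       # 0.50 W
print(f"230 V mains:     {mains_w:.0f} W")       # 1058 W
print(f"ratio:           {mains_w / logic_w:.0f}x")  # 2116x the heat
```

Since heat grows with the square of the voltage, even a modest overvoltage produces a disproportionate amount of heating, which is why small semiconductor junctions fail so quickly.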
The concluding remarks
The aim of this review was to provide closer insight into the challenges of hardware-based
encryption, especially as used for terrorist purposes. It appears that our world could be
a much more secure place to live and work in once we decide to rely on our scientific findings.
That is one more reason to combat terrorism and bring progress to all of humankind.
About The Author
Since graduating from the Department of Control Engineering at the University of Belgrade, Serbia, Milica Djekic has been an engineer with a passion for cryptography, cybersecurity, and wireless systems. Milica is a researcher from Subotica, Serbia. She also serves as a reviewer at the Journal of Computer Sciences and Applications. She writes for American and Asia-Pacific security magazines. She is a
volunteer with the American corner of Subotica as well as a lecturer with the local engineering | <urn:uuid:5fd5eaeb-7814-4d16-9693-163df82e7ac9> | CC-MAIN-2022-40 | https://www.cyberdefensemagazine.com/the-ways-of-responding/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00671.warc.gz | en | 0.962564 | 1,258 | 2.734375 | 3 |
Today, we look at the basics of artificial intelligence, which permeates almost every aspect of our lives. This article will explore the main concepts revolving around artificial intelligence and the answers to frequently asked questions without getting into technical complexities as much as possible.
Copyright: dataconomy.com – “The Basics of AI for Beginners”
What is Artificial Intelligence?
Artificial intelligence (AI) is a field of computer science that focuses on developing smart machines capable of accomplishing tasks that require human intellect.
Most people immediately think of Artificial General Intelligence (AGI) when they hear about AI: a system that can perform anything a human being can, only far better. However, the fact is that we are nowhere near creating one. AI currently exists as Artificial Narrow Intelligence (ANI), which is very specialized. You can teach it a few things, and it will perfect them. However, give it another assignment, and it will fail at the job horribly.
Types of AI: Narrow, General and Super AI Explained
When we talk about the basics of AI, we first need to look at the types that are in use and those still in theory. AI applications are generally divided into three categories based on their ability to accomplish activities. These types differ and represent a natural progression for AI systems today.
Narrow Artificial Intelligence: Reliable Machines on Missions
Narrow AI is the most common type of AI today. Narrow AI is sweeping the world, from mobile phone apps to the Internet to big data analytics. The term originates from the fact that these artificial intelligence systems are designed for a specific purpose. They are also known as “weak” AI due to their restricted approach and inability to complete tasks other than those assigned to them. In conjunction with their limited capacity, this narrow focus makes them ‘weak’ AI.
Narrow AI is generally restricted in scope since it addresses a specific issue. Its architecture and operation are intended to guarantee that a task is completed, with its focus reflected in its design and functioning. Because of these limitations, narrow AI has laser-sharp attention to the specific goals for which it was designed.[…]
Read more: www.dataconomy.com | <urn:uuid:163270d3-3010-4e8d-b6ef-b6533015d185> | CC-MAIN-2022-40 | https://swisscognitive.ch/2022/05/07/the-basics-of-ai-for-beginners/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00671.warc.gz | en | 0.972856 | 454 | 3.421875 | 3 |
Project Management - Sequence Activities Process
Once all activities have been defined in the process of “define activities”, you need to place them in order of precedence in the “sequence activities” process. Here, you essentially create a map/diagram that effectively illustrates the relationship existing between these activities and identifies the order or sequence in which they need to be performed. It’s important to mention that the schedule is not created during this process, and so at this stage, you need not assign any starting or finishing dates (or time frames) to these activities.
Inputs for Activity Sequencing
The inputs for the sequence activities process include: activity attributes, the project scope statement, the activity list, the milestone list, and organizational process assets. The activity list you created contains your scheduled activities and has to be arranged in the order in which tasks/sub-tasks need to be performed. The attributes of the activities provide additional insight into which activities need to be addressed before others. Additionally, the milestone list offers the key milestones that might influence the overall order of activities. For instance, to develop a timely, impactful and user-friendly interface for a software program that you are developing, you may need to finish the processes of shading and model rendering before going ahead with it.
The project scope statement goes a long way in ensuring that nothing goes amiss. More often than not, it influences the order in which activities are performed. For example, if a project demands that a garden be opened to the public in the next two weeks, the manager might consider recruiting human resource personnel prior to organizing a volunteer-based event. Organizational process assets come in handy when relevant prior information exists that helps prioritize these activities more efficiently.
Tools for Creating Activity Sequences
The activity sequencing process uses four tools:
- Precedence Diagramming Method (PDM)
- Determination of dependency
- Application of leads and lags
- Schedule network templates
The PDM is typically a graphical representation of the activity list and defines the order in which tasks/sub-tasks need to be performed. Often showcased as a simple flow chart with arrows depicting the dependencies between activities, rectangles representing activities, and the units of duration written above the nodes, this diagram sets the precedence for determining the dependency of activities.
There are three kinds of dependencies: discretionary, mandatory, and external. A mandatory dependency, also referred to as hard logic, is always true and considered unavoidable; for instance, a hole has to be dug before concrete can be poured in to create a swimming pool. Discretionary dependencies, also known as soft logic, are not always true. They are best determined by an organization’s best practices, historical information, and expert judgment; for example, one may choose to slice a cucumber before a tomato for fixing a salad and the precedence could take place either way. External dependencies, though outside the project’s scope and control, are important enough to be considered; for instance, if the construction of your building requires strict compliance to certain regulations, you may have to address an external dependency situation if revisions are in the offing.
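As an illustration (not part of the formal process itself), the precedence relationships in a PDM can be modeled as a directed graph and ordered with a topological sort; the activity names below are hypothetical.

```python
from collections import deque

def sequence_activities(dependencies):
    """Return activities in a valid execution order.

    dependencies maps each activity to the set of activities
    that must finish before it can start (its predecessors).
    """
    # Track unmet predecessors for each activity.
    pending = {a: set(preds) for a, preds in dependencies.items()}
    successors = {a: [] for a in dependencies}
    for act, preds in dependencies.items():
        for p in preds:
            successors[p].append(act)
    # Start with activities that have no predecessors.
    ready = deque(a for a, preds in pending.items() if not preds)
    order = []
    while ready:
        act = ready.popleft()
        order.append(act)
        for nxt in successors[act]:
            pending[nxt].discard(act)
            if not pending[nxt]:
                ready.append(nxt)
    if len(order) != len(dependencies):
        raise ValueError("cycle detected: activities cannot be sequenced")
    return order

# Hypothetical swimming-pool example: dig before pour, pour before fill.
deps = {"dig hole": set(),
        "pour concrete": {"dig hole"},
        "fill pool": {"pour concrete"}}
print(sequence_activities(deps))  # ['dig hole', 'pour concrete', 'fill pool']
```

A mandatory dependency like "dig before pour" becomes an edge in this graph; the sort fails loudly if someone accidentally creates a circular dependency.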
Leads and Lags
In certain cases, an activity may get a jump start before another, or may incur a waiting period in between other activities; these situations are referred to as leads and lags. Leads occur when particular activities offer the required resources to begin a dependent activity, but are not quite finished; for example, a music event requires the tracks to be decided before recruiting a D.J. Here, a lead may take place if the organizers have prior knowledge about the genre of music that will be used to start their search for a D.J. proficient in the required genres; but then, the music has to be picked out beforehand. A lag takes place in case of a waiting period existing between two activities, for instance, a wait time is required for the wet paint to dry up completely, before decorations can be hung in a building’s interior.
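The effect of leads and lags on start dates can be sketched with simple arithmetic: a lag pushes the successor's start later, while a lead (modeled as a negative lag) lets it start before the predecessor finishes. The durations below are invented for illustration.

```python
def successor_start(pred_start, pred_duration, lag=0):
    """Earliest start of a finish-to-start successor, in days.

    lag > 0 models a waiting period (e.g. paint drying);
    lag < 0 models a lead (the successor starts early).
    """
    return pred_start + pred_duration + lag

# Hypothetical: painting takes 3 days, decorations must wait 2 days for drying.
print(successor_start(0, 3, lag=2))   # 5
# Lead: the D.J. search starts 1 day before track selection finishes.
print(successor_start(0, 4, lag=-1))  # 3
```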
Outputs for Activity Sequencing
The outputs from the activity sequencing process encompass the project schedule network diagram and any other project document updates that may be necessary. The network diagram merely represents the dependencies of activities and is not the schedule. The timeframes and schedules are developed through an altogether different process. The diagram may include summary nodes of activities, or a representation of all activities, in line with the needs of the project. If summary nodes are used, enough documentation should be available to make sure that the basic flow of all activities is well understood.
All the best.
Author: Uma Daga
Last Updated on May 24, 2015
The New York Times has recently reported on yet another targeted cyber-attack against JAXA (Japan Aerospace Exploration Agency). This targeted attack allegedly led to the exfiltration of sensitive information related to Epsilon, a solid-fuel rocket prototype expected to be used also for military applications, suggesting the attack is probably part of a cyber-espionage campaign.
The targeted attack was carried out by means of malware installed on a computer at the Tsukuba Space Center. Before being discovered on November 21, the malicious executable secretly collected data and sent it outside the agency.
This is the second known targeted attack against JAXA in less than eleven months: on January 13, 2012, a computer virus infected a data terminal at Japan’s Space Agency, causing a leak of potentially sensitive information including JAXA’s H-2 Transfer Vehicle, an unmanned vessel that ferries cargo to the International Space Station. In that circumstance officials said that information about the robotic spacecraft and its operations might have been compromised.
Unfortunately, the above cyber-attacks are not isolated incidents, confirming that Japan is a hot zone from an information security perspective and a coveted target for cyber-espionage campaigns. Undoubtedly, the strategic importance of this country on the global chessboard, and hence its internal secrets and the intellectual property of its industries, are more than a good reason for such targeted cyber-attacks.
The list is quite long…
19 September 2011: Mitsubishi Heavy Industries, Japan’s biggest defense contractor, reveals that it suffered a hacker attack in August 2011 that caused some of its networks to be infected by malware. According to the company 45 network servers and 38 PCs became infected with malware at ten facilities across Japan. The infected sites included its submarine manufacturing plant in Kobe and the Nagoya Guidance & Propulsion System Works, which makes engine parts for missiles.
24 October 2011: An internal investigation on the Cyber Attack against Mitsubishi finds signs that the information has been transmitted outside the company’s computer network “with the strong possibility that an outsider was involved”. As a consequence, sensitive information concerning vital defense equipment, such as fighter jets, as well as nuclear power plant design and safety plans, was apparently stolen.
25 October 2011: According to local media reports, computers in Japan’s lower house of parliament were hit by cyber-attacks from a server based in China that left information exposed for at least a month. A trojan horse was emailed to a Lower House member in July of the same year; it then downloaded malware from a server based in China, allowing remote hackers to secretly spy on email communications and steal usernames and passwords from lawmakers for at least a month.
27 October 2011: The Japanese Foreign Ministry launches an investigation to find out the consequences of a cyber-attack targeting dozens of computers used at Japanese diplomatic offices in nine countries. Many of the targeted computers were found to have been infected with a backdoor since the summer of the same year. The infection was allegedly caused by a spear-phishing attack targeting the ministry’s confidential diplomatic information. Suspects are directed to China.
2 November 2011: Japan’s parliament comes under cyber attack again, apparently from the same emails linked to China that already hit the lawmakers’ computers in Japan’s lower house of parliament. In this circumstance, malicious emails are found on computers used in the upper chamber of the Japanese parliament.
13 January 2012: Officials announce that a computer virus infected a data terminal at Japan’s space agency, causing a leak of potentially sensitive information. The malware was discovered on January 6 on a terminal used by one of its employees. The employee in question worked on JAXA’s H-2 Transfer Vehicle, an unmanned vessel that ferries cargo to the International Space Station. Information about the robotic spacecraft and its operations may thus have been compromised and in fact the investigation shows that the computer virus had gathered information from the machine.
20 July 2012: The Japanese Finance Ministry declares to have found that some of its computers have been infected with a virus since 2010 to 2011 and admits that some information may have been leaked. 123 computers on 2,000 have been found infected and, according to the investigation, the contagion started in January 2010, suggesting that information could have been leaked for over two years. The last infection occurred in November 2011, after which the apparent attack suddenly stopped.
LiFi - UP VLC - Ultra Parallel Visible Light Communications
Li-Fi, if you've not heard of it, is the idea of using the conventional light bulb to transmit IP traffic. The beauty of this idea is that every household, office space, industrial warehouse, indeed basically any building, already has lighting infrastructure in place. By using a modified light bulb that turns on and off so fast that it is not perceivable to the human eye, 1's and 0's can be transmitted at rates faster than over conventional RF frequencies. As the light spectrum is not congested like the RF spectrum we use for data transmission, it opens up new possibilities for the way we communicate in the interior space.
While Chinese scientists have developed a 1W bulb that can transmit at 150Mbps, a £4.6m-funded research project incorporating 5 universities in the UK, headed by Professor M.D. Dawson of the University of Strathclyde and mentored by Professor P. Blood of Cardiff University, has developed a technique called Ultra-Parallel Visible Light Communications (UP-VLC), which can transmit at 10Gbps!
The research team managed to split the transmission over 3 streams, one for each LED colour type, at 3.5Gbps per stream. All of this is controlled by a CMOS chip that handles the tuning of the light patterns, intensity and modulation.
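The aggregate figure follows directly from the per-stream rates: three parallel colour channels at roughly 3.5 Gbps each give just over 10 Gbps in total. The snippet below just makes that arithmetic explicit.

```python
streams = 3              # one stream per LED colour type
rate_per_stream = 3.5    # Gbps, approximate per-channel figure
aggregate = streams * rate_per_stream
print(f"{aggregate:.1f} Gbps")  # 10.5 Gbps, reported as ~10 Gbps
```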
The above pic shows the colour emission patterns generated by the LED/CMOS smart display. The project is running from October 2012 to September 2016, so there's still a long way to go before we see any commercially viable products to buy from the local store, but does indicate an exciting new prospect for high bandwidth communications that could replace conventional RF transmission.
A web application firewall (WAF) protects web applications from application-layer attacks such as cross-site scripting (XSS), SQL injection, and cookie poisoning. Attacks on apps are the leading cause of breaches—they are the gateway to your valuable data. With the suitable WAF in place, you can block the array of attacks that aim to exfiltrate that data by compromising your systems.
The web application firewall (WAF) protects your web apps by filtering, monitoring, and blocking any malicious HTTP/S traffic traveling to the web application and preventing unauthorized data from leaving the app. It uses a set of rules and procedures to determine what kind of traffic is good or bad. In addition, the WAF acts as an intermediary, protecting the web application server from a potentially malicious client. Deployed as a reverse proxy, the WAF sits in front of and shields the web application server.
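As a toy illustration of the rule-based filtering described above (real WAFs use far more sophisticated, regularly updated rule sets), a reverse proxy might screen each incoming request against known-bad patterns before forwarding it. The signatures here are deliberately naive and for illustration only.

```python
import re

# Naive illustrative signatures for SQL injection and XSS -- not production rules.
BAD_PATTERNS = [
    re.compile(r"(?i)union\s+select"),
    re.compile(r"(?i)<script\b"),
    re.compile(r"(?i)'\s*or\s+'1'\s*=\s*'1"),
]

def inspect_request(path, query):
    """Return 'block' if any rule matches the request, else 'allow'."""
    payload = f"{path}?{query}"
    for rule in BAD_PATTERNS:
        if rule.search(payload):
            return "block"
    return "allow"

print(inspect_request("/search", "q=shoes"))                          # allow
print(inspect_request("/search", "q=1' OR '1'='1"))                   # block
print(inspect_request("/comment", "text=<script>alert(1)</script>"))  # block
```

A production WAF would also normalize encodings, inspect headers and bodies, and update its rules continuously; this sketch only shows the basic allow/block decision.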
WAFs are a natural fit for protecting web applications: they're easy to deploy and maintain and suited to web-scale operations. In addition, a policy can be customized to meet the unique needs of your web application or set of web applications. While many WAFs require you to update your policies regularly to keep up with emerging threats, advances in machine learning allow some WAFs to do this automatically. Automation is a critical component of your security posture, and the growing threat landscape is making it more critical than ever before.
The difference between a web application firewall (WAF), an intrusion prevention system (IPS), and a next-generation firewall (NGFW).
An IPS is an intrusion prevention system, a WAF is a web application firewall, and an NGFW is a next-generation firewall. What are the differences between them? The IPS is a more broadly focused security product. Its protections are typically signature- and policy-based, well established among large companies, and easy to incorporate into your infrastructure.
The IPS establishes a baseline from its signature database and policies, then sends alerts when any traffic deviates from that baseline. Over time, the signature database grows in size and complexity as new vulnerabilities are discovered. IPS protects traffic across a range of protocol types such as DNS, SMTP, TELNET, RDP, SSH, and FTP. It typically operates at, and protects, layers 3 and 4 (the network and session layers), and sometimes provides limited protection at the application layer.
The web application firewall (WAF) is a powerful security tool designed to analyze each HTTP/S request at the application layer. Unlike an IPS, it is not only user- and session-aware but also aware of the web applications behind it and the services they offer. Because of this, a WAF acts as an intermediary between the user and the app, analyzing all communications before they reach the app or the user. With traditional WAFs, you are restricted to performing only those actions allowed by your security policy.
When organizations choose to use WAFs for their applications, they often focus on the OWASP Top 10, a regularly updated list of the most commonly seen application vulnerabilities.
The next-generation firewall (NGFW) monitors the traffic going out to the Internet: websites, email accounts, and SaaS. This matters when developing applications, especially mobile apps. With an NGFW, you enforce policy based on who is doing what with which assets, so you can apply content filters, anti-virus/anti-malware, and more in conjunction with URL filtering. Although a web application firewall (WAF) is typically a reverse proxy (used by servers), next-generation firewalls are often forward proxies (used by clients such as a browser).
There are several ways to deploy a WAF; the right choice depends on where you want to deploy it and the services needed. Do you want to manage it yourself, or do you want to outsource that management? Is it better to run your web application firewall (WAF) in the cloud or in your data center? How you want to deploy it will help determine which WAF is best for you. Choose from the options below.
This is a guest post by Tiffany Rowe
In the past, computer security could do little to guard against infection; instead, early security software merely reacted swiftly to threats detected on the machine. Thanks to data, that changed. Today, security tools are effective at predicting threats and guarding against them, so users are hardly inconvenienced by malicious attacks. It is only because diligent data-gatherers accumulated information on malware and attack patterns that cybersecurity software could improve so dramatically – and it will be thanks to data that security continues to improve.
How Big Data Improves Cybersecurity
A recent study on cyber threats found that the biggest gaps in businesses’ cybersecurity strategies concern the ability to detect malware, corruption and attacks before they devastate the organization. By applying Big Data to security, businesses can close those gaps and protect themselves against emerging cyberthreats.
With every attack, regardless of whether it was successful or unsuccessful, cybersecurity professionals and organizations can collect data on the event. This data includes information such as existing defenses before the attack, vectors of the attack, symptoms of the attack, targets, thefts and more. Every day, overwhelming amounts of data are created and collected for the purposes of understanding the current threat environment and strengthening security – and it’s working.
A recent study found that over the past few years, there has been a decline in the success of security breaches. One reason might be the overall increase in security awareness; businesses know they need only a basic layer of protection to ward off the vast majority of attacks. However, available security solutions have also improved in recent years thanks to the accessibility of data. By understanding what is happening in the wild, security organizations can develop software solutions that better protect against common types of attack, and they can build tools that learn and act autonomously to keep their clients safe.
Trend Micro’s TippingPoint intrusion prevention system is an excellent example of how data can dramatically improve cybersecurity solutions for businesses. TippingPoint closely monitors network traffic in real time to detect potential threats. Using data from previous network attacks, the software can recognize the signs of a breach and take action to block malicious traffic without human intervention. This cybersecurity solution relies heavily on machine learning technology, as nearly all cybersecurity software of the future will also do.
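A minimal flavour of the baseline-and-deviate approach (far simpler than TippingPoint's actual engine, and with made-up numbers): learn normal request volumes from history, then flag traffic that strays too many standard deviations from that baseline.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple traffic baseline from historical request counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical requests-per-minute history for one host.
history = [98, 104, 101, 97, 105, 99, 102, 100]
baseline = build_baseline(history)
print(is_anomalous(103, baseline))   # False: within normal variation
print(is_anomalous(450, baseline))   # True: likely attack traffic
```

Real products replace this single z-score with learned models over many features, but the core idea of comparing live traffic against a learned baseline is the same.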
Artificial intelligence is the most anticipated tech in the cybersecurity landscape. Using machine learning, security providers will be able to develop software that learns and adapts without updates and patches, ensuring that individual organizations are fully protected against the specific threats that jeopardize their data and devices. By accumulating Big Data, analyzing it and applying it, we are slowly but surely approaching a time when AI will dominate cybersecurity.
Big Data Is Also a Significant Threat
Data is precisely what most cybercriminals are after, so by collecting data, businesses are making themselves bigger and juicier targets for cybercrime. Often, business data includes payment information, personal identification information, login credentials and similar valuable numbers and codes. Criminals can use this data for personal gain in a variety of ways – but in the future, cyberattackers could use valuable data in more devastating ways.
Just as trustworthy security organizations like Trend Micro are using Big Data to develop stronger protections for business, so are malicious cyber criminals using Big Data to build more effective methods of attack. In fact, malicious attackers are even developing their own machine learning and AI tools to increase the success of their attacks, making it even more imperative that security professionals develop equivalent protections. In fact, it shouldn’t be doubted that cybercriminals are already devising methods of attack that organizations have never encountered before, which makes AI tools critical for safety and security.
Big Data is undeniably important to the advancement of civilization. Not only does data help organizations better identify their audiences’ needs and wants, but it keeps businesses and consumers safer from existing threats. Unfortunately, data is not just a force for good; bad actors can also employ data to expand the scope and power of their attacks. Thus, it is critical that businesses equip themselves with the strongest and most up-to-date security systems available, especially those that employ machine learning and similar futuristic features to fight threats.
Improve Security with a Zero Trust Model
We’ve talked about how zero trust is superior to traditional perimeter-based defenses in today’s world of cloud-based applications, off-site data storage and remote work. Let’s give some concrete, real-world examples through common kinds of security breaches. We’ll explain how zero trust principles and methodology can better defend your network from these attacks and, in the case of a breach, contain the damage.
1. Misused Passwords and Phishing
Stolen or misused credentials remain one of the most common causes of a breach. The Verizon 2021 Data Breach Investigations Report attributed 61 percent of breaches to leveraged credentials. One common way malicious actors steal credentials is through phishing attacks: using false identities to try to pry information from someone within an organization.
Last year, a 17-year-old spoofed a Twitter employee’s phone number by SIM-swapping, created fake Okta login pages for Twitter employees and convinced one employee he worked in Twitter’s IT department. Once he gained the credentials he needed to access Twitter’s network, he hijacked and posted a bitcoin scam from Twitter accounts belonging to, among others, Barack Obama, Joe Biden, Bill Gates and Elon Musk.
Zero trust architecture reduces the surface attack area by eliminating all passwords, save for one, through single sign-on. Context-based policies trigger multifactor authentication (MFA), providing further security in the case of misused or stolen passwords. And more tools are rolling out that allow zero trust enterprises to eventually evolve to a passwordless environment.
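A context-based policy of the kind described might be sketched as a rule that steps up to MFA when the login context looks unusual. The signal fields and thresholds below are invented for illustration.

```python
def auth_decision(ctx):
    """Decide how to handle a login attempt from its context.

    ctx carries hypothetical signals: 'known_device', 'country',
    'home_country', and 'failed_attempts'.
    """
    if ctx["failed_attempts"] >= 5:
        return "deny"
    risky = (not ctx["known_device"]) or (ctx["country"] != ctx["home_country"])
    return "require_mfa" if risky else "allow"

print(auth_decision({"known_device": True, "country": "US",
                     "home_country": "US", "failed_attempts": 0}))  # allow
print(auth_decision({"known_device": False, "country": "US",
                     "home_country": "US", "failed_attempts": 0}))  # require_mfa
print(auth_decision({"known_device": True, "country": "US",
                     "home_country": "US", "failed_attempts": 6}))  # deny
```

The point is that even a correct password is not trusted on its own; context decides whether a second factor is demanded.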
2. Ransomware and Malware
A ransomware attack on Colonial Pipeline in May 2021 forced the fuel distribution company to eventually shut down its entire network, catapulting U.S. gas prices above $3 per gallon for the first time since 2014. Colonial Pipeline ultimately paid $4.4 million worth of bitcoin to the attackers to get their system back. The likely source of the breach was a leaked password to an old account that had access to the company’s VPN.
Beyond the additional security around passwords outlined above, a zero trust architecture eliminates the need for VPNs, which grant complete access to a network. Instead, zero trust requires providing access only to the resources that the user or device has explicit permission to access. With segmentation of network resources and data, it becomes harder for malicious code that somehow gets into one area of the network to spread to other areas of that same network. Any attempts to access a new area of the network will require authentication again, and MFA where required by your policies.
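The least-privilege idea can be sketched as resource-scoped grants: every request is checked against an explicit allow-list, so compromising one credential exposes only that credential's segment. The identities and resources below are hypothetical.

```python
# Explicit per-identity grants; anything not listed is denied by default.
GRANTS = {
    "billing-svc": {"billing-db": {"read", "write"}},
    "intern":      {"wiki": {"read"}},
}

def authorize(identity, resource, action):
    """Default-deny check: allow only explicitly granted (resource, action)."""
    return action in GRANTS.get(identity, {}).get(resource, set())

print(authorize("billing-svc", "billing-db", "write"))  # True
print(authorize("billing-svc", "hr-db", "read"))        # False: outside its segment
print(authorize("intern", "wiki", "write"))             # False: read-only grant
```

Contrast this with a VPN, where a single successful login grants reach into the whole network.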
3. Denial-of-Service (DoS) and Distributed-Denial-of-Service (DDoS) Attacks
Denial-of-service attacks flood networks with traffic to cause shutdowns and distributed-denial-of-service attacks hijack other devices to send the traffic flood from multiple sources, making it harder to stop. These attacks are rising; since the beginning of the COVID-19 pandemic, DoS attacks have increased three-fold to close to 30,000 attacks per day, according to The Cambridge Cybercrime Centre. Such an attack last year shut down the New Zealand Stock Exchange.
In a zero trust environment, the continuous verification of identity and principle of least-privileged access can help protect against denial-of-service attacks, ensuring that only authorized users gain access to critical network resources. IP addresses can be verified, and behavior monitoring can ensure that traffic from a device meets acceptable limits and does not exhibit patterns of a DDoS attack.
4. Network Eavesdropping and Man-in-the-Middle Attacks
Eavesdropping attacks are just what they sound like: someone “listening in” on communications and traffic on your network to intercept and steal sensitive information. This is one reason why public and unsecured WiFi networks are unsafe — anyone else on that same network could potentially intercept traffic on that network. In 2017, Equifax had to pull its mobile app because it was discovered that, after an initial secure authentication, some features of the app did not use secure, encrypted communication. That left data communicated between app users and Equifax vulnerable to interception.
Traditional security stances would encrypt communication only between data centers but not necessarily between devices inside a data center. Zero trust principles call for encrypting all internal communication, including those between devices, using Transport Layer Security (TLS) to prevent network eavesdropping. Network segmentation called for by zero trust principles also prevents someone inside one area of the network from accessing traffic elsewhere on the network.
Assume it's not safe! While zero trust methodology provides better defenses against these kinds of attacks, it doesn’t promise that a network breach will never occur. In fact, it’s just the opposite: zero trust assumes that a breach will occur at some point. That’s why a key component of zero trust methodology is not just trying to prevent attacks but containing them if a breach does occur. Containment through network segmentation and continuous authentication prevents lateral movement within the network, so that if a hacker is able to access one part of your network, they don’t have the ability to then access other parts of the network.
June 16, 2020
OLAP is the acronym for OnLine Analytical Processing. Database researcher E. F. Codd coined the term “on-line analytical processing” (OLAP) in a whitepaper published by Computerworld in 1993 that set out twelve rules for analytic systems. The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).
OLAP tools were developed so that data professionals could analyze multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations:
Consolidation, which involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends.
Drill-down is a technique that allows users to navigate through the details. For instance, users can view sales by individual products that make up a region’s sales.
Slicing and Dicing is a feature whereby users slice a specific set of data of the OLAP cube and dice the slices from different viewpoints. These viewpoints are sometimes called dimensions such as looking at the same sales by salesperson, by date, by customer, by product, by region, etc.
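The three operations can be demonstrated on a tiny in-memory "cube" of sales records. The data is invented, and a real OLAP engine would precompute these aggregates rather than scan rows; this sketch only shows what each operation means.

```python
from collections import defaultdict

# Each record is a cell of the cube: (region, product, month) -> amount.
sales = [
    {"region": "East", "product": "Widget", "month": "Jan", "amount": 120},
    {"region": "East", "product": "Gadget", "month": "Jan", "amount": 80},
    {"region": "West", "product": "Widget", "month": "Jan", "amount": 200},
    {"region": "West", "product": "Widget", "month": "Feb", "amount": 150},
]

def rollup(records, dim):
    """Consolidation: aggregate amounts up one dimension."""
    totals = defaultdict(int)
    for r in records:
        totals[r[dim]] += r["amount"]
    return dict(totals)

def slice_cube(records, dim, value):
    """Slice: fix one dimension to a single value."""
    return [r for r in records if r[dim] == value]

print(rollup(sales, "region"))  # {'East': 200, 'West': 350}
# Drill down into January sales by product (slice, then re-aggregate):
print(rollup(slice_cube(sales, "month", "Jan"), "product"))
# {'Widget': 320, 'Gadget': 80}
```

Dicing is just slicing along several dimensions at once, i.e. composing `slice_cube` calls before rolling up.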
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with rapid execution time. They borrow aspects of navigational databases, hierarchical databases and relational databases.
MOLAP is Multidimensional Online Analytical Processing, which means that the data resides in a multidimensional structure.
ROLAP is Relational Online Analytical Processing, which means that the data resides in relational databases.
COLAP is Cloud Online Analytical Processing. COLAP makes OLAP work at cloud scale to analyze large amounts of data without moving it out of customers’ cloud data warehouses or data lakes.
With all of the different OLAP options out there, you may wonder which one can actually help you achieve your big data strategy or which version of OLAP is most suitable for your BI environment?
Want to learn more? Watch this on-demand webinar: Modernize Your Investment in SSAS Without Giving Up OLAP with AtScale Founder and CSO, Dave Mariani, as he explains more about OLAP and he demonstrates how AtScale Adaptive Analytics scales and modernizes your OLAP infrastructure.
Editor’s Note: This article was originally published in March 2019 and has been updated.
A new statutory code limiting the amount of data online services can collect from children went into effect in the United Kingdom on Sept. 2. Developers have to make sure data protections are available by default in online applications and services used by children or face potentially high fines.
The Age Appropriate Design Code applies to any businesses providing “online services and products” likely to be used by people in the United Kingdom under 18 years of age. That includes educational websites, messaging services, community forums, social media platforms, streaming services with large audiences of children, makers of connected toys (Internet of Things toys), and game companies and platforms. The code outlines 15 standards for developers to follow so that users—children—have a certain level of privacy by default when visiting a website or opening an app.
“[K]ids are not like adults online, and their data needs greater protection," Information Commissioner Elizabeth Denham told the BBC.
The Information Commissioner’s Office will have the power to fine violators up to 4 percent of their global revenues. Online service providers, app developers, and other relevant businesses have one year to make sure their services and applications are complying with the rules, as enforcement will begin Sept. 2, 2021. The ICO has said it has the power to take more severe actions if necessary.
"The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child," according to the code. Even if the service or device is not explicitly targeted for children, the code’s requirements apply if children are likely to use the service. This expands the type of businesses impacted by the code. For example, streaming services such as Netflix aren’t specifically for children, but provide children’s programming, making the company subject to the rules.
Another thing to consider is the fact that the Children’s Code (as it is also called) defines children as under the age of 18, not 13. This means makers of connected devices such as fitness trackers have to make sure their data policies are compliant if they want to continue selling wearables to teenagers in the UK.
Similar to Europe’s GDPR, the Age Appropriate Design Code will affect businesses outside of the United Kingdom. The code is very clear that it applies to any business with users who are children in the United Kingdom—and in this interconnected world that means any company with any kind of presence in the UK.
Concerns about children’s privacy aren’t limited to that side of the Atlantic Ocean. Last fall, the United States Federal Trade Commission fined YouTube $170 million for collecting data on children under the age of 13 without the consent of their parents.
The Children’s Code requires developers to take into consideration children’s best interests when designing and developing services, to refrain from using children’s data in ways that are detrimental to their well-being, and to ensure that settings default to high levels of privacy. There are a few specific requirements, such as the fact that geolocation must be switched off by default and children’s data cannot be shared unless there is a compelling reason to do so. Dark patterns in user interfaces—methods designed to trick users into making decisions they otherwise would not have (such as making the opt-out link very small and faint to see on a page)—are not allowed.
“Nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections,” according to the code.
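The defaults the code calls for can be pictured as a settings object. The sketch below is purely illustrative: the Children's Code prescribes standards, not a data model, and every field name here is invented.

```python
from dataclasses import dataclass

@dataclass
class ChildPrivacySettings:
    """Hypothetical defaults illustrating "high privacy by default"."""
    geolocation_enabled: bool = False    # geolocation must be off by default
    data_sharing_enabled: bool = False   # no sharing without a compelling reason
    profiling_enabled: bool = False      # no behavioral profiling out of the box
    nudge_prompts_enabled: bool = False  # no nudges toward weaker settings

# A new child account starts with every protection switched on:
settings = ChildPrivacySettings()
print(settings.geolocation_enabled)  # False, until a deliberate opt-in
```

The point of the pattern is that weakening any protection requires an explicit action by the user, never silence or a pre-ticked box.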
The ICO has said it will provide support to businesses trying to make the necessary changes to comply.
“We want children to be online, learning and playing and experiencing the world, but with the right protections in place,” Denham said in a statement. “A generation from now we will all be astonished that there was ever a time when there wasn’t specific regulation to protect kids online. It will be as normal as putting on a seatbelt.” | <urn:uuid:5f3f3ac9-b6c3-4fa0-a857-fb19117be4ef> | CC-MAIN-2022-40 | https://duo.com/decipher/uk-says-childrens-apps-must-have-built-in-privacy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00271.warc.gz | en | 0.950062 | 838 | 2.53125 | 3 |
Why Use Wayland versus X11?
Wayland and X11 are two different display server technologies that allow you to see your desktop and manage the windows that each application and tool generates on the desktop. They have a lot in common but also some key differences — mainly in the way that the graphical data is relayed between application, window manager/compositor. In this post, we will compare Wayland to X11, look at the advantages of each one, and learn more about window management in Linux.
What is Wayland?
Simply put, Wayland is a display server protocol that seeks to replace X11. It is not an application or a downloadable program. Instead, it is a standard or specification that needs to be adopted by window managers and desktop environments. Wayland is designed to be easier to use than X11. X11 has been around for a while, and is starting to show its age thanks to legacy code that bloats the system.
Wayland is trying to develop a new way of managing your graphical system and how you interact with it. Wayland also aims to be easier to integrate into Linux systems with more straight-forward code. As it stands currently, Wayland has support in the GNOME desktop environment and some other platforms such as KDE’s KWin. It has been in development for some time, which has left many people wondering if it will actually materialize as a viable alternative to X11.
Why Use Wayland?
X11 (version 11 of the X Window System) has been in use since 1987, so it is well past its expiration date. For a complete list of X releases, check out the project's website. The reality is that it has been developed over a very long period and still contains many legacy components in its code that make it very difficult to develop any further.
Part of this legacy structure is the client/server model that it employs to render windows. Long ago, a server would handle all the rendering requests and a rendering workstation would receive the graphics and windows that the server created.
X11 is primarily a display protocol, so it was designed to render graphics over the network. This is why it is possible to forward X11 sessions over SSH, giving you a secure remote session to a graphical desktop on a networked server or PC.
The client’s applications need to communicate with the X11 server before the compositor (window manager) can generate the window that the application needs to render properly. This is reliable, but it is very slow by modern standards, especially when compared to newer systems such as Wayland.
Wayland uses a simple, modern approach: client side rendering. This cuts out any server type component that acts as a middleman, and lets the application communicate directly with the compositor that it wishes to render a window for. This makes load times much quicker in theory and is technically easier to implement, thanks to the simplified codebase that Wayland brings to the table.
How Does Wayland Work?
The basic concept behind Wayland's process model is that the compositor itself acts as the display server, so applications communicate with it directly. This means that applications that wish to use Wayland need to give all of their display information straight to the compositor.
These parameters include screen size of the window, position and state (minimized, maximized etc). The application itself draws the window that it will run in, instead of like in the case of the X11 server that needed to relay this information back and forth between the application and the compositor. This process is the client side rendering aspect that we mentioned above.
However, this means that applications that wish to support Wayland will need to be updated or rewritten entirely as a different version to support this new standard. This has slowed down adoption to a certain extent, but most popular Linux distributions have made it available as part of their environments.
Why Use Wayland Instead of X11?
The first and most obvious reason why you would want to use Wayland instead of X11 is the reduced latency between opening an application and having it render on your desktop. It also makes tasks such as dragging windows, resizing them or switching them to full screen feel that much more smooth and modern.
The simpler code that has been written for this protocol also gives it a performance edge over X11. If you also consider that Wayland is a newer project, it has had less time to gather deprecated and bloated code, making it more agile and reactive than the aged X11 protocol.
Wayland has also been designed with security in mind and is not vulnerable to the same types of attacks that X11 is, such as the Unauthenticated Access exploit, although this has been patched in later releases for the most part.
The protocol that Wayland uses also makes it easier for designers and developers alike to create cross-platform apps, which have always suffered from problems rendering on Linux due to compatibility issues between various versions of GTK or Qt.
Who Created Wayland and Who Supports It?
Wayland was started in 2008 by Kristian Høgsberg and is developed as a freedesktop.org project, a separate lineage from X11, which originated at MIT and is today stewarded by the X.Org Foundation.
Some examples of Linux distributions that support Wayland are:
Debian Stretch (unstable)
When Will Wayland be Released?
The alpha version of Wayland 1.19.0 was released January 27, 2021. Most hardware support for Wayland comes through Intel's and AMD's open source graphics drivers. Raspberry Pi and Android both support Wayland out of the box with no need for additional drivers.
What Might Stop Wayland's Release or Slow Down its Adoption
Wayland has been in development for a long time, so it is quite possible that unexpected issues could delay it even further. The project has shipped multiple improvements and announces each new version on its website.
There is currently no working draft to make it a standard. This means Wayland will need to be adopted by the Linux Foundation and other organizations who can help with development, testing, and promoting its use before it becomes an official standard.
Because Wayland is a protocol, applications written for traditional X11 must take quite a different approach to support it, which also hampers adoption. Without an accepted standard in place, it could still be some time before Wayland is embraced as a mainstream alternative to X11.
Final Analysis: Which is best?
As things stand, X11 is probably still the better choice, just from a compatibility perspective. X11 has been in use for a very long time by computing standards, and it is reliable and stable. This stability comes at a performance cost, however. If you are running production systems or systems that rely on legacy applications, then X11 will be the better choice for you.
If you are looking to experiment and try out something new, then Wayland is a great way to do just that. It is lightweight, and it will not take up as many resources on your system as X11 would, although modern systems tend to handle X11 just fine.
There are also a lot of new features that you can experiment with, like GPU sharing or Wayland specific compositors to try out. It all boils down to what you want to do with your graphical system, what kind of environment you will be running your system in, and compatibility between your applications and Wayland.
In the final analysis, you should try installing a fresh OS with X11 and Wayland and then test it for yourself. This is by far the best way to see what works best for your own specific needs. There is no right answer because both Wayland and X11 are great options with their own pluses and minuses that make them unique in different ways. The choice is yours. | <urn:uuid:42144500-fc9f-40b3-85ff-cccacca94ccb> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/networking/why-use-wayland-versus-x11 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00271.warc.gz | en | 0.960809 | 1,654 | 2.859375 | 3 |
In the era of cloud computing, many compute functions are being offered as an on-demand service with metered use. While on-demand software could rightfully be traced back to the mainframes of the 1960s, arguably it started in the modern era with Salesforce and SaaS in 1999 launching its CRM software-as-a-service.
Since then there have been numerous other as-a-service offerings, from virtualized operating environments, storage, application development, desktops and even disaster recovery.
Include network as a service (NaaS) in that collection. NaaS is the sale of network services from a third-party to companies that don’t want to build their own networking infrastructure. Like all of the other as-a-service offerings, network as a service offers its functionality on a subscription basis, through the cloud, and with metered use, so you only pay for what you use.
Foundation of Network as a Service
NaaS is a series of network and value-added services – plus computing and network resources – sold as a service by communications service providers, including IT hardware vendors, cloud service providers, and telcos.
NaaS is presented to the customer though a single self-service portal where the customer can order, deploy, and manage their services on demand as needed. This reduces the work needed and the level of expertise required to deploy services.
Like the other as-a-service offerings, it provides a networking setup through a subscription (an operating expense, or OPEX) rather than a large up-front acquisition cost (a capital expense, or CAPEX). That means while you start off much cheaper, you have to watch your usage so that cumulative consumption costs don't exceed what the CAPEX route would have cost.
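The warning about consumption costs can be made concrete with a simple break-even calculation. The figures below are illustrative only, not from any vendor or from this article:

```python
def breakeven_months(capex: float, monthly_fee: float) -> float:
    """Months until cumulative subscription spend equals a one-time purchase."""
    return capex / monthly_fee

# Hypothetical numbers for illustration:
upfront_network_buildout = 120_000  # one-time CAPEX for owned gear
naas_subscription = 4_000           # monthly metered NaaS fee
print(breakeven_months(upfront_network_buildout, naas_subscription))  # 30.0
```

Past that break-even point (30 months here), staying on the subscription costs more in total than the up-front build would have, which is why metered usage is worth monitoring.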
Networking as a service arose around 2015, as companies started to embrace software-defined networking (SDN). SDN decouples the network control and forwarding functions, enabling the network control to become directly programmable. This makes an SDN more adaptable, dynamic, manageable, and cost-effective than traditional networking, because the SDN can adapt to networking changes on the fly as needed.
The abstraction of applications from the hardware layer enables the use of application programming interfaces (APIs) to orchestrate and manage the network infrastructure in a more flexible and extensible way. It was now possible to program your network.
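A toy sketch of what "programming the network" means follows. The controller class and its rule format are invented for illustration and are not any vendor's real API; actual controllers expose the same idea as northbound REST services.

```python
import json

class ToySDNController:
    """Stand-in for an SDN controller's northbound API (hypothetical)."""

    def __init__(self):
        self.flows = {}  # the controller's view of installed flow rules

    def push_flow(self, flow_id, match, action):
        rule = {"match": match, "action": action}
        self.flows[flow_id] = rule  # a real controller would program switches here
        return json.dumps(rule)     # ...and answer the API caller in JSON

controller = ToySDNController()
controller.push_flow("block-telnet", match={"tcp_dst": 23}, action="drop")
print("block-telnet" in controller.flows)  # True
```

The change happens through code and an API call rather than a box-by-box console session, which is what makes the network automatable.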
As SDN grew, companies started to virtualize the network process and use virtual logic entities to control the network instead of utilizing hardware switches and nodes. Companies were able to reduce complexity and increase network automation, eliminated manual configuration, centralized control and monitoring and were able to deploy applications and services faster by leveraging open APIs.
Three Services of Network as a Service
As the name implies, networking is sold as a service. There are three primary services under the NaaS umbrella:
- Virtual private networks (VPN): NaaS extends a VPN and the resources contained in the network across other networks, like the public Internet. A traditional VPN is only a point-to-point connection (from a remote worker's laptop to the company network), so users who venture outside the corporate network have no VPN protection. NaaS enables that protection outside the corporate firewall.
- Bandwidth on demand (BoD): A technique by which network capacity is assigned based on requirements between different nodes or users. An app or user who suddenly needs more bandwidth can be dynamically adjusted to their needs.
- Mobile network virtualization: This is a model where a telecommunications manufacturer or independent network operator – many of whom are NaaS providers — builds and operates a network and sells its communication access capabilities to third parties.
NaaS technology contains three distinct sub-categories, each of which is sold as a service.
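As a rough sketch of the bandwidth-on-demand idea above, here is a toy allocator that grants requests in full when capacity allows and scales them down proportionally when it does not. The policy is invented for illustration; real BoD systems are far more sophisticated.

```python
def allocate(capacity_mbps, requests):
    """Toy bandwidth-on-demand policy: grant everything if it fits,
    otherwise scale each request down proportionally."""
    total = sum(requests.values())
    if total <= capacity_mbps:
        return dict(requests)
    scale = capacity_mbps / total
    return {user: round(mbps * scale, 1) for user, mbps in requests.items()}

# A 100 Mbps link where three sites suddenly ask for 150 Mbps combined:
print(allocate(100, {"branch-a": 60, "branch-b": 60, "branch-c": 30}))
# {'branch-a': 40.0, 'branch-b': 40.0, 'branch-c': 20.0}
```

The appeal of BoD is that this kind of reassignment happens dynamically, per node or per user, instead of being fixed at provisioning time.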
Benefits of NaaS
SDN set the stage for NaaS because it broke the dependency on physical servers and networking hardware in traditional networking setups. With NaaS, a lot of the network administration can also be outsourced, giving a company flexibility and freedom to manage a network with less in-house technical expertise.
This means a company can offload day-to-day maintenance of equipment and network administration and focus on their line of business.
That’s just the beginning. NaaS can include services such as Wide Area Networking (WAN) connectivity, data center connectivity, cloud connectivity, bandwidth on demand, security services, and other applications.
Because your networking is provided by a service provider, you are protected by service level agreements (SLAs) that guarantee concerns such as levels of availability, network uptime, and response and resolution times for addressing issues. A good NaaS provider will optimize your network for your needs, so it’s important to establish performance expectations before signing the contract.
NaaS Pros and Cons
Like any as a Service offering, you need to watch your usage. Data is being moved around more than ever and datasets are growing exponentially in this era of Big Data and artificial intelligence. You can easily run up a bill that eats up any potential savings realized from migrating to NaaS.
Another challenge is tradeoffs associated with kinds of outsourcing like ceding too much control of your assets to the provider. Such issues have already come up as relates to storage and who owns the data. Setting expectations in the contract and SLA are key.
Legacy data centers may prove challenging to upgrade. If you are heavily dependent on pre-cloud MPLS technology and have deployed little SD-WAN, for example, you may have trouble migrating. Older hardware, like switches and routers, or on-premises applications not written for the cloud, may also prove problematic.
Finally there is always the risk of vendor lock-in. As mentioned earlier, cloud service providers have different specialties and moving off one may prove difficult because other providers don’t offer the same services. And there is always the risk that an organization may become too reliant on a particular service provider and become stuck with them.
Best Practice of NaaS
As with any as-a-Service model, the best use is to make your firm more agile and responsive to any changes in your environment. Many companies with seasonal crushes, such as Christmas, rely on AWS, Azure, etc., for bursts of compute power when needed and then dial back their usage after the need has passed.
Rely on your NaaS provider to offer the network administration skills you might not have so you can focus on core business competency and not worry about network administration. Shifting off your enterprise network to NaaS enables enterprises to scale bandwidth much quicker for increased mobile use or for excess capacity when needed. This allows customers to use features and services they might not otherwise use because they might not have had the skill sets needed in-house.
Likewise, the provider can help address capacity limitations behind the scenes so all you have to concern yourself with is rolling out line-of-business services and not setting up the network.
Who Sells NaaS and Where Do You Buy It?
Assessments of the growth of NaaS vary by research firm. Market Insights Reports predicts the NaaS market will grow from $6.5 billion in 2020 to $23.6 billion by 2026, at a compound annual growth rate (CAGR) of 38.2% during the forecast period.
According to Market Research Future, the NaaS market is growing at a scorching 28 percent CAGR rate.
So who is selling NaaS? For starters, all of the major public cloud vendors – AWS, Azure, Google Cloud Platform, IBM Cloud, Rackspace, and so on. Major networking vendors like Cisco Systems, Juniper Networks, VMware, Aryaka Networks, and Brocade offer it, as do top communications firms like Alcatel Lucent, AT&T, Ciena, Akamai Technologies, Broadcom, Century Link, Inc., Citrix Systems, and Verizon.
NaaS providers vary in their offerings from one to the next, depending on the specialty of the provider. For example, Aryaka offers WAN and secure Virtual Private Networks (VPN) as a service, since it specializes in SD-WAN. Akamai offers CDN as a service because it is a content delivery network. With hundreds of services available, Amazon offers a massive variety of services.
Future of NaaS
NaaS is built on three emerging and growing technologies: SD-WAN, 5G, and Zero Trust networks.
To be clear, NaaS is the realization of SDN as the middle of the network for a cloud-centric enterprise architecture, or the “middle mile.” It brings SDN with its programmable networking to WAN services. But it only handles the middle mile. It does not address what is known as the “last mile,” and while some NaaS vendors may offer last mile connections between customers and POPs, not all do.
5G holds much promise but is still more vapor than product. As the expensive rollout continues, more services can be deployed around it. Of particular value is the network slicing of 5G, which virtualizes the network to insure an isolated end-to-end connection that delivers all of the needed services.
The same holds true to Zero Trust, the next step beyond the VPN in securing a network. Zero Trust networks are only now being rolled out and will grow over the next few years as a much more secure replacement to the VPN.
So as SD-WAN, 5G, and Zero Trust networks grow – and they will grow rapidly – so will Network as a Service. | <urn:uuid:bd17ef5d-1e92-4c3e-bbcc-fc7b93a40e97> | CC-MAIN-2022-40 | https://www.datamation.com/data-center/guide-to-networking-as-a-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00271.warc.gz | en | 0.947558 | 2,007 | 2.75 | 3 |
Democratization Is Leveling the Technology Playing Field
Access to technology for all and the availability of the most complex of innovations and solutions for individuals and small business is the promise of democratization of technology. A social phenomenon, democratization is the process by which advances in technology gradually make its benefits and its ability to lessen burdens accessible to all.
Imagine, for example, accessing very complex networking technology or the most high-level machine learning. Think about being able to leverage Industry 4.0 items as a low- or mid-level entrepreneur and having the most complex technology open up at a level that is accessible to those individuals and companies that are just starting out.
In this article we will look at four areas of technology democratization: data and analytics, development, design, and general knowledge; what the effects will be in those areas; and how individuals and organizations can participate.
Democratization of Data
For data and analytics, which are a rapidly growing segment of information technology (IT), market-based access to data and algorithms will lower entry barriers and lead to an explosion in new applications of AI.
Since data is what fuels the growth of AI, companies that previously had all the data, such as Google or Microsoft, are now required to show it to you, to allow you to access it in the same manner as they do. As recently as 2015, only large companies like Google, Amazon and Apple had access to the massive data and computing resources needed to train and launch sophisticated AI algorithms.
Not all that long ago, small startups and individuals simply didn't have access and were effectively blocked out of the market. The democratization of data and analytics gives individuals and startups a chance to get their ideas off the ground and prove their concepts before raising the funds needed to scale.
Access to data, however, is only one way in which data and analytics are being democratized. The other areas of the "level set" of the playing field will bring rise to the Google-type companies of the future.
Democratization of Development
The shift toward democratization of development can be seen in real time, as in the open source, deep-learning software frameworks that are coming to power. A major issue in the wide-scale adoption of open source development is that many different software frameworks are out there. Big companies are open sourcing their code and relying on the frameworks to drive innovation, while trying to push for some standardization. This allows development's many little guys to keep pace with the big dogs.
Just as the cost of developing mobile apps fell dramatically as iOS and Android emerged as the two dominant ecosystems, so too will all development become more accessible as tools and platforms standardize around a few frameworks.
Some of the notable open source frameworks include Google's TensorFlow, Amazon's MXNet and Facebook's Torch. The tools themselves will level-set, and developer-friendly tools will emerge.
The final step to democratization of development will be the development of simple drag-and-drop frameworks accessible to those without doctorate degrees or deep data science training. Microsoft Azure ML Studio offers access to many sophisticated development frameworks through a simple graphical UI.
Amazon and Google have rolled out similar software on their cloud platforms as well. In order to let the little guy in, small players must be able to purchase anything and everything they need, and it has to be affordable as well as accessible.
A marketplace for development algorithms and datasets will need to be put in place. Not only do we have the on-demand infrastructure needed to build and run these large-scale development tools, we even have marketplaces for the algorithms themselves.
Need an algorithm for facial recognition in images, or to add color to black and white photographs? Marketplaces like Algorithmia let you download the algorithm of choice. Even better, websites like Kaggle provide the massive datasets one needs to further refine and train these algorithms.
Democratization of Design
For design, one does not need to look any further than Moore's law. Intel cofounder Gordon Moore's famous observation (now 55 years in the rearview mirror) that computer technology advances even as the cost of that technology drops has held true for decades, with computers steadily becoming both more powerful and cheaper to produce (and own).
Individuals and smaller companies involved in the world of video production could be forgiven for adhering to the conventional wisdom that it still necessitates custom hardware (both in computing power and cameras).
Slower design and slower processing, however, continue to give way to faster design and faster processing. At the present moment in design and general technology, I would argue that general-purpose computer processing power is reaching a point where it can adequately handle the tasks demanded by any higher-level algorithm.
For the video production crowd, it's important to keep in mind that the human eye can't distinguish any measurable improvement beyond 4K, and can scarcely tell the difference between 1080p and 4K. The point here is that the amount of data required for live video processing won't grow at an exponential rate anymore, because it is already at the upper bound of what humans can differentiate.
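The claim about data volume hitting a ceiling can be checked with quick arithmetic on uncompressed frame sizes (real streams are compressed, but the ratio between resolutions is what matters):

```python
def raw_rate_mb_per_s(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video data rate in MB/s (3 bytes per pixel for 24-bit color)."""
    return width * height * bytes_per_pixel * fps / 1e6

hd = raw_rate_mb_per_s(1920, 1080, 30)    # about 186.6 MB/s
uhd = raw_rate_mb_per_s(3840, 2160, 30)   # about 746.5 MB/s
print(round(uhd / hd, 1))  # 4.0
```

4K carries exactly four times the pixels of 1080p and, if the eye can't benefit from going further, that factor of four is the upper bound on how much more data live video will ever demand per frame.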
Democratization of General Knowledge
Computer processing power, on the other hand, will continue to soar. For general purposes of computation, like DNA sequencing and 23AndMe types of criminal tracking, the processing power will be more than adequate.
This means it will get easier and easier for common devices like your phone to handle HD video processing tasks. Another challenge with live video is an internet network's ability to transfer the video data from its origin to your device; a constrained network can't effectively livestream a 1080p video at a decent frame rate to your computer. This is sometimes more of a constraint than the video processing power of a computer, or your eye's ability to process images.
What does this all mean? In one sentence: mobile wireless networks are rapidly improving as they move to 5G and IPv6. The insane quality of the camera in your pocket and the processing power of your computers (phones, tablets, laptops) mean that very soon individuals and small startup companies will be able to produce live video events at the same quality you see on television. You will be able to process any sort of dataset, just from the sheer power of the computer itself.
The next key shift will be from technical constraints to the constraints of human desire. And if there is any doubt about human desire to produce live video, you can take a look at the enormous quantities of content pouring into Facebook Live, YouTube Live, TikTok, Instagram, Twitch, Periscope, etc. Look at the data sets that the next big content start-up is going to have to process.
We are on the cusp of democratization across a broad range of technology areas. The time is ripe for smaller companies and individuals alike to take advantage of these unprecedented times. | <urn:uuid:8134827b-2e97-4296-a6a4-c72a4bd0d7fe> | CC-MAIN-2022-40 | https://www.gocertify.com/articles/democratization-is-leveling-the-technology-playing-field | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00271.warc.gz | en | 0.947257 | 1,408 | 2.953125 | 3 |
I know you’re not surprised by the emergence of blockchain as a service. Two of the hottest terms in IT these days are “blockchain” and “x as a Service,” where the X can be anything from Software to Big Data. So it was inevitable that the two would merge.
The on-demand world has yielded quite a few interesting services, like databases, Big Data, high performance computing and firewall, among many others. If it is software or infrastructure-related, it will eventually become a service.
The same applies to blockchain. Blockchain shows how far we’ve come in the tech world, in that it is being rapidly adopted and no one knows who invented it. Someone named Satoshi Nakamoto claims to have developed the technology but there is some dispute of that, given his flawless English and use of British terms.
Whoever Nakamoto is, he or she or they have created a technology that the world is falling all over itself to adopt, and there is no software company behind it. No IBM, no Microsoft, no Google. It is a remarkable statement of blockchain’s effectiveness.
First, the basics: blockchain is a public electronic ledger, like a database, that creates an unchangeable record of transactions between users, each one time-stamped and linked to the previous one so it cannot be altered or duplicated. Each digital record or transaction in the thread is called a block, and the string of blocks is a chain, hence the name.
Blockchain can only be updated by consensus between participants in the system, and when new data is entered, it cannot be erased. This makes the chain a verifiable and unalterable string of records for every transaction ever made in the system.
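The hash-linking just described can be demonstrated in a few lines of Python. This toy ledger omits everything that makes a real blockchain work at scale (consensus, networking, proof-of-work), but it shows why a record, once written, cannot be quietly altered:

```python
import hashlib
import json

def block_hash(block):
    # Hash every field except the block's own hash.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis links to zeros
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # a block's contents were altered after the fact
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link back to the previous block is broken
    return True

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(chain_is_valid(ledger))             # True
ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history...
print(chain_is_valid(ledger))             # False: the change is detected
```

Because each block's hash covers the previous block's hash, changing any past entry invalidates every link after it, which is what makes the chain a verifiable record.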
The main proof of concept for blockchain for now is Bitcoin, the most widely hyped of the cryptocurrencies although not the only one. Bitcoin is a method of digital, secure transactional payments over an open network. It was initially billed as a peer-to-peer version of electronic cash that could be done between individuals without going through a financial institution.
There are other uses for blockchain, many of which are emerging from major IT solution providers and often for a dedicated purpose. This list highlights the work of many but not necessarily all of the major blockchain as a service (BaaS) providers.
The Linux Foundation deserves special mention because it is the software provider of a popular BaaS product. The organization last year released Hyperledger Fabric 1.0, a collaboration tool for building blockchain distributed ledgers, such as smart contracts, for vertical industries. IBM and Oracle both offer services based on Hyperledger Fabric.
Microsoft became one of the first software vendors to offer BaaS when it launched Azure Blockchain Service in 2015. Last year it launched Enterprise Smart Contracts, which provides users with the schema, logic, counterparties, external sources, ledger, and contract binding for building their own blockchain services.
In November 2015, Microsoft and ConsenSys announced a partnership to create Ethereum blockchain as a service (EBaaS) on Microsoft Azure. The service is designed to help customers build private-, public- and consortium-based blockchain environments on Azure’s global platform.
A year later, Microsoft announced a collaboration with Blockstack Labs, ConsenSys and a variety of developers on an open source, blockchain-based identity system that allows people, products, apps and services to interoperate across blockchains, cloud providers and organizations.
There is no greater testimony to the impact blockchain has than the sheer number of companies behind R3, a consortium behind a distributed financial ledger called Corda that operates like a blockchain while denying it is one. The consortium started in 2015 with financial institutions like Barclays, Credit Suisse, Goldman Sachs, J.P. Morgan, and Royal Bank of Scotland, and has grown to more than 70 partners, including Bank of America and Wells Fargo.
Corda is a specialized ledger for financial institutions to process financial transactions. The ledgers are interoperable, so software applications can communicate, exchange data and use that exchanged data.
3. HPE R3
HPE's blockchain SaaS offering is based on Corda. It runs Corda on HPE's Mission Critical server systems to deliver resiliency and scalability for enterprises bringing distributed ledger applications into production. HPE's Mission Critical DLT systems promise virtually no downtime; in the event of infrastructure failure, transactions are not lost but are saved and processed once the system is running again.
4. SAP Cloud Platform Blockchain
SAP's blockchain as a service, called Leonardo, is based on Hyperledger and resides in the SAP Cloud Platform, so it can be accessed from any device and requires no on-premises hardware or software. Leonardo combines blockchain, machine learning, and Internet of Things (IoT) services in a single ecosystem.
BitSE runs VeChain, a Chinese cloud product management platform built on a blockchain, developed in collaboration with PricewaterhouseCoopers (PwC) to boost blockchain adoption in Asia-Pacific markets.
VeChain focuses on four areas: anti-counterfeiting, supply chain management, asset management and client experiences. It allows merchants to put unique IDs on products to prevent counterfeiting, a major problem in Asia. Its first major use was with D.I.G., China’s largest fine wine importer, to stop counterfeit wines.
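The anti-counterfeiting idea above can be sketched in a few lines: a manufacturer derives a tag from a product serial plus a private secret, publishes only the tag, and a verifier later checks a scanned (serial, secret) pair against the registry. This illustrates the general concept of per-product IDs, not VeChain's actual protocol; all names here are hypothetical.

```python
import hashlib
import secrets

REGISTRY = {}        # stand-in for an on-chain registry of product tags
MAKER_SECRETS = {}   # secrets would live on the product's NFC chip/QR code

def register_product(serial):
    """Manufacturer derives and publishes a tag for one physical product."""
    secret = secrets.token_hex(16)
    tag = hashlib.sha256((serial + secret).encode()).hexdigest()
    REGISTRY[serial] = tag
    MAKER_SECRETS[serial] = secret

def is_authentic(serial, secret):
    """Buyer scans (serial, secret) and checks it against the registry."""
    tag = REGISTRY.get(serial)
    return tag == hashlib.sha256((serial + secret).encode()).hexdigest()

register_product("WINE-2018-0001")
genuine = is_authentic("WINE-2018-0001", MAKER_SECRETS["WINE-2018-0001"])
fake = is_authentic("WINE-2018-0001", "guessed-secret")
print(genuine, fake)  # True False
```

A counterfeiter can copy a serial number but cannot produce the matching secret, so the forged bottle fails verification.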
South Korea's Blocko holds more than 90% of the enterprise blockchain market in its home country. Its blockchain-as-a-service platform, Coinstack, is used by Samsung, LG CNS, Hyundai, and many of the country's other giants. Blocko set up a biometric login and payment authorization system for Lotte Card, a major South Korean credit card provider, which reduced authentication time from 7-10 minutes to 2-3 minutes and cut Lotte Card's annual security solution spending to 10% of its original cost.
Blockstream offers a micropayment processing system called Lightning Charge on its Lightning Network for making payments with Bitcoin. It is designed to make life easier for developers building Lightning-powered payment applications. Blockstream claims the network makes payments faster and cheaper than using the native Bitcoin blockchain directly.
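The reason an off-chain network can be faster and cheaper is the payment-channel pattern: two parties lock funds in one on-chain transaction, exchange many balance updates off-chain, and settle with a single closing on-chain transaction. The sketch below illustrates that pattern in miniature; real Lightning channels add signatures, HTLCs, and dispute timeouts, and all names here are hypothetical.

```python
# Minimal sketch of a Lightning-style payment channel. Two on-chain
# transactions (open + close) carry any number of off-chain micropayments.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.onchain_txs = 1      # the funding transaction
        self.offchain_updates = 0

    def pay(self, frm, to, amount):
        """Off-chain balance update: no blockchain fee, near-instant."""
        if self.balances[frm] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.offchain_updates += 1

    def close(self):
        """Settle the final balances back to the blockchain."""
        self.onchain_txs += 1     # the settlement transaction
        return dict(self.balances)

ch = PaymentChannel(1000, 1000)
for _ in range(50):               # fifty micropayments...
    ch.pay("a", "b", 10)
final = ch.close()
print(final, ch.onchain_txs)      # {'a': 500, 'b': 1500} 2
```

Fifty payments cost only two on-chain transactions, which is where the speed and fee savings come from.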
PayStand uses blockchain technology to simplify sending and collecting money in the accounts receivable and payable process. Its network automates cash management from accounting software through reconciliation. PayStand customers can certify and notarize payments from request to receipt, with third-party notaries or auditors certifying payment records, eliminating the possibility of tampering.
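The tamper-evidence property comes from hash chaining: each record embeds the hash of the previous record, so altering any record breaks every later link. The sketch below illustrates that mechanism in general terms; it is not PayStand's actual ledger format, and the record fields are hypothetical.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic hash of one ledger record."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def append_record(chain, payment):
    """Link each new record to the hash of the one before it."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"payment": payment, "prev_hash": prev})

def verify_chain(chain):
    """Recompute every link; any edited record breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_record(ledger, {"from": "acme", "to": "supplier", "amount": 500})
append_record(ledger, {"from": "acme", "to": "auditor", "amount": 120})
print(verify_chain(ledger))           # True
ledger[0]["payment"]["amount"] = 5    # tamper with an old record...
print(verify_chain(ledger))           # False
```

An auditor who holds only the latest hash can detect any retroactive edit, which is what makes notarized payment records trustworthy.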
9. Peer Ledger
Peer Ledger offers an identity management blockchain that externally certifies real-world identities, giving blockchain the real-to-digital identity mapping the company says the technology has lacked. Peer Ledger uses a public key infrastructure (PKI) system to certify identities outside the blockchain before connecting them to blockchain accounts. The company is targeting trust-sensitive industries such as healthcare for its solution.
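The general flow is: a certifying authority attests off-chain that a real-world identity owns a particular blockchain account, and anyone can later verify that attestation. A real PKI deployment would use X.509 certificates and asymmetric signatures; in the sketch below an HMAC with a certifier-held key stands in for the CA signature purely for illustration, and all names are hypothetical.

```python
import hashlib
import hmac

# Conceptual sketch only: HMAC stands in for a real PKI/CA signature.
CA_KEY = b"certifier-private-key"   # held by the certifying authority

def certify(real_identity, blockchain_account):
    """CA attests that this real-world identity owns this chain account."""
    claim = f"{real_identity}|{blockchain_account}".encode()
    return hmac.new(CA_KEY, claim, hashlib.sha256).hexdigest()

def verify(real_identity, blockchain_account, attestation):
    """Check an identity-to-account claim against the CA's attestation."""
    claim = f"{real_identity}|{blockchain_account}".encode()
    expected = hmac.new(CA_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation)

att = certify("Dr. Jane Roe, General Hospital", "0xA1B2")
print(verify("Dr. Jane Roe, General Hospital", "0xA1B2", att))  # True
print(verify("Impostor", "0xA1B2", att))                        # False
```

The key point is that the binding between identity and account is certified before anything touches the chain, so an account holder cannot claim an identity the certifier never attested.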
Consulting giant Deloitte has a solution for businesses called Rubix Core. Rubix Core's blockchain architecture is designed for building a private network tailored to an industry or organization. It offers a full-stack, Ethereum-compliant enterprise infrastructure as well as a set of GUI tools that make it easy to rapidly build smart contract apps.