Web scraping, or data scraping, is the process of extracting and collecting data from websites. Today, data harvesting is mostly automated and relies on purpose-built tools. On a much smaller scale, regular internet users also scrape data by hand, manually copying and pasting information into a locally stored document or file.
Businesses mainly use automated extraction of web data. It’s an efficient way to gather millions and even billions of data points for business intelligence, market research, sales lead generation, and price comparison.
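To make the automated approach concrete, here is a minimal scraping sketch in Python. It is illustrative only: the URL and the `.price` CSS selector are placeholder assumptions, and it requires the third-party requests and beautifulsoup4 packages.

```python
# Minimal web scraping sketch; the target URL and ".price" selector are hypothetical.
import requests
from bs4 import BeautifulSoup

def scrape_prices(url: str) -> list[str]:
    """Download a page and extract the text of every element tagged as a price."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # stop on HTTP errors instead of parsing junk
    soup = BeautifulSoup(response.text, "html.parser")
    # select() takes a CSS selector; ".price" is an assumed class name.
    return [tag.get_text(strip=True) for tag in soup.select(".price")]

if __name__ == "__main__":
    for price in scrape_prices("https://example.com/products"):
        print(price)
```

At business scale, the same loop simply runs across thousands of pages on a schedule, which is what separates automated harvesting from manual copy-and-paste.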
June 7, 2018 by Siobhan Climer
This blog is Part 1 of a three-part guide to cloud storage. In Part 2 we will examine the technical specifications and architecture of cloud computing. In Part 3 we will discuss how to leverage cloud computing for your business.
What Is Cloud Storage?
What is cloud storage? Let’s start with a definition.
Definition: Cloud storage is the process of storing digital data in an online space that spans multiple servers and locations, and it is usually maintained by a hosting company.
Essentially, an individual or organization can store and access data in this online space, maintained by a host service, using the internet. You are likely using cloud storage already – whether it is Microsoft Office 365, Dropbox, Google Drive – or one of the dozens of other cloud storage providers. Even media sharing platforms like Instagram and YouTube or webmail clients like Gmail and Hotmail fall into this category. Each of these services stores data – your data – on the cloud.
Some big questions remain, even when you understand the process. If you are asking ‘what is cloud storage?’, you probably also want to know:
- How does cloud storage work?
- Where is your data?
- What are the benefits?
- What are the challenges?
Today, we’re going to answer these questions.
How Cloud Storage Works
In the past, organizations relied on storing data on large hard drives or external storage devices, like thumb drives, compact discs, or – yes, we’ll say it – floppy disks. Over time, organizations found ways to consolidate data onto a local on-premise storage device or devices. Today, individuals and enterprises alike use cloud storage to store data on a remote database using the internet to connect the computer and the off-site storage system.
Since cloud storage systems are ubiquitous, the size and maintenance of those systems can be quite different. The smallest cloud storage system consists of a single data server that connects to the internet. Other cloud storage systems are so enormous the equipment can fill entire warehouses called “server farms”. Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Computing all maintain these gigantic data centers, storing data from all over the world.
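As a concrete illustration of that remote connection, here is a minimal sketch of storing and retrieving a file in a cloud object store with the AWS boto3 SDK. The bucket name and file paths are hypothetical, and it assumes credentials are already configured locally.

```python
# Minimal cloud storage sketch using boto3 (pip install boto3).
import boto3

s3 = boto3.client("s3")

# Upload: the file leaves the local disk and lands on the provider's servers.
s3.upload_file("quarterly-report.pdf",
               "example-company-bucket",
               "reports/quarterly-report.pdf")

# Download: the same object can later be retrieved from any machine with
# network access and the right credentials; that is the essence of cloud storage.
s3.download_file("example-company-bucket",
                 "reports/quarterly-report.pdf",
                 "local-copy.pdf")
```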
If you’re considering a transition to the cloud, check out our Path to the Cloud analysis.
Where Is Your Data?
The cloud can feel elusive to some users. Many wonder not only what cloud storage is, but also where all their data goes after they click “save” or “send.”
Let’s use webmail as an example (Gmail, Yahoo, Hotmail, etc.). You can log on and view your email on any computer or device – your laptop, a friend’s computer, a library computer, your phone, a tablet while on vacation – it doesn’t matter. You’ll see all the same emails and folders. That’s because those emails aren’t stored on the hard drive of your computer. They are stored on the email providers’ servers.
The data is still physically stored. Companies that provide cloud storage must also have servers dedicated to storing your data. So, where are these server farms located?
The fact is you may never know for certain. Even providers that are based in one country, such as the U.S., might have servers in China or France or Canada, or anywhere else in the world. Some providers also enlist a third party to store data. In the same way a construction project might have dozens of subcontractors, a cloud storage system can have dozens of sub-cloud storage providers.
What Are The Benefits?
If that makes you a little nervous, you aren’t alone. 49% of businesses are delaying a move to the cloud due to cyber security skills gaps and security concerns. Despite this, 80% of all IT budgets are committed to cloud solutions, and 73% of companies are planning on moving to the cloud in the next two years. Why?
People are asking what cloud storage is because any savvy business leader can see it is the direction the market is moving. Cloud storage provides a competitive advantage if adopted early. And it’s easy to see why:
Data Retrieval: You can retrieve data from virtually anywhere. Companies can now offer work-from-home and bring-your-own-device (BYOD) solutions. As the business changes, the agile nature of cloud storage makes it easy to adjust.
Collaboration: Other people can access the same data and tools, enabling teams to work together more easily. With the right provider and the right tools, even version control can become a breeze. Cloud providers can work with cloud solution platforms – like Zerto Virtual Replication 6.0 – to provide a disaster recovery and backup strategy.
This is especially beneficial to contact centers. Cloud storage is an essential tool for enabling omnichannel strategies.
Pay-As-You-Use: In the past, businesses had to purchase computing infrastructure – hardware, licenses, equipment – based on their expectations of business growth for the next several years. Companies tended to over-buy, fearing being underprepared, and were left with expensive equipment and storage space they didn’t use. Cloud storage and virtualization allow businesses to pay for only the data they actually use, which means they can scale up and down as their needs and strategy change (see the cost sketch after this list).
“Green” Business: Sometimes this benefit is left off the list, but businesses that switch to cloud computing can cut their energy consumption by up to 70%. Maintaining even a small data center is expensive: servers, networks, power, cooling, space, and ventilation all contribute to energy costs. Not to mention work-from-home policies reduce community CO2 emissions and office energy consumption.
Increased Capabilities: Small and medium-sized businesses of the past were only able to use the tools they could afford. Today, using a cloud storage provider means SMBs can utilize the different services the cloud provider offers for all their clients. Think of it this way: your data could be stored in the same facility as the U.S. Department of Defense’s data. Since the DOD has strict requirements for security and capabilities, the data center will meet those needs. And your business benefits, too. Virtualization, web applications, collaboration tools, disaster recovery solutions, centralization, data protection, and security protocols are all available to the users.
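To make the pay-as-you-use benefit above concrete, here is a back-of-the-envelope sketch. Every price and usage figure in it is a hypothetical assumption chosen purely for illustration.

```python
# Toy comparison: buying hardware for peak demand vs. paying per use.
UPFRONT_SERVER_COST = 12_000        # assumed cost of peak-sized hardware ($)
CLOUD_PRICE_PER_GB_MONTH = 0.023    # assumed object-storage rate ($/GB-month)

# Hypothetical storage actually used each month over one year.
monthly_usage_gb = [200, 250, 300, 400, 800, 1200,
                    1200, 900, 600, 400, 300, 250]

cloud_cost = sum(gb * CLOUD_PRICE_PER_GB_MONTH for gb in monthly_usage_gb)
print(f"Cloud, paying only for use: ${cloud_cost:,.2f} for the year")
print(f"Owned hardware sized for the 1,200 GB peak: ${UPFRONT_SERVER_COST:,}")
```

The point is not the exact numbers but the shape of the decision: usage-based billing tracks the demand curve, while purchased hardware must be sized for the peak.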
What Are The Challenges?
Rarely is a solution a silver bullet. Cloud storage is no different. Different concerns have arisen as cloud adoption has increased three-fold in the last year. Specifically, businesses and IT specialists have identified the following concerns:
Attack Surface Area: This is a mathematical understanding of risk. The data stored on a local network is in one place and one place only. By distributing data to more locations, the risk of that data being subject to unauthorized access increases. More people = more risk of compromise. The cloud also uses the internet to transfer data. So, instead of data being transferred via a local area network (LAN), it uses a wide area network (WAN), which increases the risk.
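To make that mathematical point concrete, consider a toy model: if each storage location has a small, independent chance of compromise in a year, the chance that at least one location is breached grows with the number of locations. The 1% per-location figure below is an illustrative assumption, not a measured rate.

```python
# Toy attack-surface model: P(at least one breach) = 1 - (1 - p)^n.
def breach_probability(p_per_location: float, n_locations: int) -> float:
    return 1 - (1 - p_per_location) ** n_locations

for n in (1, 3, 10):
    print(f"{n:>2} locations: {breach_probability(0.01, n):.1%} annual risk")
```

With a 1% per-location risk, one location carries 1.0% annual risk, three carry about 3.0%, and ten carry about 9.6%.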
Cloud providers offer additional services to decrease these risks, such as crypto-shredding. A key-aggregate cryptosystem can consolidate the increased number of decryption access keys. Encryption technology is essential to ensure that data is protected both while it is in transit and when it reaches its destination.
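As a small illustration of encryption at rest, here is a sketch using the third-party cryptography package. Fernet is symmetric encryption (in-transit protection would normally come from TLS instead), and the final comment shows the idea behind crypto-shredding: destroy the key and the stored ciphertext becomes unreadable wherever it was replicated.

```python
# Encryption-at-rest sketch (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key-management system
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record: account 1234")
print(cipher.decrypt(ciphertext))  # readable only while the key exists

# Crypto-shredding: deleting the key renders every replicated copy of the
# ciphertext permanently unreadable, without touching the data itself.
del key, cipher
```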
Supplier Sustainability: What happens if your cloud storage provider goes bankrupt, suffers a disaster, or changes their business strategy? Teaming up with another party means aligning business goals. That isn’t always easy.
Security: Many businesses store sensitive data, such as healthcare or financial records. These industries are subject to strict regulations. Putting this data in the hands of someone else can be a difficult decision to make. Understanding the safeguards cloud storage providers take and how your team will need to leverage those tools for added security is an important step in migrating your data to the cloud. Security is a shared responsibility. Your customers, clients, employees, and team look to you to protect their information and the information vital to your business.
The cloud is here to stay. Understanding cloud computing is essential for adapting to the ever-changing world of technology. Hybrid cloud adoption grew 3X in the last year alone. Businesses and individuals can use cloud solutions to remain agile and prepare for tomorrow.
If you started by asking ‘What is cloud storage?’, you likely now have questions about the specific details of cloud architecture or are looking for ideas on how to leverage the cloud for your business strategy.
- What is cloud storage architecture?
- What are public vs. private vs. hybrid clouds?
- What are object, file, and block storage types?
- How can you protect your data?
- What does a cloud migration strategy look like?
- How can you leverage cloud services for business strategy?
- What is the right cloud solution for you?
In the meantime, you can contact Mindsight to discuss how cloud storage can work for your business, answer any questions we didn’t cover today, or begin planning a roadmap for your business with the help of our solution architects.
Like what you read?
Mindsight, a Chicago IT services provider, is an extension of your team. Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our highly-certified engineers and process-oriented excellence have certainly been key to our success. But what really sets us apart is our straightforward and honest approach to every conversation, whether it is for an emerging business or global enterprise. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges.
About The Author
Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She previously taught STEM programs in elementary classrooms and museums, and writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s writing fantasy, gardening, and exploring the world with her twin two-year-old daughters. Find her on twitter @techtalksio.
A credit score is a number that reflects how much money you have borrowed, the way you use credit, and your history of paying off any loans and credit cards.
This is calculated based on many sources of information, including your payment history, the amounts you currently owe, the length of your credit history, recent applications for new credit, and the mix of credit types you hold.
While your credit score is a record of positive information, such as your track record of making repayments on time and in full, it also includes negative information such as late payments, court judgments, bankruptcies, and insolvencies.
All in all, the higher the score, the better, and the more likely lenders will be to approve any loan or credit card applications you make. The lower the score, the riskier lenders will perceive you to be and the less inclined they will be to support your application for a loan, which is why having a high credit score is essential.
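As a rough illustration, the toy function below maps scores to the band labels commonly published for FICO-style scores, which run from about 300 to 850. Real lenders use their own cutoffs and far more inputs than a single number, so treat the thresholds as illustrative assumptions.

```python
# Toy classifier using commonly published FICO-style score bands.
def score_band(score: int) -> str:
    if score >= 800:
        return "exceptional"
    if score >= 740:
        return "very good"
    if score >= 670:
        return "good"
    if score >= 580:
        return "fair"
    return "poor"

for s in (510, 615, 700, 760, 820):
    print(s, "->", score_band(s))
```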
The pandemic has struck U.S. consumer finances and attitudes in a significant way. Furloughs and layoffs turned the financial health of some households upside down, and ever-changing circumstances produced market volatility. This, in turn, ignited a rush to liquidity as consumers shifted funds to keep more cash on hand. Even financial incentives designed to push spending have inspired many households to save and pay off debt.
However, the level of impact and consumers’ ultimate change in behaviors relied on their affluence levels. Therefore, as marketers consider their future strategies and investments, they must ultimately learn what their customers have experienced since the pandemic’s start.
The Federal Reserve warns that credit scores may have gotten less dependable during and after the coronavirus pandemic. A credit score is an all-powerful number that can decide if a consumer can qualify for a loan, rent a home, or even buy car insurance.
Scores for homeowners who took advantage of payment relief on their mortgages increased by an average of 14 points over the course of the pandemic. That was an even more significant increase than the seven-point rise observed among borrowers who did not take relief on their loans.
Banks and other financial firms use credit scores to gauge consumers’ willingness and capacity to repay their loans. Nonetheless, the firms that produce credit scores have long faced criticism from consumers and advocates who argue that it is not apparent what data is used to determine the numbers or what can be done to boost them.
During the coronavirus pandemic, as unemployment rates surged, mortgage lenders offered a wave of forbearance benefits that enabled borrowers to delay payments for as long as 18 months. The Fed determined that approximately 13 percent of mortgage borrowers were in forbearance for at least a month over the past year.
One thing to keep in mind is that credit scores may now be less informative than they should be. Although they are a potent tool for lenders to identify creditworthy borrowers, the protection of the forbearance programs may muddy the picture.
It is not just credit scores that are losing predictive power. The central bank also stated that about 8 percent of borrowers were already behind on their mortgages when they entered forbearance; after taking relief, the large majority were listed as current on their loans. As a result, credit bureau measures of mortgage default must be viewed with caution.
People enduring financial hardship may be wondering how late or reduced payments may impact their credit scores. Follow these best practices to help minimize the impact of the coronavirus on your credit standing:
It is always important to stay up to date on your credit reports and examine them regularly. Sometimes you may find small mistakes here and there that, once corrected, can in fact boost your score.
There are several ways to do this. By simply accessing your credit report, you can skim through your information and gather all the data you need.
Internet users are increasingly curious about the information about them that’s available online, but they’re not doing much to monitor that information in a structured way.
That’s one finding of a new study released Sunday by the Pew Internet and American Life Project. Entitled “Digital Footprints: Online Identity Management and Search in the Age of Transparency,” the study found that while 47 percent of Internet users have searched for their own name online, only 3 percent of those who have done so make it a regular practice, Susannah Fox, associate director at the Pew Internet and American Life project and a coauthor of the report, told TechNewsWorld.
In a similar Pew study released in 2002, by contrast, only 22 percent of Internet users had searched on their own names.
The new report is based on a December 2006 national telephone survey of 2,373 adults, of whom 1,623 were Internet users. The margin of error is 3 percentage points.
More Interesting Searches
People searches have been going on for decades, but the Internet has increased the volume of information readily available and allowed amateurs to join the search, Fox noted.
“Search engines have made it easier, and Web 2.0 has made it more interesting,” she said.
Specifically, the explosion of blogs, video and online profiles through social networking sites and content sharing sites like YouTube and Flickr have increased the size of people’s digital footprints, Fox explained.
“The cumulative traces of our online activity are more visible in the age of Web 2.0,” added Mary Madden, a coauthor on the report. “The more content we voluntarily contribute to the public or semipublic corners of the Web, the more we become not only findable, but knowable.”
A Public Persona
Roughly one in 10 Internet users surveyed is required by their job to have information available online, Fox noted. Sixty-eight percent of that group use search engines to see what content is associated with them online.
“If you have a reason to be a public persona, you’re more likely to keep track of your digital footprint,” she explained.
Fifty-three percent of Internet users have searched online for information about personal and business contacts, according to the study.
Specifically, 72 percent of people searchers have sought contact information online, while 37 percent have looked to the Web for information about someone’s professional accomplishments or interests, the researchers found. Thirty-three percent have sought out someone’s profile on a social and professional networking site, and 31 percent have searched for someone’s photo.
Thirty-one percent have searched for someone else’s public records — such as real estate transactions, divorce proceedings, bankruptcies, or other legal actions — and 28 percent have searched for someone’s personal background information, the study found.
“Nostalgia seems to motivate quite a few Internet users,” Fox said. “The most popular search target is someone from the past — an old friend, an old flame or a former colleague.”
Such findings underscore the Internet’s capacity to reunite and reignite social connections, she added.
A Laid-Back Approach
Despite all the searching going on, however, people have apparently become less sensitive over the last decade to privacy concerns, Fox asserted.
Sixty percent of Internet users say they are not worried about how much information is available about them online, and only 38 percent say they have taken steps to limit the amount of online information that is available about them. Teens are more likely to restrict access to their online profiles, the study found.
Roughly a third of Internet users believe that their email address, home address, home phone number or their employer is available online; only a quarter think a photo, names of groups they belong to, or things they have written appear online; and very few believe their political affiliation, cell phone number, or video appear online.
Yet much of that data is available, Pew found in interviews with experts, suggesting that many users are simply unaware of the wide-ranging nature of online personal information.
Rather than assuming people are less concerned with privacy, however, a better conclusion from the report might be that the meaning of privacy has changed, Daniel J. Solove, an associate professor of law at the George Washington University Law School and author of the book, The Future of Reputation: Gossip, Rumor and Privacy on the Internet, told TechNewsWorld.
“I think it is true that people are currently not aware of the consequences of their online reputations,” Solove said. “This is all relatively new, so many people are putting information online without having experienced the dark side or realizing the damage it could do to themselves or others. That’s definitely a big problem.”
The fact that people are comfortable having information about them online does not, however, mean that they don’t expect privacy, Solove added.
“The idea that privacy is about keeping deep, dark secrets hidden has become an almost antiquated notion,” he explained. “Today, there are lots of other things people want, including control over the information so that it doesn’t get used in certain ways.”
People may be comfortable having information publicly available, for example, but they might not be comfortable if it were used for advertising purposes, he said.
“These results are very interesting, but how you interpret the data going into the future is fairly complicated because it depends on what we understand privacy to be,” Solove concluded. “Some people haven’t been bitten yet, and we’re also dealing with nuanced understandings of privacy. But you can’t conclude privacy is dead.”
Need for Tools
For users who are concerned, there may not even be much they can do today to protect their online information, Marc Rotenberg, executive director of the Electronic Privacy Information Center (EPIC), told TechNewsWorld.
“I think this is more evidence that there are not good tools out there that allow Internet users to safeguard their privacy,” Rotenberg said. “That’s been the problem with privacy, and not surprisingly, users are frustrated with opt-outs and systems that promise but don’t deliver anonymity.”
What needs to happen, Rotenberg added, is for steps to be taken on a higher level: “The obligation falls on the policy makers and the companies that develop these services to do a better job.”
Riken Wires a New Path to Scalable Quantum Computing
(MirageNews) Yasunobu Nakamura, team leader of the Superconducting Quantum Electronics Research Team, has written an extensive explanation of why he believes a new circuit-wiring scheme, developed over the last three years by RIKEN in collaboration with other institutes, opens the door to scaling up to 100 or more qubits within the next decade.
Challenge one: Scalability
The challenge of scalability arises from the fact that each qubit needs wiring and connections that provide control and readout with minimal crosstalk. As we moved past tiny two-by-two or four-by-four arrays of qubits, we have realized just how densely the associated wiring must be packed, and we’ve had to create better systems and fabrication methods to avoid getting our wires crossed, literally.
Challenge two: Stability
In theory, one way we could deal with instability is to use quantum error correction, where we exploit several physical qubits to encode a single ‘logical qubit’, and apply an error correction protocol that can diagnose and fix errors to protect the logical qubit. But realizing this is still far off for many reasons, not the least of which is the problem of scalability.
Challenge three: Quantum circuits
We at RIKEN eventually came up with the idea of using a superconducting circuit. The superconducting state is free of all electrical resistance and losses, and so it is well suited to responding to small quantum-mechanical effects.
We use our superconducting quantum-circuit platform in combination with other quantum-mechanical systems. This hybrid quantum system allows us to measure a single quantum reaction within collective excitations, be it precessions of electron spins in a magnet, crystal lattice vibrations in a substrate, or electromagnetic fields in a circuit, with unprecedented sensitivity.
Interfacing a superconducting quantum computer to an optical quantum communication network is another future challenge for our hybrid system.
Well this is a little disappointing, although not without precedent.
Both Sony’s PlayStation 4 (PS4) and Microsoft’s Xbox One (the Kinect-less version pictured above, obviously) scarf down a lot more electricity than their immediate predecessors (their respective “slim” versions), according to a report from The Natural Resources Defense Council (NRDC). In fact, they consume “two to three times more annual energy,” said the NRDC.
The good news is that they consume less than the launch versions of the PlayStation 3 and Xbox 360. And if history is any guide, there’s reason to hope that future iterations will be more energy efficient. For now, gamers that have upgraded to the new consoles better be ready for heftier power bills.
After cooking up a usage profile for the PS4 and Xbox (and Wii U), the NRDC concluded:
Putting it all together, the Xbox One consumes 233 kWh/y on average, the PS4 181 kWh/y, and the Wii U 37 kWh/y. The Xbox One’s annual energy consumption is therefore roughly 30 percent higher than the PS4’s, on average, and more than six times higher than the Wii U’s.
In total, consoles are predicted to consume 10 – 11 billion kilowatt hours per year, enough to power Houston, the fourth-largest U.S. city. You can check out the rest of the NRDC’s findings here (PDF).
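As a quick sanity check, the short sketch below reproduces those comparisons from the NRDC numbers quoted above and adds a rough annual cost. The electricity rate is an assumed placeholder, since prices vary widely by region.

```python
# Verifying the NRDC comparison; the $/kWh rate is an assumption.
XBOX_ONE_KWH, PS4_KWH, WII_U_KWH = 233, 181, 37  # kWh per year (NRDC)
PRICE_PER_KWH = 0.12                             # assumed electricity rate ($)

print(f"Xbox One vs PS4:   {XBOX_ONE_KWH / PS4_KWH - 1:.0%} more energy")
print(f"Xbox One vs Wii U: {XBOX_ONE_KWH / WII_U_KWH:.1f}x the energy")
print(f"Xbox One per year: ${XBOX_ONE_KWH * PRICE_PER_KWH:.2f} at $0.12/kWh")
```

That works out to roughly 29% more than the PS4 and about 6.3 times the Wii U, matching the report’s figures.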
Image credit: Microsoft
We’ve all seen the commercials that show us how quickly 5G can stream a video. But it’s in the professional world where 5G will be transformative. That’s especially true for the public safety sector, where 5G makes life-saving applications possible, like real-time drone surveillance, enhanced vision for firefighters in smoky rooms, and crisis negotiation robots.
5G is positioned to help empower a safer society by supporting the development of new public safety applications and by enhancing emergency responder radio communications systems (ERRCS), including public safety distributed antenna systems (DAS).
Before explaining how 5G will improve public safety, let’s briefly review 5G.
What is 5G?
5G is the fifth generation of wireless data technology. Just as the jump from 1G to 2G cell phone technology added the ability to text and 3G made smartphones possible, the introduction of 5G will bring capabilities not possible with the current 4G and 4G LTE technologies. For example, 5G is projected to enhance pre-existing technology such as distributed antenna systems (DAS) and provide new capabilities like helping first responders access real-time data.
5G opens the door to new and innovative public safety applications and technologies because of its high network speed and lower latency. Here’s a bit more on those two terms.
High network speed - Most consumers understand this metric. Higher speeds mean less time to download large files, such as movies. For public safety agencies, higher network speeds will make it possible to send large amounts of data (such as high-definition video) over the wireless network in real-time.
Low latency - Latency is the time it takes a network to respond to a request. For example, CNET, a technology news website, describes latency as “the lag between the moment you try to shoot a space invader and the moment the internet server hosting the game tells your app whether you succeeded.” 5G promises to cut latency down to 1-2 milliseconds, which is more than ten times faster than current networks and fast enough to conduct critical life-or-death operations via remote connection.
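As a rough illustration of what latency means in practice, the sketch below times a TCP connection handshake using only the Python standard library. The hostname is a placeholder, and the measured figure includes far more than radio air time (DNS lookup, routing, server load), so treat it as a ballpark, not a benchmark.

```python
# Rough round-trip latency sketch using only the standard library.
import socket
import time

def round_trip_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # time only the TCP connection handshake
    return (time.perf_counter() - start) * 1000

print(f"{round_trip_ms('example.com'):.1f} ms to open a connection")
```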
How Can 5G Improve In-Building Public Safety Communication Systems?
For public safety agencies and the people they protect, 5G will create a whole new category of tools and provide a next-generation upgrade to existing technology such as public safety DAS and bi-directional amplifiers (BDAs). 5G is positioned to improve these tools due to high network speed and lower latency.
The performance upgrade provided by 5G is increasingly significant as property owners need to extend reliable public safety radio communications indoors to adhere to new National Fire Protection Association (NFPA) code regulations that mandate public safety wireless systems in new and existing buildings.
Next-generation in-building public safety communications systems support first-responder radio networks and bring secure connectivity to buildings, including common radio dead zones like stairwells, elevators, and basements. Having a strong, consistent indoor service throughout a facility will be especially important in the coming 5G world.
A 5G-ready in-building public safety communication system (one that will keep providing reliable in-building cellular connectivity and require little modification as 5G becomes more widely available) can:
- Support 5G upgrades
- Align with most emergency notification systems
- Expedite emergency response time
- Improve the ability to call 911 throughout the building
- Enable better emergency personnel coordination
- Ensure public safety codes and ordinances are met
What are the Main 5G Public Safety Applications?
Some of the specific fire fighting, EMS, and police applications for 5G are currently in development. Earlier this year, Verizon announced a list of companies that are developing police and fire 5G applications at the wireless provider’s 5G First Responder Lab. The initial spots in Verizon’s program went to the following five companies.
- Adcor Magnetic Systems - Sensors to detect, identify, track and correlate on a digital 3D environment.
- Aerial Applications - Drone solutions to turn visual data into actionable insights.
- Blueforce Development - Situational awareness solutions and sensors to enhance field operations and communication.
- Kiana Analytics - A physical safety and security platform with engagement analytics for real-time location/situational awareness.
- Qwake Technologies - Augmented reality products to help firefighters see in smoke-filled, zero-visibility, and hazardous environments.
Prepare for 5G with a Reliable In-Building Public Safety Solution
While 5G is here for some, it will likely be years before it’s widely available because the super-high frequency spectrum that allows 5G to transmit data comes with the drawback of a shorter range than 4G networks. That means wider 5G availability will require a massive national 5G infrastructure build-out that may take several years to reach some police and fire departments.
Because of this, emergency service professionals should familiarize themselves with the coming 5G capabilities but shouldn’t delay communication system upgrades that use the current 4G technology.
One key existing technology facility managers in all types of venues should consider implementing today is a public safety DAS built on 4G service. Facility managers should partner with a premier integrator to design, install, maintain, and monitor an in-building public safety communication system that is 5G ready, multi-carrier capable, and supported by experienced technicians 24/7.
Experienced technicians design ERRCS solutions based on the spectrum environment (the radio frequencies used locally by first responders), building parameters such as wall types, thickness, and building materials, and the local authority having jurisdiction’s (AHJ) code requirements to ensure successful implementation.
Editor's Note: This post was originally published on January 03, 2020, and has been updated for accuracy and current best practices.
Bonus Material: 5 Steps To Better Risk Management
What Is Governance In Business: Introduction
When we get behind the wheel of a car, we adhere to a certain set of rules. We stop when we see a stop sign, we give the neighboring lane the right of way when we see a yield sign, we stay on our designated side of the road…but what happens when someone ignores one of those rules? Sometimes, nothing; other times, fatal catastrophe.
We all have our own interests in mind while driving, but without drivers following the rules that are in place to keep us safe and make things more efficient for all, the road would be a much scarier place.
Roads are an excellent microcosm of what governing bodies do. When we hear the term “governance,” we tend to think about people putting rules in place on a micro level in order to gain some semblance of control over their area of responsibility on a macro level. So how do people in charge decide on the rules that we’re all expected to follow?
Good governance does not refer just to making the “correct” decisions; it’s about using the best possible processes to inform your decision making. Depending on your area of responsibility, the decisions you make have the potential to impact your employees, customers, stakeholders and the community at large.
In this guide, we’ll discuss governance as it relates mostly to organizations. We’ll explain “what is governance,” the definition of governance, and some various types of governance. Then, we’ll delve into how to solve for some common challenges of corporate governance.
Governance is defined by the processes used to make and implement decisions. This is the broadest explanation of the term; in order to fully understand it, let’s examine the different types of governance that can exist.
Types of Governance
There are infinite types of governance. So long as an entity has been delegated any amount of power over another, there exists a certain type of governance.
In history class, you most likely learned about the various forms of governance as it pertains to ruling a state: monarchy, democracy, oligarchy, authoritarianism, and totalitarianism.
HR professionals also govern. Their governing efforts may include defining how employees should prioritize their projects, holding an ethics council, deferring to a board of financial advisors, and clearly specifying how operations should be executed.
IT governance, for instance, refers to the method that an organization uses to control and direct information technology. Practices within IT governance vary widely by industry, but a few common ones include compliance, audit, asset management, quality assurance, and information security.
Corporate governance refers more specifically to the framework of practices by which a company or organization ensures fairness and transparency with its stakeholders. These stakeholders most likely include customers, employees and their associated community (American Progress).
There exist both explicit and implicit agreements between the corporation and its stakeholders so that responsibilities can be properly distributed. There also need to be procedures in place for resolving conflicting interests, along with proper supervision and various other controls, in order to impose a system of checks and balances.
In short, corporate governance is a method of governing a company much like a sovereign state. Its own set of customs and laws are applied to its employees from the highest and lowest levels. It’s intended to increase accountability and prevent massive disasters from occurring.
Historically, poor corporate governance has been tied to failures in risk management. Some notable examples are the Target breach, the Volkswagen emissions failure and the Wells Fargo account scandal. These highly publicized scandals indicate more than a financial and reputational hit for the associated companies, because each case directly impacted a wide variety of stakeholders. Enterprise risk management, or ERM, is a solution that encourages companies to protect their shareholders through good governance.
Corporate Governance & ERM
Governance functions were designed to create a standardized process for managing the most prevalent risks that organizations face. In operations, these risks include financial misstatements, fraud, vendor management, disaster recovery, security threats, non-compliance, human error, and more.
ERM is a framework — it shouldn’t be a siloed function, but rather a collaborative approach to all governance areas. Its focus is on creating a standardized structure and taking a risk-based approach to objectively prioritizing risks across all functions and levels at an organization.
Good Governance Challenges
Information is scattered. Housing risk, performance, and compliance information on various spreadsheets and word documents using different methods and tools throughout the organization makes it hard to even locate — let alone compare and aggregate — the necessary information.
Misalignment of priorities. Unfortunately, stakeholders’ priorities aren’t always aligned with the overall goals of the organization. Because of this, it can be difficult to know which tasks are important to whom and which activities are considered top priority. Without proper alignment, it’s impossible to accurately prioritize activities and their associated risks. A lack of transparency makes risk, performance and compliance information hard to discover, collect and maintain.
Siloed activities. Within every organization, business areas are conducting governance activities. Unfortunately, each of these areas are conducting these activities under different names based on different assumptions and standards by different methods.
Unknown relationships. Understanding the relationships between all of our everyday activities and related resources, and how we are personally connected to the activities and resources of others, is incredibly complex. Understanding the risk, performance and compliance consequences of all of these activities and resources, as well as where existing risk lies, is even more difficult. This is why we sometimes feel we are in meetings all day discussing loss events that have already happened, rather than identifying emerging risks.
Organizations need to build a robust ERM framework, or taxonomy, which provides a holistic view of all the information and relationships across the organization. Taxonomy structures and preserves the integrity of information, so even as changes occur in multiple parts of the organization, managers can compare risks like apples to apples.
Accountability is a fundamental requirement of good governance. We rely on businesses every day, and they have an obligation to assess risk, implement the appropriate controls, monitor their effectiveness and report to the board. If the process owners at the organization are not answerable for the consequences of risks, they can negatively impact customers, employees and investors.
LogicManager is fighting to make a change: we believe that ERM and GRC is the key to implementing and sustaining good governance, and our mission is to provide the tools and services that make this possible. Our all-in-one ERM software can help your business integrate governance and risk areas.
Want to see for yourself? You can find out more about our comprehensive GRC solutions here.
Gastric problem & symptoms
Bloating, a common symptom of gastric problems, is the sensation of excess stomach gas that has not yet been released and is stuck, causing discomfort.
Bloating, burping and passing gas are natural functions of the body, usually caused by swallowed air or produced as food is broken down during digestion.
Eating too fast and gulping down too much food or drink makes us take in a lot of air along with the food, which can lead to gas.
However, gastric problems are often triggered by certain types of foods and aerated drinks. Rich, greasy, fat-laden foods and sodas increase the chances of gas.
Research shows that certain food components in the normal diet, such as resistant starch, oligosaccharides and plant fibers, are incompletely absorbed and can cause gas and other stomach discomforts.
Moreover, poor digestion is a major cause of gas, as it produces a more unstable microbial community in the gut.
Drugs that help alleviate stomach acid issues include H2 blockers such as ranitidine, nizatidine, cimetidine and famotidine.
The line between security and compliance is easily blurred. Sometimes they feel like a moving target. Maybe you’ve asked yourself one of these burning questions:
- How do we create comprehensive security programs while meeting compliance obligations?
- Is checking the compliance box really enough?
- How does all this enable the business to function and move forward?
These questions shape the direction of an organization and ultimately cause it to succeed or fail.
So, in this article, let’s clarify the differences between IT security and IT compliance.
What is IT security?
Security officers follow industry best practices to secure IT systems, especially at the organizational or enterprise level. Security pros are constantly looking at how to both:
- Prevent attackers from harming the company IT infrastructure and business data
- Mitigate the amount of damage that is done when an attack is successful
In the past, administrators would take a purely technical approach and rely heavily on systems and tools to protect their network. Today, though, things have changed.
Due to increased specialization and technical know-how, IT security is not limited to a single field or discipline. Instead, there are multiple areas such as architecture and infrastructure management, cybersecurity, testing, and especially information security—arguably the most critical policy for any organization.
Information security (InfoSec) is exercising due diligence and due care to protect the confidentiality, integrity, and availability of critical business assets, something security pros know as the CIA Triad. Any IT security program must take a holistic view of an organization’s security needs and implement the proper physical, technical, and administrative controls to meet those objectives.
Taking the three key functions of confidentiality, integrity, and availability, organizations can implement effective InfoSec protocols. But what does CIA actually mean?
Here are three key sections in understanding how InfoSec must be managed.
- Confidentiality. Company information can be sensitive information—customer data, proprietary information, innovations in the works. It is the duty of IT security to protect this information. Ensuring that only the correct and authorized user(s) and system(s) can read, change, and use data is key.
- Integrity. Information and the system it is contained in must be correct. Having integrity means knowing that what is stored is correct and the system has measures to ensure that.
- Availability. Systems and information need to be available when they are needed. If a system isn’t available, it can’t be relied on.
Two additional properties, authentication and non-repudiation, are also vital to IT security.
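As a small illustration of the integrity and authentication properties, the sketch below uses only the Python standard library: a hash detects changes to stored data, and an HMAC additionally ties the data to the holder of a secret key. The record and key are made-up placeholders.

```python
# Integrity via SHA-256, authentication via HMAC (standard library only).
import hashlib
import hmac

record = b"invoice #1041: $5,200 due 2021-06-01"

digest = hashlib.sha256(record).hexdigest()  # stored alongside the data
tampered = record.replace(b"5,200", b"9,200")
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: change detected

secret_key = b"shared-secret"  # assumed to be exchanged out of band
tag = hmac.new(secret_key, record, hashlib.sha256).hexdigest()
recomputed = hmac.new(secret_key, record, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, recomputed))  # True: data and sender check out
```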
(Learn more in our IT security policy explainer.)
How IT security looks today
Traditionally, security professionals would rely on devices like firewalls and content filters along with network segmentation and restricted access. But as modern threat agents became more and more sophisticated, the tools that security analysts and officers have to use become more complex too.
Old-school technical controls cannot account for:
- 95% of cyber-security breaches coming from human error
- 45% coming from hacking, including technically adept threat agents exploiting vendor-created backdoors or executing remote code
Today, security professionals need to have a fuller kit of tools to battle against malicious outside threats.
The concept of IT Security comes down to employing certain measures to have the best possible protection for an organization’s assets. At the heart of all good IT security protocols is the CIA triad.
What is IT compliance?
IT compliance is the process of meeting a third party’s requirements with the aim of enabling business operations in a particular market or aligning with laws or even with a particular customer.
Compliance sometimes overlaps with security—but the motive behind compliance is different. It is centered around the requirements of a third party, such as:
- Industry regulations
- Government policies
- Security frameworks
- Client/customer contractual terms
Let’s say that IT security is a carrot: it motivates the company to protect itself because it is good for the company. IT compliance, then, is the stick—failure to effectively follow compliance regulations can have serious effects on your business.
Often, these external rules ensure that a given organization can deal with complex needs. Sometimes, compliance requires an organization to go beyond what might be considered reasonably necessary. These objectives are critical to success because a lack of compliance will result in:
- At minimum, a loss of customer trust and damage to your reputation.
- At worst, legal and financial ramifications that could result in your organization paying hefty fees or being blocked from working in a certain geography or market.
Areas where compliance is a key business concern:
- Countries with data/privacy laws like GDPR, the California Consumer Privacy Act, and more
- Markets with heavy regulations, such as healthcare or finance
- Clients with high confidentiality standards
These areas almost always demand a high level of compliance. Importantly, IT compliance can apply in domains other than IT security. Complying with contract terms, for example, might be about how available or reliable your services are, not only whether they’re secure.
When is compliance necessary?
When you need to comply with certain regulations depends on many factors:
- Your industry
- Your company’s size or location
- The customers you serve
- Many other factors
Many laws outline very specific criteria that a business must meet—but they don’t apply to everyone. For example:
- HIPAA is a U.S. law that defines how the healthcare industry protects and shares personal health information.
- SOX is a financial regulation in the U.S. that applies to a broad spectrum of industries.
- Payment Card Industry Data Security Standards (PCI-DSS) are a group of security regulations that protect consumer privacy when personal credit card information is transmitted, stored, and processed by businesses.
- ISO 27001, on the other hand, is not a law but a standard that companies can opt into by aligning with these InfoSec standards.
Other standards you must comply with might not be law or opt-in—some might originate directly with your customers. A high-profile client may require the business to implement very strict security controls in order to award their contract.
Compliance & GRC
Compliance is only one section of a greater scheme of ensuring an organization is compliant with industry, government, or other regulations. These are summed up in the acronym GRC:
- Governance. Before compliance is possible, organizations need to make plans that are directed and controlled. Setting direction, monitoring developments, and evaluating outcomes are all key to effective governance.
- Risk. Danger is everywhere and it needs to be recognized. Compliance needs for risks to be identified, analyzed, and controlled as much as is possible.
- Compliance. When appropriately governed and risk-managed, an organization can evaluate its compliance. Standards are not just set but evaluated and managed at every step.
Comparing IT security & IT compliance
Security is the practice of implementing effective technical controls to protect company assets. Compliance is the application of that practice to meet a third party’s regulatory or contractual requirements.
Here is a brief rundown of the key differences between these two concepts. Security is:
- Practiced for its own sake, not to satisfy a third party’s needs
- Driven by the need to protect against constant threats to an organization’s assets
- Never truly finished and should be continuously maintained and improved
Compliance is:
- Practiced to satisfy external requirements and facilitate business operations
- Driven by business needs (rarely technical needs)
- “Done” when the third party is satisfied
At first glance, it’s easy to see that a strictly compliance-based approach to IT security falls short of the mark. This attitude focuses on doing only the minimum required in order to satisfy requirements, which would quickly lead to serious problems in an age of increasingly complex malware and cyberattacks.
How security & compliance work together
We can all agree that businesses need an effective IT Security program. Robust security protocols and procedures enable your business to go beyond checking boxes and start employing truly effective practices to protect its most critical assets.
This is where concepts like defense-in-depth, layered security systems, and user awareness training come in, along with regular tests by external parties to ensure that these controls are actually working. If a business were focused solely on meeting compliance standards that don’t require these critical functions, they would be leaving the door wide open to attackers who prey on low-hanging fruit.
While compliance is often seen as doing only the bare minimum, it’s useful in its own right. Compliance is an asset to the business—it isn’t just hoops you must jump through. Becoming compliant with a respected industry standard like ISO:27001 can:
- Bolster your organization’s reputation
- Garner new business with security-minded customers
Compliance can also help to identify any gaps in your existing IT security program which might not have otherwise been identified outside of a compliance audit. Additionally, compliance helps organizations to have a standardized security program, as opposed to one where controls may be chosen at the whim of the administrator.
Secure & comply: both business-critical
The astute security professional will see that security and compliance go hand in hand and complement each other in areas where one may fall short.
- Compliance establishes a comprehensive baseline for an organization’s security posture.
- Diligent security practices build on that baseline to ensure that the business is covered from every angle.
With an equal focus on both of these concepts, a business will be empowered to not only meet the standards for its market but also demonstrate that it goes above and beyond in its commitment to digital security.
Political corruption data measures levels of corruption within governments. This includes either the government of a country as a whole or only local governments.
This data primarily comes from third-party international organizations. However, some government agencies monitor corruption within the government and in the private sector, such as the US Department of Justice’s Public Integrity Section and the Inspectors General.
Corruption data is generally presented as a rank or a score, based on measures and methodologies set by the investigating agency. Some organizations—for example, Transparency International with its Global Corruption Barometer—poll citizens on their perceptions of corruption in their governments.
All organizations that collect political corruption data state that they use the information to fight against all forms of corruption; their aim is to eliminate it entirely. Activists and politicians use the collected data to do the same thing.
By its nature, measuring corruption directly is extremely difficult. Researchers therefore often use indirect methods, despite their lack of accuracy. Polls and surveys, in particular, cannot make distinctions between perception of crime and evidence of it. Afterward, agencies supplement these indirect measures with analysis from experts.
Despite these difficulties, however, researchers can still gather direct evidence of corruption via reports submitted to regulatory agencies, news, and criminal convictions data. They also closely watch election results for anomalies.
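The sources above do not prescribe a specific method, but one classic screen for “anomalies” in reported counts is a first-digit (Benford’s law) check, since many naturally occurring vote and financial tallies skew heavily toward small leading digits. The sketch below uses only the Python standard library, and the reported counts are made-up placeholders.

```python
# First-digit (Benford) screen for anomalies in reported counts.
import math
from collections import Counter

def first_digit_freqs(counts: list[int]) -> dict[int, float]:
    digits = [int(str(c)[0]) for c in counts if c > 0]
    tally = Counter(digits)
    return {d: tally[d] / len(digits) for d in range(1, 10)}

reported = [1204, 1893, 2310, 1150, 987, 3120, 1402, 1675, 2980, 1044]
observed = first_digit_freqs(reported)
for d in range(1, 10):
    expected = math.log10(1 + 1 / d)  # Benford's expected share
    print(f"digit {d}: observed {observed[d]:.2f}, expected {expected:.2f}")
```

Large, persistent deviations from the expected distribution do not prove fraud, but they flag where investigators should look more closely.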
In short, to ensure your political corruption data is as accurate as possible, make sure your sources are as diverse and fact-based as possible, with any supplemental expert analysis provided by only the best and most thorough authorities you can find.
In 2020, the world saw the largest foreign bribery resolution to date, as Airbus, a global provider of civilian and military aircraft based in France, agreed to pay nearly US$4 billion in combined penalties to the U.S., France, and the United Kingdom to resolve foreign bribery charges. France played a huge role in holding the company to account, thanks to a recent law that helped reform its anti-corruption legal framework.
The Fund for Peace’s Data for Peace uses data science to forecast and mitigate—if not prevent—major conflict.
The Fund for Peace Fragile States Index (FSI) tracks 179 countries in the world. Indicators for fragility—which includes violence and political instability—range from group cohesion to economic development to emigration.
The Fragile States Index publishes a yearly report, using both qualitative and quantitative data to provide the fullest picture of national stability and growth.
Chinascope Data tracks everything from official PRC reports and Chinese social media to the personal and business connections of US parties with China.
Public schools have often been accused of failing to provide their students with an individualized learning experience. Increasing budget cuts and class sizes are becoming a national trend in the United States, and federal grants are being paid out based on standardized test scores. Can you blame schools for doing whatever they can to stay afloat?
A newly launched program aims to help schools and students. It offers more personalized learning opportunities, although the technology behind it may make parents uneasy. The Bill & Melinda Gates Foundation, working with the Carnegie Corporation of New York and school officials from across the country, has developed a $100 million database to chart the academic careers of public school students from the time they first enter elementary school all the way through graduation.
Tracking student development
The database will track many aspects of a student’s development, including test scores, learning disabilities, behavioral issues and even hobbies and how much homework they complete. Educational software developers have said that they can use the information in this database to create custom products for students and teachers alike.
For example, developers can create games that appeal to certain combinations of hobbies and learning needs. Or they could craft lesson plans for a specific classroom. In theory, the quality of education to public school students will be increased while software companies are able to raise their profits with more focused products.
States and individual school districts are free to opt in to the database if they choose. Right now, seven states have agreed to provide student information to the project. This aspect of the program may raise some concerns with both parents and privacy advocates. The Department of Education has determined that public schools aren’t required to obtain parental consent in order to share information about their students as long as the recipient is a “school official” with “legitimate educational interest.” The tricky part is those terms have been very broadly phrased. A “school official” could be a company hired by the school. It wouldn’t take much contractual gymnastics to find loopholes in this setup.
The cost of information
While the idea of schools tracking and selling student information may make some people uncomfortable, what has caused real alarm is the practice of sorting student information by their social security numbers. The notion of schools selling sensitive student information to private companies has resulted in resistance from parents and advocacy groups. In addition to concerns about what those companies might do with that information, passing around social security numbers and storing them on multiple databases raises security fears.
Are you comfortable with schools storing and selling information about their students? Tell us what you think in the comments section below and remember to check out our Facebook page! | <urn:uuid:dd284cd1-b13f-45ad-acf7-93f1bfaf38cb> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/the-path-to-success-assigning-students-a-barcode | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00588.warc.gz | en | 0.963627 | 531 | 2.8125 | 3 |
In today’s changing work environments, cyberthreats are becoming more and more prevalent and are constantly evolving. Cybercrime is one of the fastest-growing types of crime in the world. In fact, according to Harvard Business Review, “If it were measured as a country, cybercrime would be the world’s third-largest economy after the U.S. and China.” It affects all industries and no business, big or small, is invulnerable, so it is essential to keep your company secure from cyberattacks. Here are 10 cybersecurity tips to protect your business.
1) Think before you click
Cybercriminals will often use email to try to gain access to your computer and steal your data. Their goal is for you to click on a link, open an attachment, or download something; if you do, malware will automatically install – often without you even knowing it – and compromise your computer. Links can be disguised to appear legitimate when they aren't, and it's easy to click on a link in an email without even thinking about it. Check links before you click by hovering over them to see the target URL.
If an email from an unexpected source contains an attachment and/or if the attachment seems suspicious, don’t open it. A “best practice” is to NEVER open an attachment in an email unless you are expecting to receive it.
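If you want to check a link's real destination programmatically rather than by eye, the hostname comparison can be sketched in a few lines of Python. The two URLs below are made-up examples of a displayed link and its actual target:

from urllib.parse import urlparse

displayed = "https://www.yourbank.com/login"                  # what the email shows
actual = "http://yourbank.com.account-verify.example/login"   # where the link really points

# Compare the hostnames; a mismatch is a classic phishing tell.
if urlparse(displayed).hostname != urlparse(actual).hostname:
    print("Warning: displayed link and real destination do not match")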
2) Stay updated
Software updates are often issued to fix security flaws that have been identified. Sure, update notifications can be annoying, but updating and rebooting your computer is a small inconvenience compared to the risk of getting infected with malware or exposed to other security vulnerabilities. To protect yourself, always update to the latest version of your software.
3) Watch out for phishing scams
Over 3 billion fake emails are sent daily. Phishing attacks are one of the biggest cybersecurity threats because they are easy to create and inexpensive to send and have the potential to reach many victims. Phishing is a cybercrime in which a hacker will pose as someone the recipient knows or is familiar with or as a legitimate business or institution to trick them into clicking on a link. The link will take the user to a malicious webpage that looks just like the authentic site. The malicious site then requests sensitive personal data or can install software that infects the system with malware. This can result in financial loss, identity theft, and loss of data, just to mention a few.
Avoid emails from unfamiliar senders, look out for grammatical errors, be aware of inconsistencies or anything that looks suspicious in the email, look at the actual email address (the email may appear as coming from someone you know but if the actual address is different, it’s likely a phishing scam), and hover over any links to check the destination URL.
Overall, if an email seems suspicious, don’t open it, as it may be a phishing scam.
4) Be smart about passwords
Use strong, unique passwords for every account and change passwords periodically. Cybercriminals know that people tend to use the same passwords for multiple online accounts. The dark web provides hackers with billions of username/password combinations available for sale – or even for free. They’ll use these stolen credentials to see if they can get into your accounts. So be sure to use unique passwords for all your accounts.
Put a policy in place to change passwords regularly – at least every 3 months.
It’s also important to use strong passwords. Avoid family or pet names, city you live (or have lived) in, a favorite sports team, and your birth year or other year that has special meaning. Cybercriminals can easily find out this kind of information through social media and online. You may want to use a passphrase, which is a short sentence that only you would know and incorporates special characters and numbers in place of some of the letters. The longer and more complex, the better. For example, “I drive to work on 1st Avenue every Monday” is a great passphrase that is long, complex, and easy to remember.
5) Use multiple layers of protection
Having a strong password is important but you should also have two-factor or multi-factor authentication. This is the practice of requiring additional credential information so if an attacker has accurately guessed your password, there’s an additional security measure in place to ensure your account is not breached.
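As a rough illustration of what an authenticator app does behind the scenes, here is a minimal Python sketch of a time-based one-time password (TOTP, RFC 6238); the base32 secret is a throwaway demo value, not a real account key:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)   # 8-byte big-endian time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # demo secret; prints a fresh 6-digit code every 30 seconds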
Any device connected to the internet is at risk of being infected with viruses, malware, and spyware. Protect your network and endpoints by deploying firewall, VPN, and antivirus technologies.
6) Secure your connection
This tip is an easy one to ignore. You may think connecting your device to an unsecured connection is harmless, but it’s not worth the potential consequences. When connected to a public network, the network is shared with everyone else that is also connected. So, any information you send or receive can easily be compromised. Try to only connect to private networks, especially when handling sensitive information. For even more security, use a virtual private network (VPN). This will encrypt your connection and allow you to protect your information, even from your internet service provider.
7) Avoid the “it won’t happen to me” mentality
No one is immune. If you think “it won’t happen to me” or “I don’t go to any unsafe websites”, you couldn’t be any more wrong.
Most small to medium-sized businesses (SMBs) are unaware that 64% of today’s cyber-attacks happen to organizations just like theirs. They’ve unknowingly fallen victim to the constantly evolving cyberthreats, especially with the current changing workforce environment.
Think about this: there’s a lot of information to be found in customer data, such as social security numbers, payment information, credit card information, driver’s license numbers and pictures, and other data that can be easily offered for sale on the dark web. To a cybercriminal, this information is like gold, so SMBs aren’t only their targets – they are their favorite targets.
On average, a single cyberattack can cost a small business more than $200,000, according to Hiscox Cyber Readiness Report.
Unless your business is completely offline, there’s no such thing as being “secure enough.” Even with the huge amounts of money large corporations put into security, they’re still affected by cyberattacks. Try to invest in upgrading your security – the cost for this is much less than the enormous costs and consequences of a security breach.
8) Back up data
A security breach can cause you to lose important data. In order to enable you to restore data that is lost, make sure it is backed up frequently on the cloud or a local storage device.
Some hackers don’t necessarily want to steal your data but rather are aiming to encrypt or erase it. With these ransomware attacks, which are on the rise, cybercriminals use malware that keeps companies from accessing their system unless they pay a hefty ransom. According to a study by IBM, ransomware attacks have increased by 6,000% since 2016. And they are increasing even more with the pandemic, as hospitals and health care systems are being targeted.
So, back up your data often. If you fall victim to a cyberattack, it's the only sure way to repair your system: erase it and re-install from the latest backup.
9) Security awareness training for employees
Employees are the most vulnerable component when it comes to cyberattacks, and cybercriminals like to take advantage of human error and vulnerabilities. With just one click of a mouse, an employee can cause your entire security system to crumble. On the other hand, your employees can also be your first line of defense from cyberattacks by implementing security training on a regular basis. The key to making cybersecurity work for your organization is to ensure all employees are well trained, understand the importance of cybersecurity, are following security policies and procedures, and are consistently exercising best security practices.
10) Conduct regular risk assessments
This might sound like something only large corporations need to worry about. However, small and medium-sized businesses also need to incorporate regular risk assessments into their cybersecurity protocols. Most business owners are too busy running their business to really think about all that goes into cybersecurity. They often believe that if they are already paying for internet services, cloud storage, business software, anti-virus protection, etc. that the necessary security measures are already built in. But even with regular software updates, password reset reminders, and malware protection, businesses still need to be aggressive in protecting their physical and digital assets.
It’s best to have a reliable, outside security analysis team conduct an audit of your security protocols to ensure the necessary cybersecurity measures are in place. They can help you understand the most critical threats (such as system failures, natural disasters, and potential cybercriminal actions), uncover any vulnerabilities, determine the impact that these things could have on your business, and identify what measures can be taken to make your company as secure as possible. Regular assessments will help reveal whether your current security measures meet the level of security your business requires.
In a world of ever-increasing and evolving cyberattacks, making sure your business is truly secure and protected also becomes more and more important. With the right technology, cyber threats can be thwarted before they ever get anywhere near your system, devices, and data.
Concerned about your company’s cybersecurity? Not sure where to start? The security experts at Merit Technologies can give you quality advice and ensure your business is protected in every possible way. Contact us today! | <urn:uuid:381de147-47e2-49ef-b583-23144b89f4d8> | CC-MAIN-2022-40 | https://merittechnologies.com/insights/10-cybersecurity-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00588.warc.gz | en | 0.947044 | 2,017 | 2.71875 | 3 |
2016 appears to have one more gift for us before calling it a year, and this time it involves both cybersecurity and the medical Internet of Things (IoT) devices — in a good way. The Food and Drug Administration (FDA), the federal agency charged with regulating food, drugs and medical devices, issued guidelines on how manufacturers can maintain security on medical IoT devices throughout a device’s lifetime. This means the FDA is arming manufacturers with the knowledge they need to deliver long-lasting and secure devices.
The guidelines are nonbinding and unenforceable, but they’re a welcome sign that federal agencies are interested in providing security standards for connected devices. This could, in theory, lead to enforceable regulation and provide a monetary incentive for device manufacturers to install and maintain better security on their devices. However, these guidelines have two immediate effects: they clarify what the FDA expects manufacturers to do when it comes to device security, and signal a shift towards consumer – or in this case, patient-centric – security for devices.
So, what does the FDA expect from medical IoT device manufacturers? First, it expects security maintenance for devices regardless of how long a device may have been out of the manufacturer’s hands. For example, if a device has been installed for more than five years, the FDA still expects the manufacturer to be able to monitor, update and maintain security software on that device. This suggests manufacturers will need to begin designing devices with security in mind at the beginning of the development cycle.
Second, the FDA expects manufacturers to detect, monitor and assess the risk of vulnerabilities in devices. As a result, manufacturers will need to employ a methodology for detecting and assessing risks. They'll also have to work more closely with cybersecurity professionals and researchers to head off or fix any major vulnerabilities before they can affect patients.
Finally, the FDA expects manufacturers to have contingency plans in place in case a bug in a security or software update harms a patient. In this unlikely scenario, the manufacturer is required to alert the FDA of the malfunction.
Overall, these are welcome guidelines. IoT security poses a serious concern for consumers and businesses alike. It’s even more significant when people’s lives are quite literally on the line. Cyberattacks on hospitals aren’t unheard of, and cybersecurity researchers have shown how it’s possible to remotely disrupt life-saving devices like insulin pumps, pacemakers and more. There is, unfortunately, a lot more ground for manufacturers and the FDA to cover when it comes to medical IoT cybersecurity.
Still, the FDA has a history of being proactive in issuing cybersecurity guidelines. In late 2014, according to The Verge, the agency published guidelines on how to build security into medical devices and provided manufacturers with security industry standards. We should expect more in the near future.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:3c8ea294-4a25-43c1-8ab9-2e3377867ed7> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/internet-security/fda-medical-iot-guidelines/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00588.warc.gz | en | 0.936993 | 600 | 2.640625 | 3 |
If you follow these simple steps, you will keep your accounts safe.
Some people advise that you update your passwords every 30 days. The intention is to make accounts more secure, but the actual result? Account security is worse than ever. People now have so many passwords to remember that they jot them down on scraps of paper, or simply make them as memorable as they can.
Which is why the National Institute of Standards and Technology recently updated its Digital Identity Guidelines and it no longer suggests frequent password updates.
So, the question becomes…
SHOULD YOU UPDATE YOUR PASSWORD AT ALL?
The advice is to not fret about changing your password too often, but you should still change your password every now and then. However, rather than focusing on a set frequency, it could be better to change your credentials when there’s a good reason to – which could be any of the following:
- You hear of a reported hack or security issue on a service you use
- You receive a notification of unauthorized access on your account
- You find evidence of a virus (or other malware) on your device
- You shared a password with someone, but they no longer need access to the account
- You used a public computer (in a library, at school) to look at sensitive information
- You haven't updated a password for more than 12 months, and you don't use multi-factor authentication
For any of the above, updating your password is a reasonable protective measure. It stops any unwanted eyes from prying on your account in case your old password became compromised.
When it comes to the question of password management – just be smart with your updates to avoid the frustration of forgetting your login details and having to start the process over. We suggest the following 7 steps to keep your password management simple.
7 STEPS TO SIMPLE PASSWORD MANAGEMENT
Never forget another password. Use the following 7 steps to keep your accounts secure and your passwords under lock-and-key.
- Use a password manager
If you do nothing else, do this. When you have lots of accounts, it’s nearly impossible to keep track of every password – or when you last updated one. A password manager collects all your passwords in one secure place, storing the information in an organized, encrypted way for ultimate peace of mind.
- Run a password audit
Once you’ve collected your passwords together, it’s time to check how secure they really are. Tools such as the LastPass Security Challenge help you review how many passwords you store as well as suggesting which ones need an update.
- Update weak, repeated or compromised passwords
Some passwords are notoriously weak (they may use letters only, or be too short). The aforementioned LastPass tool will flag such cases and recommend an update. Similarly, passwords used in more than one account are a security risk, and hacked accounts give you no choice but to update your password.
- Use more complex passwords on sensitive accounts
Certain accounts hold confidential information (think bank and investment accounts, email, or other personal records). In these cases, it pays to use a more complex password, which you can keep track of by storing it in your password manager.
Interestingly, login details for Amazon, Netflix, and other streaming accounts are often sold on the black market, so you may want to choose something more complicated there as well.
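If inventing strong passwords is a chore, you can let a computer do it. Here is a minimal Python sketch using the standard secrets module; the length and character set are just reasonable defaults, not a mandate:

import secrets, string

alphabet = string.ascii_letters + string.digits   # add punctuation if the site allows it
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)   # store the result in your password manager, not on a sticky note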
- Consider password auto-updates
Some password managers offer an auto-update feature. This takes care of password management on 100 of the most popular websites, so you don’t even have to think about your security – let alone coming up with another memorable password!
- Always use multi-factor authentication (MFA)
This advice bears repeating: MFA is one of the most surefire ways to prevent a hack, or at least slow the route to a possible compromise. It stops someone from gaining access, even if they have your password.
- Change your password once per year
The answer you’ve been waiting for: provided you follow the other recommendations, you can feel safe updating your password once every 365 days. Set a date in your calendar (and be sure to stick to it), only updating other passwords when you know there’s been a compromise.
While it's enough to update your password once a year, it's best to start with strong passwords that you store in a password manager, then work from there. Once you're set up, rest easy knowing you've done all you can to keep your accounts secure – and feel free to give us a call if you need help.
Reports, articles, marketing materials — all are document types most of us handle at some point. We write and edit them on computers, e-mail them to colleagues or friends, share them in the cloud, hand them to clients, and so much more.
If a file you intend to show to others contains information they shouldn’t see, however, you could run into problems. Let’s figure out how to prevent that.
Secrets such as passwords in the background often show up in images, and by no means do all editing tools get rid of them properly. For example, even if you thoroughly blur over sensitive information with a semitransparent brush, simply tweaking the brightness and contrast is sometimes enough to reveal the secret. To find out how an image can inadvertently spill confidential information — and how to hide it — read this post.
In a nutshell, to really hide passwords, bar codes, names, and other secret data in images you work with in a graphics editor, you need to remember two things. First, perform any blurring with 100% opaque tools. Second, publish the image in a “flat” format such as JPG or PNG to prevent others from stripping it into separate layers.
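To make that concrete, here is a minimal Python sketch using the Pillow imaging library (assumed installed; the file name and rectangle coordinates are placeholders). It draws a fully opaque box and saves to a flat format, so no hidden layer survives:

from PIL import Image, ImageDraw

img = Image.open("screenshot.png").convert("RGB")    # convert() flattens any alpha channel
draw = ImageDraw.Draw(img)
draw.rectangle((120, 80, 420, 130), fill=(0, 0, 0))  # 100% opaque black box over the secret
img.save("screenshot_redacted.png")                  # PNG stores flat pixels, not layers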
But what if you see secret information in an image that’s embedded into a text document?
Let’s say you’re about to send a brochure to a client when you realize one of the images contains a colleague’s personal data. You draw a black rectangle over it, using the paid version of Adobe Acrobat on your office computer. All set, right?
Alas, if you send that document, the client will still be able to glean too much information about your colleague. Acrobat is not made for image editing, and it has no function to combine a picture with whatever you draw on top of it, so anyone who opens the file can delete or move rectangles and other surface graphics.
Exporting from Word to PDF
In some cases, it’s handy to adjust an image in the format in which you created the document (for example, DOCX), and then export it to PDF format. For example, if you crop an image, the trimmed part will not transfer to the PDF. Many people use that simple method for light edits on an image in a document.
However, it's important to remember that not every image edit works to hide information in that way. For example, the black rectangle trick will still fail.
If, after exporting the MS Word file to PDF, you open the resulting file in Adobe Reader and then copy and paste the image back into Word, you will see the original image with no obscuring rectangle.
When you export from Word to PDF, the original image and the object drawn on top are not combined. They’re saved separately. Any concealed bits remain in the file as well.
Ultimately, Microsoft Word is not designed for image editing any more than Acrobat Reader is. If you see a picture that needs altering in a text document, do the editing in a proper graphics editor and then reinsert it in the document.
Microsoft Office Document Inspector
Images are not the only items in a document that can retain private information. Others include headers and footers; tracked revisions, comments, and hidden text; linked files such as Excel spreadsheets that form the basis of charts in a report; sometimes even the name of a document author meant to be anonymous. A single file can be full of such trifles, and it’s easy to forget any or all of them.
To help detect such potential information breaches in good time, Microsoft Office provides the Document Inspector tool. It scans everything mentioned above, including metadata (such as author’s name), headers and footers, hidden text, embedded objects, and so on.
To inspect a file using Document Inspector in Office 365:
- Open the File tab;
- Select Info;
- Click Check for Issues;
- Select Inspect Document.
The names of the settings may differ depending on the version of Word.
If Document Inspector finds sensitive data, it suggests removing it or recommends a safer alternative. For example, if you added a chart from Excel as an interactive object, the tool recommends inserting it as a picture instead — that way the recipient will see it but not be able to examine the original table.
As for secrets in images, however, Document Inspector is of no use, because it doesn’t even look at them. You will have to recheck those manually, guided by the tips above.
Google Docs remembers everything
Sometimes a document needs editing by a team of colleagues, in which case, PDF is not usually the best format (because of its relative dearth of collaboration tools). Using Word documents locally and sending them around by e-mail is also not a good option; version control is virtually impossible, and the process is too time-consuming, not least because people have to take turns.
Help is at hand in the form of cloud solutions, which allow joint editing of the same copy of a document. However, considering privacy, it’s important to remember that cloud office suites record every action, and the full changelog is accessible to anyone who edits the file.
If you accidentally inserted an object or text with sensitive information into a cloud document, even if you spot the mistake and remove the information immediately, it will remain in the change history for your fellow contributors to see.
Even if you deleted all confidential information from the cloud file before making it publicly available, anyone with access to the file can view the change history and roll it back.
The problem has a simple solution. If you plan to invite someone to edit an online document with sensitive data in images or other elements you want to hide, create a new file and copy into it only what you want your colleague to see.
Another tip: To avoid pasting anything into a shared document by accident, check what’s in the clipboard first by pasting it into a local file to make sure it’s exactly what you want to share.
How not to leak info in a document
To sum up, here’s how to keep private information in shared documents private — from colleagues and coeditors, not to mention the general public:
- Carefully check document contents before sharing;
- Use dedicated graphics programs to edit images. Use 100% opaque elements to block information and save images in formats that don’t support layering: JPG or PNG;
- Be particularly careful with cloud documents, which keep a record of each file’s entire change history, potentially enabling other people to restore deleted or changed information;
- Do not give coauthors access to cloud documents that ever contained secret data; instead, create a new file and copy across only nonsensitive information;
- Check Word documents with Document Inspector. Download cloud documents in DOCX format and check them as well;
- Be attentive and don’t rush. | <urn:uuid:29f80132-f144-4257-bdac-b51ea4854802> | CC-MAIN-2022-40 | https://www.kaspersky.com/blog/how-to-leak-info-from-docs/37362/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00788.warc.gz | en | 0.908948 | 1,458 | 2.5625 | 3 |
The growing number of internet crimes targeting senior adults is mind-blowing.
In 2021, more than 92,000 people over the age of 60 reported losses of $1.7 billion, according to IC3, the FBI’s Internet Crime division. That number reflects a 74 percent increase in losses from 2020.
These numbers tell us a few things. They tell us that scamming the elderly is a multi-billion-dollar business for cybercriminals. It also tells us that regardless of how shoddy or obvious online scams may appear to anyone outside the senior community, they are working.
However, information is power. Senior adults can protect their hard-earned retirement funds and government benefits by staying informed, adopting new behaviors, and putting tools in place designed to stop scammers in their tracks. And, when possible, family, friends, and caregivers can help.
The FBI said confidence fraud and romance scams netted over $281 million in losses.
The top four types of scams targeting seniors are romance scams (confidence scams), fake online shopping, false utility representatives, and government agent imposters. Here's how to make a few shifts to your mindset and daily routine and steer clear of digital deception.
5 Safeguards to Protect Your Retirement
- Stop. Don’t share. Often phone or internet scams targeting seniors carry distinctive emotional triggers of elation (you won), fear (you owe), or empathy (please help). For instance, a phony source might urge: “You must send admin fees immediately to access your sweepstake winnings.” Or “You must provide your social security number to stop this agency penalty.” FBI and Better Business Bureau fraud experts advise senior adults to stop and think before taking any action. Be aware of common phishing scams that include legitimate-looking email messages from a bank, federal agency, or service provider requesting you “verify” personal information. The number one rule: Never give out any personal information such as a Social Security number, bank account numbers, Medicare numbers, birthdate, maiden names, work history, or your address.
- Level up your security. Changing times call for new tools and new behaviors online. Consider adopting best practices such as installing McAfee security software, using strong passwords with Two-Factor Authentication (2FA), and knowing how to identify phishing and malware scams are fundamental components of digital literacy. For a deeper dive into cybersecurity best practices, read more.
- Discuss new scams. Scammers rapidly adjust their tactics to current events such as the pandemic, tax season, or an economic crisis to emotionally bait senior adults. If you are a senior adult, check out weekly consumer alerts from IC3 or AARP to stay on top of the types of scams you may encounter. If you are a relative or caregiver of a senior adult, stay informed, discuss these scams with your loved one, and explore other ways to help.
- Research all charities. Senior adults get daily calls, emails, or even Facebook messages trying to bilk them of their money. It’s essential to do your research. Before donating to a charity, you can consult Give.Org or Charity Navigator to verify the request is legitimate.
- Report all scams and scam attempts. If you’ve been a victim of an online scam or even targeted unsuccessfully, report the incident immediately. Any consumer can report online scams at the FBI’s IC3 website. Credit, debit, or bank account fraud should be immediately reported to your bank.
Just as the seasons change in our lives, so too must our behaviors when connecting to people and information via our devices. Cybercriminals target older people because they assume they aren’t as informed about schemes or technically savvy as younger people. Senior adults and their loved ones can work daily to change that narrative. With the right mindset, information, and tools, seniors can connect online with confidence and enjoy their golden years without worrying about digital deception.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:d1ecfa96-f483-4feb-b6a4-abbc3253252c> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/privacy-identity-protection/seniors-how-to-keep-your-retirement-safe-from-online-scams/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00788.warc.gz | en | 0.920818 | 841 | 2.53125 | 3 |
IEEE 802.1X is a method for providing port-based network access control on Layer 2 switched networks. It makes it possible to authenticate a client when it initially connects to a LAN, before it gets an IP address and any additional configuration over the network. IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over IEEE 802, which is known as "EAPOL" (EAP over LAN). When a Cisco switch is configured with 802.1X authentication, all ports of the switch are disabled and only allow CDP, STP and EAPOL traffic. EAPOL is required for 802.1X authentication. When using 802.1X authentication, the client device must also support 802.1X or have 802.1X client software installed.
IEEE 802.1X uses three major components to complete an authentication transaction. The three parties are described below:
- Supplicant – The client device that wishes to attach to the LAN. The term "supplicant" is also used to refer to the software running on the client that provides credentials to the authenticator.
- Authentication Server – The server running software that supports the RADIUS and EAP protocols is called the Authentication Server.
- Authenticator – The device between the two elements above, such as a switch or wireless access point, is called the Authenticator. The authenticator acts like a security guard to a protected network.
Step-by-step process of 802.1X authentication:
- The Authenticator sends an "EAP-Request/Identity" packet to the Supplicant as soon as it detects that the link is active (e.g., the supplicant system has associated with the switch). If the Supplicant does not receive an EAP-Request/Identity message from the Authenticator, the Supplicant initiates authentication by sending an EAPOL-Start frame, which prompts the Authenticator to request the Supplicant's identity.
- The Supplicant sends an "EAP-Response/Identity" packet with its identity to the Authenticator. The Authenticator then sends a "RADIUS Access-Request" to the Authentication (RADIUS) Server.
- On receiving an Access-Request message, the RADIUS server (Authentication Server) responds with a "RADIUS Access-Challenge" message containing an EAP-Message attribute. If the RADIUS server does not support EAP, it sends an Access-Reject message.
- The Supplicant responds to the challenge via the Authenticator, which passes the response onto the Authentication Server.
- If the Supplicant provides correct credentials, the Authentication Server responds with a success message, which is then passed on to the Supplicant. The Authenticator now allows access to the LAN, in line with any attributes that came back from the Authentication Server. When the Supplicant is finished, it can send an explicit "logoff" notification to the Authenticator. 802.1X also defines a re-authentication timer, which can be used to require the Supplicant to re-authenticate periodically.
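For the curious, the start of this exchange can be reproduced with the scapy packet library. The sketch below sends the EAPOL-Start frame described in the first step; the interface name is a placeholder, and sending raw frames requires root privileges:

from scapy.all import Ether, EAPOL, sendp

# 01:80:C2:00:00:03 is the reserved Port Access Entity (PAE) group MAC address
frame = Ether(dst="01:80:c2:00:00:03") / EAPOL(version=1, type=1)  # type 1 = EAPOL-Start
sendp(frame, iface="eth0")  # "eth0" is an assumption; use the port facing the switch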
What is Zero Trust? Zero Trust is a framework and attitude towards IT security that centers on the idea that we in the IT field cannot trust anyone or any device on our network. Many systems administrators are well versed in this model. Every employee is assigned a domain-joined laptop that is locked down via Group Policy. Those employees are tracked via a user identity using their AD profile that, through security groups, either allows or restricts access to various IT assets (servers, printers, wireless, and/or client VPN). Accounting is enabled to track what the users are accessing and is used in the event of a malicious attack to see where the attack originated and who performed the attack.
OK, so I understand Zero Trust, but where does networking come into play? For the longest time networks were seen as black pipes: pipes you couldn't look into, though you could see data come in one end and out the other. You could see the raw data to get IP and MAC information, but that didn't help you understand who that data was for. Because of this limitation we started segmenting the network; we created parallel pipes by creating use-specific VLANs, and by using firewalls and Access Control Lists at each end we could start to limit what users could access by placing them in a specific pipe. This worked well in the age of desktop computers and statically placed machines: we could assign those always-wired machines to a VLAN and we would know each had the right security policy. But now that we live in a world of mobility, where a user could pop up anywhere in our network, those traditional security measures no longer work. Zero Trust networks address this issue.
Enter the Zero Trust Framework
When beginning on a journey it is best to know where you are going. This quote from an eBook published by Duo describes the end goal pretty well.
Zero trust is a security model that shifts the access conversation from traditional perimeter-based security and instead focuses on secure access to applications based on user identity, the trustworthiness of their device and the security policies you set, as opposed to the network from where access originates.
To achieve this end goal there are 5 key areas to focus on:
- The first is user identity. This means that anytime a user logs into the network that their credentials are pushed to the network. In this area things like Network Access Control and Multi-Factor Authentication are taken into consideration. Verifying who is logging onto the network and recording this user is key to the rest of the areas.
- Second is device identity. In the same way that we want to verify the users accessing the network, in an age of mobility we want to make sure we know what machines are connecting to the network. In this area things such as device posturing, and BYOD registration are taken into consideration. Again, this is a key element used in the rest of the areas.
- The third area is network access policy. In this area we combine the information gathered in the first two areas and do something with it. In this area 802.1x authentication on access switches and wireless access points can dynamically assign a user to a VLAN, and/or download an access control list and apply it to your port. This means that no matter where you are in the network the security policy assigned to your user using a particular machine will follow you. Additionally, the security policy can change depending on where you access the network. This means that the same user and machine logging in on the wired, wireless, or via client VPN can have access to different applications.
- The fourth area is network monitoring. By network monitoring I don't mean traditional network monitoring that looks at the uptime or utilization of a device, but rather traffic analysis. In this area things such as NetFlow collection and context-aware security monitoring are used to watch for threats. Now that we know the users and devices, and know what access policies they should follow, we can build a complete automated security solution. Knowing how traffic should flow through the network means we also know when malicious traffic is on the network, as it tends to go against the traditional flows.
- The last area is to extend security visibility into the data center. With more companies adopting the hybrid cloud model, maintaining consistent security policies between the on-prem data center and various cloud services can be a challenge. Workload protection platforms provide insight into possible holes between applications and take measures to reduce east/west security vulnerabilities by creating micro-segmentation. This way, if a server has been compromised, security measures are in place to limit the scope of attack. While this area has some overlap with the fourth area, business requirements between the campus and the data center are different enough for it to warrant its own area.
Once the fifth area has been completed you and your company have ascended to what some have called Zero Trust Nirvana.
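As a toy illustration of the third area, the sketch below shows the kind of decision a policy engine makes. The group, VLAN and ACL names are invented, and in a real deployment the result would be pushed to the switch or access point as RADIUS attributes rather than computed in application code:

def access_policy(user_group, device_compliant, location):
    # Combine user identity, device identity and point of access into one decision.
    if not device_compliant:
        return {"vlan": "quarantine", "acl": "remediation-only"}
    if location == "vpn":
        return {"vlan": "remote", "acl": "web-apps-only"}   # tighter policy off-site
    if user_group == "engineering":
        return {"vlan": "eng", "acl": "eng-apps"}
    return {"vlan": "guest", "acl": "internet-only"}

print(access_policy("engineering", True, "vpn"))   # same user, different access via VPN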
Zero Trust is an Attitude
While it seems easy to buy a bunch of products and install them on your network you won’t be able to sustain Zero Trust Nirvana without a change in attitude. The adage “You’re only as strong as your weakest link” holds true with Zero Trust. A security first attitude from both your IT staff and general users should still be maintained; the least privilege concept should still be adopted and users should still be trained against social engineering procedures. Remember that the first area in zero trust is user identity, each user holds a key to the network and can unknowingly give that key to an attacker.
Zero Trust is also designed to supplement traditional security products, not replace them. Anti-malware, and more traditional content blockers and firewalls should still be used because at any time a user could click on that malicious link, providing that backdoor into the network. With Zero Trust think more security layers, not different security layers.
With an ever changing IT landscape and a move to user mobility, the game of securing the network has always been a game of catch up. With the Zero Trust framework we now have a multi-layered, transparent, automated, security mindset that will finally allow us to quickly adapt to our evolving industry in a secure manner.
As always if you have any questions on the Zero Trust Framework above, please reach out to us at firstname.lastname@example.org and we’ll be happy to help! | <urn:uuid:94bfa2db-319d-421d-8ef0-8fe104d46ada> | CC-MAIN-2022-40 | https://www.lookingpoint.com/blog/zero-trust-network-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00788.warc.gz | en | 0.955746 | 1,293 | 2.96875 | 3 |
This is a tip for the new MySQL users out there. A common and dangerous practice used by many new developers and admins is to use the root account to access databases. While this certainly gives you the access you need to manage your databases and data from your applications it also elevates privileges far beyond what you really want.
Let’s assume that you are creating a database called mycontent for an application. You will also want to create a user that has access to just that database so that it isolates the rights to only the content you want to affect.
Decide on a good name for a user. For this example I will use the classic format of databasename_user so our user will be mycontent_user which is what we will use for our application.
Make sure you choose a complex password as well. It is always best to combine upper and lower case as well as numbers. Be careful with special characters because using some characters like /%* can cause issues with systems when they pass the characters to a command. We are going to use 7U2JkyGg9TwbNnQ as our example password.
Launch MySQL from the command line as a user who has privileges to create databases and manage security.
mysql -u myadminusername -p
Type in your password at the ‘Enter password:’ prompt
Now we create the database:
create database mycontent;
Next we create your database user. I am assuming that your database server is also your web server so we will use the @localhost to define the user.
grant usage on *.* to mycontent_user@localhost identified by '7U2JkyGg9TwbNnQ';

(Note: this combined grant ... identified by syntax works on the MySQL 5.5 server shown below, but it was deprecated in MySQL 5.7 and removed in 8.0, where you create the account with CREATE USER first and then grant privileges.)
Next step is to apply permissions to the database for your new user:
grant all privileges on mycontent.* to mycontent_user@localhost;
And finally we check our user is able to log in and access the database. Exit from your MySQL command line and log in again using the new account:
mysql -u mycontent_user -p'7U2JkyGg9TwbNnQ' mycontent
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.5.15 MySQL Community Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Now you can use this account/password combination in your application and it will have all of the privileges necessary to manage the database, tables, indexes and content with much less chance of compromising your other databases in your system. | <urn:uuid:d6cf6961-abae-421a-9359-fcb13a4372b0> | CC-MAIN-2022-40 | https://discoposse.com/2011/08/06/mysql-create-database-user-and-privileges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00788.warc.gz | en | 0.887102 | 618 | 2.828125 | 3 |
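If the account will be used from application code, you can run the same check from Python with the mysql-connector-python package (assumed installed); a minimal sketch:

import mysql.connector

conn = mysql.connector.connect(
    host="localhost",
    user="mycontent_user",
    password="7U2JkyGg9TwbNnQ",
    database="mycontent",
)
cur = conn.cursor()
cur.execute("SELECT DATABASE()")
print(cur.fetchone())   # should print ('mycontent',)
conn.close()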
With the self-driving car gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.
Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.
Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.
There are many issues troubling the world of self-driving cars including the less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liabilities. In particular, what are the legal responsibilities of the human driver and the car maker that built the self-driving car?
Trust and accountability
In a recent study published in Humanities and Social Science Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use the vehicles appropriately.
One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that require agreement when using new computer or software products. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.
The issue is that this is far from being ideal, or even safe. And the interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.
The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher in young people, with 97 per cent agreeing without reviewing the terms.
“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.
Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.
Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This needs to occur beyond the vehicle purchase — recent evidence shows that even relying on the information provided by the dealership is not going to answer many questions.
All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.
• Francesco Biondi is Assistant Professor in Human Kinetics at the University of Windsor. This article originally appeared at TheConversation. | <urn:uuid:2bc36037-90fd-4af1-aa99-a44cc40f025d> | CC-MAIN-2022-40 | https://news.networktigers.com/industry-news/can-passengers-be-blamed-for-self-driving-car-accidents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00788.warc.gz | en | 0.952437 | 602 | 2.890625 | 3 |
Social engineering isn’t the first thing that comes to mind when most people think about cyberattacks. Yet, social engineering is consistently the most widely used avenue for breaking through an organization’s cyber defenses. Even when hackers use technical means to break through a system, there will often be some social engineering involved to obtain login credentials, extract useful tips and access confidential information.
It could occur through any channel including email, text, live chat, social media, phone call and face-to-face. When deployed by a seasoned cyber criminal, social engineering makes stealing sensitive information a much quicker job than conventional hacking. Given the dangers it poses, recognizing the warning signs is essential to avoiding falling victim. Check out these social engineering tactics and tips to avoid them.
According to the FBI's 2021 IC3 report, phishing was by far the most common type of cyberattack, accounting for just under 40 percent of victims. Phishing occurs through an email, text (smishing) or voice call (vishing) from what seems to be a trusted party requesting information. There'll be a degree of urgency to the request, which is one of the primary distinctions between phishing and pretexting.
For instance, you may receive an email claiming to be from your bank asking you to confirm card details or Internet banking login credentials if you don’t want your account closed.
Spear phishing is a form of phishing that is highly targeted and based on extensive research about the target. For example, an email sent to you purporting to come from your immediate supervisor asking you to share certain confidential information.
Prevention: Do not open emails and/or attachments from senders you do not recognize, nor should you click on any links they contain. Report, then immediately delete any suspicious email or text.
Pretexting typically starts off with a genuine-seeming context to gain your attention and trust. It may be a series of messages from the perpetrator, who poses as a co-worker, law enforcement officer, tax official, banker or major customer. Once trust is established, they'll request sensitive information such as bank and customer details.
This tactic is not limited to online channels and phone calls alone. For instance, a fraudster could find their way into your offices then pose as an IT auditor or helpdesk technician to earn your trust and gain access to your computer.
Prevention: Do not respond to calls or emails from unknown sources. Call or email colleagues, customers, bankers, law enforcement or other parties the scammer poses as, directly using their officially listed phone numbers and email addresses. This helps confirm an information request is indeed from them before you disclose any sensitive data.
Baiting rides on the target’s greed or curiosity by making false promises or presenting fake opportunities. Once the target has latched onto the bait, the attacker would access their personal data and/or infect their computer with malware.
A classic baiting technique is to use physical media such as a flash drive to disperse malware. Bad actors will place the drive in a conspicuous location such as an elevator or bathroom. They’ll make it appear interesting or authentic — like labeling it ‘payroll list’. Many people would want to see what the rest of their colleagues earn. When they plug in the flash drive, the malware would be installed.
Baiting also occurs online through ads with lucrative messaging that eventually lead the user to a malware-infected site or deceives them into downloading a virus-laced application.
Prevention: If you can avoid plugging any flash drive to your computer, the better. If you must use one, make sure you own it or have received it directly from a credible source such as your IT department. Do not click any links on strange popup windows or emails. Keep your antivirus up to date.
A popup window or a new browser tab claims to have found dangerous malware on your computer. This is the modus operandi of scareware. Also known as fraudware or deception software, the idea is to cause panic by alleging the presence of non-existent threats.
The popup will propose an instant solution — install some magical software that makes the problem disappear. Except in this case, you’ll have installed actual malware instead. Scareware is also propagated through email containing fake warnings or offers to procure premium services.
Prevention: Do not click any links on strange popup windows or emails. Keep your antivirus up to date.
Social engineering is founded on human manipulation. Bad actors use your fear, greed or passion to get you to disclose information or provide access you otherwise wouldn’t if approached directly. Having your wits about you is crucial to defeating social engineering attacks.
If an email, text, call or enquiry sounds too good or too alarming to be true, it probably is. A healthy dose of skepticism will save you. Suspect you could be the target of a social engineering attack?
C Solutions can help your Orlando area business reduce risk and improve your cybersecurity resilience.
Schedule a free consultation today! Call 407-536-8381 or reach us online. | <urn:uuid:22bf00fa-0ea1-467e-9c52-1bdbc0ce727a> | CC-MAIN-2022-40 | https://csolutionsit.com/social-engineering-tactics-avoid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00188.warc.gz | en | 0.929936 | 1,126 | 2.921875 | 3 |
MOSQUITO is a new technique devised by a team of researchers at Israel's Ben Gurion University, led by the expert Mordechai Guri, to exfiltrate data from an air-gapped network.
The technique leverages connected speakers (passive speakers, headphones, or earphones) to acquire sound from the surrounding environment by exploiting a specific audio chip feature.
Once again, amazingly, the team demonstrated that separating computer networks from the Internet is not enough to protect them from attackers. In the past, the same group of researchers demonstrated that it is possible to listen to private conversations by reversing headphones connected to a previously infected computer.
The MOSQUITO technique establishes a covert ultrasonic transmission between two air-gapped computers using speaker-to-speaker communication.
“In this paper, we show how two (or more) airgapped computers in the same room, equipped with passive speakers, headphones, or earphones can covertly exchange data via ultrasonic waves.” reads the research paper.
“Microphones are not required. Our method is based on the capability of a malware to exploit a specific audio chip feature in order to reverse the connected speakers from output devices into input devices.”
The experts rely on the way speakers/headphones/earphones respond to the near-ultrasonic range (18kHz to 24kHz) to exploit the hardware that can be reversed to perform as microphones.
The Israeli team tested the MOSQUITO technique with different equipment at various distances and transmission speeds.
The technique is stealthy: the two computers exchange data via near-ultrasonic sound through speakers and headphones, making the covert channel very difficult to discover.
The experts shared two proof-of-concept videos that show two air-gapped computers in the same environment, each infected with malicious code developed by the experts.
The experts successfully tested the MOSQUITO technique in speaker-to-speaker communication, speaker-to-headphones communication, and headphones-to-headphones communication.
“Our results show that the speaker-to speaker communication can be used to covertly transmit data between two air-gapped computers positioned a maximum of nine meters away from one another.” continues the paper. “Moreover, we show that two (microphone-less) headphones can exchange data from a distance of three meters apart. This enables ’headphones-to-headphones’ covert communication,”
Researchers were able to exchange data between air-gapped computers from a distance of eight meters away with an effective bit rate of 10 to 166 bits per second.
Further info on the technique is included in the research paper.
Past research conducted by Ben-Gurion researchers related to the hack of air-gapped networks are:
- aIR-Jumper attack exfiltrates sensitive data from air-gapped computers using security cameras with Infrared capabilities.
- USBee exfiltrates sensitive data from air-gapped computers using radio frequency transmissions from USB connectors.
- DiskFiltration steals sensitive data using acoustic signals emitted from the hard disk drive (HDD) of air-gapped computers.
- BitWhisper that is based on the heat emissions and built-in thermal sensors.
- AirHopper exfiltrates sensitive data from air-gapped computers via radio signals emitted by the video card and received by a nearby mobile phone.
- Fansmitter exfiltrates sensitive data from air-gapped computers exploiting noise emitted by a computer fan to transmit data.
- GSMem attack relies on cellular frequencies.
Pierluigi Paganini, Editor-in-Chief
Cyber Defense Magazine | <urn:uuid:7e416048-1781-4dc7-acbd-c0551a64f80e> | CC-MAIN-2022-40 | https://www.cyberdefensemagazine.com/mosquito-attack-allows-to-exfiltrates-data-from-air-gapped-computers-via-leverage-connected-speakers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00188.warc.gz | en | 0.902757 | 754 | 3.140625 | 3 |
Published Tuesday, Apr 26, 2022, by Daryn Kara Ali
Women have never failed to leave their mark as they created their paths in various professional industries, and the tech industry is no stranger to the success of its historic women. Throughout the years, we have read the works of some of the most intellectual women, from Jane Austen to Anne Frank, drawn inspiration from Queen Elizabeth I, who took a political stance for what she saw fit, and reached all the way to Ada Lovelace, the first computer programmer, who wrote the first algorithm for the Analytical Engine, a machine designed to carry out extensive mathematical operations.
The tech industry is extending its embrace towards the world’s women and welcoming the foresightful vision embodied within their astute skill set. While the fourth industrial revolution broke the boundaries of technology as we knew it, the blurred lines between the physicality and the biological and digital aspects of the world were drastically altered, and women have pioneered through the era of digital transformation.
The endless possibilities of the fourth industrial revolution and its technologies’ existence solemnly rely on these women’s roles as they broke through social and political barriers throughout their journey, and long after.
The world’s first computer programmer, Ada Lovelace, was the original creator of the first algorithm. The Victorian-era mathematician is mainly known for her work on the Analytic Engine with her husband, Charles Babbage, and influentially paved the way for the role of women in the tech industry today.
The daughter of the famous English poet Lord Byron, she is deemed a prominent figure in the Computer Science community and has been chosen as a symbolic historical figure representing women's achievements in Science, Technology, Engineering, and Mathematics.
Ada's work alongside Babbage's was highlighted from the mid-19th century onward, and her recognition as a programmer grew through the 20th century, aided by a book released in 1953 at the rise of the digital era.
Living amongst the noble entourage of Victorian society, in the intellectual period that lit the first spark of the industrial revolution, Lovelace broke all boundaries standing in the way of women entering a male-dominated intellectual environment, placing her among the leading historic women of the tech industry.
Ada realized the machine could be developed to treat numbers as data and not just as mere quantitative symbols. While she had a notable influence on the intellectual machines now known as artificial intelligence and machine learning, Lovelace also believed that devices running on these algorithmic behaviors would be able to compose music, create graphics, and support scientific research and development. That reality is now accessible within the limits of our smartphones; more than a century later, Lovelace was proven right.
Long before there was instant communication via a small device in the palm of our hands driven by wireless technology, telegraphs were the 19th-century equivalent of today's communication means. And one female figure who stood out was Sarah Bagley, the first female telegraph operator.
Only two years following the invention of the electric telegraph, one company saw an opportunity to grab one industry and lead its success. The New York and Boston Magnetic Telegraph Company initiated operations in Lowell, Mass, and there, the first telegraph operator was hired.
Bagley’s pioneering the way as the first female telegraph operator set the stone for other females to enter the field of telecommunication as she outvoiced her ideology in the fight for equal pay after knowing that women make three quarters the salary of a man.
In her movement to fight for women’s rights while working as a telegraph operator, Bagley accentuated the role females can play in the industry to maintain its prosperity and growth. During the 19th Century, women were the key pillar supporting all telegraph operations within the industry.
Hedy Lamarr is commonly known as the famous Austrian-American actress starring in Come Live with Me, Comrade X, Ecstasy, and many more. But Lamarr is also the inventor who established the groundwork for some of the most vital aspects of digital transformation, WiFi, GPS, and Bluetooth communication systems.
In the 20th century, this Austrian inventor was mostly known for her roles on the big screen, which dimmed the light on her innovative contributions to society. Her eager enthusiasm for the technological world was largely driven by her father's influence; he would often discuss with her the various functionalities of different machines.
Alongside the composer George Antheil, she created a "Secret Communications System" that hopped radio frequencies between transmitter and receiver at irregular intervals.
Lamarr’s invention has a historical impact on the Second World War (WII), inevitably leading to the demise of the Nazi party, ending with the descent of Germany.
Her impact on the tech industry is globally renowned and admired, esteemed as the backbone of what is now known as wireless technology, such as WiFi networks, bestowing her the title “mother of WiFi.”
The world increasingly values and grasps women's fortes as they pioneer their way through achievements in science, engineering, and mathematics. The impact of the historic women of the tech industry on the fourth industrial revolution showcases the significant role females play in shaping its development today, as they had already set the groundwork for our digital era more than a century ago.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Community section to stay informed and up-to-date with our daily articles.
The goal of authentication is to confirm that the person attempting to access a resource is actually who they say they are. As you can imagine, there are many different ways to handle authentication, and some of the most popular methods include multi-factor authentication (MFA) and Single Sign On (SSO).
However, these methods just skim the surface of the underlying technical complications. In order to implement an authentication method, a business must first establish an authentication protocol. A business might also choose to blend multiple authentication methods together. The question is, where do you start?
This guide provides an overview of the most common authentication methods, along with the most popular protocols, that businesses can use to verify a person’s identity and keep systems secure.
- Authentication vs. Authorization
- Authentication Methods vs. Authentication Protocols
- Authentication Methods
- MFA vs. 2FA
- SSO vs. MFA/2FA
- Can SSO and MFA/2FA Be Used Together?
- Authentication Protocols
- SAML vs. The Rest
- Why Is SAML Best for Business?
- Supporting Cybersecurity and Data Governance
Authentication vs. Authorization
Before we dive into the intricacies of authentication, it’s worth understanding the difference between “authentication” and “authorization” as these two concepts go hand-in-hand but serve two different purposes.
Authentication seeks to validate the identity of a user. Once that is done, authorization verifies their permission level. For example, authentication is necessary for a user to enter the company database, but authorization is what decides what information they are allowed to view and change.
Both authentication and authorization are fundamental to cybersecurity, but as authentication is the foundation, let’s start there.
Authentication Methods vs. Authentication Protocols
The terms “methods” and “protocols” have two different meanings for authentication. An authentication protocol is an underlying framework that lays out the rules for verification. For instance, the Password Authentication Protocol (PAP) requires someone to enter a username and password to verify who they are.
Meanwhile, an authentication method sits on top of the protocol. An enterprise might have multiple authentication methods that help verify a user’s information against the protocol. For instance, a person might enter their username and password, but they may also be asked to verify their identity using a code sent to their phone, which is a method known as two-factor authentication (2FA).
With one or more methods of authentication in place, authentication protocols are able to verify that a user is who they say they are with extreme confidence. The following is a closer look at the various types of authentication methods and protocols, along with side-by-side comparisons of the most popular options.
A business might use one or more authentication methods to improve the security of its systems. Generally, if more sensitive information or higher-privileged users are involved, more authentication methods should be enabled and enforced to maintain proper security. Of course, increased security often comes as a tradeoff with convenience, which is why it's important to choose the right methods.
Multi-factor authentication requires at least two methods of authentication. This is becoming more common because passwords alone are no longer considered secure. With advanced tools, cybercriminals can crack a password with relative ease, especially if users don’t follow good password hygiene by re-using passwords across sites or failing to make them secure enough.
With multi-factor authentication, a user is asked to enter a username and password. If it matches, a code is sent to an authentication app, phone number, email, or another resource that only the user is supposed to have access to. The user will then enter that code on the login page. For multi-factor authentication, they may repeat this method more than once.
For instance, after entering their username and password, the user may be asked to verify a code that was sent to their phone number on file. They then may be asked to verify a code that was sent to their email on file. As you can imagine, this process is very secure but can become inconvenient very quickly.
Two-factor authentication (2FA) is a subset of MFA. The only difference is that 2FA requires exactly two methods of verification, which would mean entering the user’s password in addition to one of the methods mentioned above (e.g., verifying a code sent to their phone, email, app, etc.).
Two-factor authentication is becoming more common with consumer applications because it offers much more security than a password alone without causing a lot of inconvenience for the user.
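To make the mechanics concrete, here is a minimal sketch of how a server might verify the six-digit codes produced by an authenticator app, following the TOTP algorithm from RFC 6238; a production system should rely on a vetted library and also check adjacent time windows for clock drift, which this toy version omits.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    """Constant-time comparison of the submitted code with the expected one."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)
```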
Single-sign-on (SSO) is a method of authentication that strives to offer the most convenience to the user by eliminating the need to sign in at all. Instead of entering a username and password for every website and application they use, the website will connect with an identity provider that helps authenticate users without logins.
The user will need to create an account with the identity provider and log in every so often, but as long as they are logged in to that identity provider, all participating websites and apps will authenticate them automatically.
Within an enterprise setting, using an SSO provider means that employees need to only log in once each day. From there, authentication takes place behind the scenes as the identity provider exchanges verification keys with all of the websites and apps that you have set up to use SSO.
MFA vs. 2FA
The biggest difference between two-factor authentication and multi-factor authentication is ease. Multi-factor authentication is inherently more secure because the more times you ask a user to verify themselves, the less room you leave for unauthorized access.
However, it is also far slower and more complicated when you ask a user to enter a password, verify their number, verify their email, and so on with each login. Making your app too difficult to access might increase security, but it will also damage the user experience. Therefore, 2FA strives to find a middle ground where security is vastly improved over just a password, but without adding much inconvenience.
SSO vs. MFA/2FA
If you like the security of MFA, turning to SSO might seem risky. However, when partnering with the right identity provider, SSO is incredibly secure. The biggest benefit of SSO is that it significantly speeds up the user flow, contributing to a positive user experience.
Plus, since users only need to remember one password (which is the one that gets them into the identity provider), they are more likely to use good password hygiene and create one that’s truly secure.
When it comes to security, whether MFA is more or less secure than SSO depends on how it’s implemented. One thing is for sure: SSO is far more user-friendly.
Can SSO and MFA/2FA Be Used Together?
SSO and MFA can absolutely be used together, and it’s actually considered the most secure solution that strikes a balance between ease of use and protection. To use both, your users still only need to remember one password, which means they’re more likely to use a secure one. But that’s still only one password standing in the way of a cybercriminal.
To protect your business’s most sensitive data or apps, you can implement MFA for an additional layer of security. The user flow goes like this: The user logs in to the SSO provider, and this can give them access to any app or site you wish. But, when there are databases or assets you wish to secure further, you can use MFA to verify them a second time.
Using SSO means they do not need to use a password for any resource, but MFA has them verify themselves with a code on their phone, app, or email before they can access something that requires additional protection. It’s highly recommended that you consider using these methods in unison to protect your company.
Authentication protocols lay out the underlying rules that verify a user is who they say they are. The least secure protocol of all is known as the Password Authentication Protocol (PAP) and simply asks a user to enter a password that matches the one saved in the database. PAP does not utilize any encryption, which is why it is considered insecure and outdated.
A number of other authentication protocols have been introduced over the years, all with the goal of improving security. Here’s a look at the most common.
- SAML: SAML stands for Security Assertion Markup Language. It's designed to support SSO by allowing a user to log in to an identity provider, which verifies their identity each time they request access to an app or site through a participating service provider. SAML was designed to simplify user access to multiple apps without requiring multiple logins.
- OAuth: OAuth stands for "Open Authorization." It allows apps to grant "secure designated access" and is one of the most popular authorization protocols on the web. For instance, Facebook uses it to allow users to grant websites permission to view and post on their timeline without sharing the user's Facebook password. (A sketch of its token exchange appears after this list.)
- OIDC: Built upon OAuth 2.0, which is an authorization service, this protocol adds a simple identity layer on top of the authorization service, allowing clients/service providers to verify the end user’s identity.
- LDAP: The Lightweight Directory Access Protocol (LDAP) was designed for fast directory lookups and authentication. User information is often stored in a directory service such as Active Directory (AD), and LDAP is the protocol used to query it and extract that information in a usable format.
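As promised above, here is a minimal sketch of the final step of an OAuth 2.0 authorization-code flow, in which the client exchanges a one-time code for an access token. The endpoint URL, client ID, and secret are placeholders rather than any real provider's values.

```python
import requests

def exchange_code_for_token(code: str) -> dict:
    """Swap an OAuth 2.0 authorization code for an access token."""
    response = requests.post(
        "https://auth.example.com/oauth/token",    # placeholder token endpoint
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",           # placeholder credentials
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # typically includes access_token and expires_in
```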
With all these authentication protocols on the market, it can be difficult for businesses to determine which one is best for their use case. However, when it comes to security and reliability, SAML is considered the industry standard. To follow is a comparison of SAML versus other authentication protocols.
SAML vs. The Rest
SAML is not the only protocol that can be used to implement Single Sign On (SSO), nor is SSO the only authentication method that you can use with the SAML protocol, but the two have become synonymous with one another. But, with so many options, why is SAML considered the best?
One of the most important advantages of using SAML for SSO purposes is that it is open standard. This means that providers and vendors can easily interact with each other assuming they adhere to the SAML standard. Additionally, because SAML uses XML, it is extremely flexible. You can transfer all sorts of data so long as XML can render it.
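To illustrate how approachable the XML is, the following sketch builds a skeletal SAML 2.0 AuthnRequest (the message a service provider sends to an identity provider) using only the Python standard library; the URLs are placeholders, and a real deployment would add signing and the full set of required attributes.

```python
import datetime
import uuid
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(sp_entity_id: str, acs_url: str) -> str:
    """Build a skeletal (unsigned) SAML 2.0 AuthnRequest document."""
    request = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": f"_{uuid.uuid4().hex}",              # unique request identifier
        "Version": "2.0",
        "IssueInstant": datetime.datetime.utcnow().isoformat() + "Z",
        "AssertionConsumerServiceURL": acs_url,    # where the IdP posts back
    })
    issuer = ET.SubElement(request, f"{{{SAML}}}Issuer")
    issuer.text = sp_entity_id                     # identifies the service provider
    return ET.tostring(request, encoding="unicode")

print(build_authn_request("https://sp.example.com",
                          "https://sp.example.com/acs"))
```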
With these details in mind, some businesses still debate between SAML and OAuth. OAuth is somewhat newer than SAML, but it was developed by Google and Twitter, partly to compensate for some SAML shortcomings. For that reason, OAuth uses JSON instead of XML.
The other primary difference between SAML and OAuth is that OAuth was created as an authorization network, not for authentication purposes. The OpenID Connect layer sits on top of OAuth to handle authentication, and it was released much later. Another differentiator is the use case: Google and Twitter designed OAuth for use across the internet. SAML was designed with the open internet in mind, but it’s ideal for closed enterprise networks.
If you go through the list and compare SAML individually next to every other protocol mentioned here, it quickly becomes clear that SAML may not be the best for every application, but it is the clear winner when it comes to businesses looking to establish better authentication for their users. It’s also widely recognized as the fastest solution for business use.
Why Is SAML Best for Business?
For businesses who are interested in adding SSO capability, many are inclined to go down the route of creating a proprietary solution. Often, this process begins with good intentions and typically under the false mindset that a proprietary solution will be the most secure, the most flexible, or even the most cost-effective.
In reality, proprietary SSO solutions rarely work out as intended. At the enterprise level, implementing a proprietary SSO solution could mean that connecting with each app or software requires a new integration or implementation, which represents major development resources and a lot of unnecessary complexity.
If you are thinking about creating a proprietary authentication platform, think twice. By using a trusted protocol like SAML, you can invest your development resources into one implementation and then easily connect with very many partners without the need for a lot of complex coding and testing.
SAML is ideal for an enterprise that needs a reliable and scalable authentication protocol. It’s flexible not only because it’s built on XML but because its developers provide future users with a lot of leeway as to how it’s implemented. For instance, if the need arises, you can even involve multiple identity providers in the authentication process.
Obviously, SAML is also preferable because it is ideal for implementing SSO. When it comes to authentication methods, the combination of SSO and MFA is considered the new standard, so choosing SAML will set your business up with a solid foundation as your cybersecurity practices evolve to meet the needs of increased security and user experience.
All in all, SAML has proven that it is a dependable protocol for authentication, especially at the enterprise level and especially for SSO. So, where do you go from here? In most cases, businesses will want to seek out an SSO provider that uses SAML; OneLogin is one example.
Supporting Cybersecurity and Data Governance
Finding the right authentication protocol and methods is fundamental to shaping your organization’s cybersecurity. However, it’s far from the only piece of the puzzle. As cyber-attacks grow more frequent and more sophisticated each day, it’s critical that businesses find the time to step back and look at their cybersecurity and data governance procedures as a whole.
Data governance has rapidly secured a position in the cybersecurity conversation. Not only does it have to do with compliance and data privacy, but it also means knowing what data you possess, how it needs to be protected, who should have access to it, and what happens if it gets out. Therefore, any organization that is thinking about authentication should also be thinking about data governance.
As you go down the line, one critical element of data governance is transparency. Without visibility into your organization's data assets, your cybersecurity team doesn't know what it needs to protect or how to do it. That's why a monitoring system like LogicMonitor will prove invaluable to daily operational efficiency in addition to cybersecurity and compliance.

LogicMonitor offers a set of cloud-based infrastructure monitoring tools designed to support security, access, visibility, and compliance at scale. Interested in learning more about how LogicMonitor can help your business achieve observability and reduce your wasted IT resources? Try it free today and see for yourself.
Readers, let me introduce a very interesting technique known as Google Hacking. It is a key investigative resource, whether we are performing a pentest or protecting an organization or an individual asset. Google Hacking is the practice of using the search engine's capabilities to attack, or to better protect, a company's information. The information available on a company's web servers is likely to be in Google's databases.
Explaining… A misconfigured server may expose all sorts of business information on Google, and it is even possible, though not trivial, to reach files from a site's databases through Google. Consider, as an example, Google's "cache", where it stores older versions of all sites that were once indexed by its robots. This feature allows you to access pages that have already been taken down, since they still exist in Google's database.
EVOLUTION OF THOUGHT
Let’s imagine that at some point in the history of an organization’s site, a more sensitive information was available. After a while, the webmaster has been alerted that information removed from the site. However, if the page on the site has already been indexed by Google, it is possible that even having been altered or removed, can still access it using the Google cache feature. A simple example of what we can find on Google, and you can come back to haunt the person who provided such information online is as follows: type in the search box cpf + curriculum. Certainly will return multiple results with links where we can find full name, address, phone, social security number, identity and some more information from people who publish their data on the internet. Having knowledge of how these data can be used in a malicious way, we can be more aware to publish any information on our internet.
COMMANDS TO USE GOOGLE
intitle: Searches for content in the title (title tag) of the page.
When using the intitle command, it is important to pay attention to the syntax of the search string, since the word that follows soon after the intitle command is regarded as the search string. The “allintitle” breaks this rule, telling Google that all the words that follow are to be found in the title of the page, so this last command is more restrictive.
inurl: Finds text in a URL. As with the intitle operator, using the inurl operator may seem a relatively simple task that does not warrant much attention. But bear in mind that a URL is more complicated than a simple title, so the behavior of the inurl operator can also be complex. Just like the intitle operator, inurl also has a companion, allinurl, which works identically but restrictively, showing results only when all the strings are found.
filetype: Searches for a particular type of file. Google searches more than just web pages. You can search many different file types, including PDF (Adobe Portable Document Format) and Microsoft Office documents. The filetype operator can assist you in finding specific types of files. More specifically, we can use this operator to search for pages ending in a particular extension.
allintext: Finds a string of text within a page. The allintext operator is perhaps the simplest to use, as it performs the most familiar search-engine function: locating a term in the text of a page. Although this operator can be used for broad searches, it is most useful when you know the search string can only be found in the page text. Using the allintext operator can also serve as a shortcut for finding the string anywhere except in the title, URL, and links.
site: Directs the search to the content of a particular website. Although technically a part of the URL, the address (or domain name) of a server is best searched with the site operator. The site operator allows you to search only the pages that are hosted on a particular server or domain.
link: Searches for links to a given page. Instead of a search term, the link operator takes a URL or server name as an argument.
inanchor: This operator can be regarded as a companion to the link operator, since both look for links. The inanchor operator, however, searches the text representation of a link, not the actual URL. inanchor accepts a word or phrase as an argument, such as inanchor:click. This type of search is especially useful when we begin to study ways to find correlations between sites.
daterange: Searches for pages published within a "range" of dates.
You can use this operator to find pages indexed by Google within a given date range. Every time Google crawls a page, the date in its database is updated. If Google finds some obscure website, it may index it only once and never return to it. If you find that your searches are clogged with these kinds of stale pages, you can remove them from your results (and get more up-to-date results) through effective use of the daterange operator.
cache: Shows the cached version of a given page. As discussed, Google keeps "snapshots" of indexed pages, which we can access via the cached link on the search results page. If you want to go straight to the cached version of a page, without first querying Google to get the cached link on the results page, you can simply use the cache operator in a query.
info: Shows Google's summary information for a page. The info operator shows the summary information for a site and provides links to other Google searches that may relate to that site. The parameter given to this operator must be a valid URL.
HOW TO USE GOOGLE
Searching for database files on government websites:
site:gov filetype:sql OR filetype:mdb
Searching for a specific server
inurl:”powered by” site:test.com
Searching for e-mail files in .MDB format:
filetype:mdb inurl:email
This search finds phone numbers available on intranet pages that Google has indexed:
inurl:intranet intext:"phone"
A search like this can identify many Oracle subdomains:
site:oracle.com -site:www.oracle.com
Detecting systems using port 8080:
inurl:8080 -intext:8080
Finding VNC servers:
intitle:VNC inurl:5800
intitle:"VNC Viewer for Java"
Finding Active Webcam
“Active Webcam Page” inurl:8080
Finding Apache 1.3.20:
“Apache/1.3.20 server at” intitle:index.of
Asterisk VOIP Flash Interface
intitle:”Flash Operator Panel” -ext:php -wiki -cms –inurl:as
Possible flaws in web applications:
inurl:index.php?id=
Readers, I could continue citing many more "Google dorks," but I have specified the most important ones; Google can help you find the others. It is worth emphasizing the value of this search engine as a testing tool, as it is the best search engine available today.
Google has many features that can be used during a penetration test, and it is rightfully considered one of the best tools for hackers because it allows access to almost any type of information you want. Google is the main tool for collecting information about our target. It is the best public source of information about anything regarding the target: sites, advertisements, partners, social networks, groups, etc.
As computing has become more distributed, security has evolved in multiple ways to address new use cases and cyberthreat vectors. One trend is edge computing, which is transforming how data is being processed and delivered. To briefly summarize edge computing, it involves making use of networks of edge devices, such as IoT, to process and compute data on-device in geographic proximity to the end-user, or where the data is ultimately delivered. Edge computing helps solve a number of problems, such as reducing costs of housing applications in a centralized data center, as well as reducing latency issues when processing in far-flung cloud clusters. The era of 5G, along with more powerful compute power packed into ever-smaller devices, should only accelerate the trend toward edge computing, making it more effective and economically feasible. Prominent use cases for edge computing today include augmented reality, self-driving cars, gaming, and smart cities—all areas that require super-fast processing and instant response.
Edge Security & Risks
However, this decentralized approach to computing at the edge requires a decentralized approach to data security. One such approach is edge security. The premise of edge security is that if everything is encrypted, and is transmitted encrypted, you should never have to worry about any resource (mostly data) being compromised. Thus, if all the resources on a device, anywhere in your network, are treated like those of an edge device, it should be exceedingly difficult to inappropriately access that resource, its applications, and the data contained on that device. If you encrypt all the data and file access, then, theoretically, you will not have to worry about these resources being shared, stolen, or inappropriately accessed by a threat actor.
Implementation of the edge security approach, as well as realization of the benefits, is not without its shortcomings and obstacles. Someone must configure these resources, set up encryption, enable management tools, and ultimately, for an end user, have some level of privileges to decrypt information for daily use. In addition, while data transmission can be completely transparent to the end user, the certificates and configuration of encryption also require privileges to configure. This is where zero trust and just-in-time privilege management can help significantly. Nonetheless, a gap remains in edge security for authentication.
Privileged account(s) are always needed to build out the edge security infrastructure, and they must be secured like any other highly sensitive privileged account. If not, a threat actor’s best chance of success at cracking edge security becomes the accounts used to manage edge security in the first place. Traditional privileged access management requires that these accounts become persistent, or always-on, and have a quantifiable risk based on time of exposure. What is needed is to apply an ephemeral method to control authenticated access to edge devices.
When does Edge Security make Sense?
Edge security is not for everyone. Many organizations will only adopt edge security on mobile devices, IoT, or for connectivity to the cloud. When implemented in a hybrid architecture, edge gateways commonly serve as a component to provide connectivity from hardened and encrypted environments to traditionally managed resources. Similarly to edge devices everywhere, these gateways must be configured and managed using some form of privileged account. A successful threat actor will most likely target these devices (if exposed to the Internet) for misconfigurations, vulnerabilities, or poor privileged account management in order to infiltrate an edge security hardening environment.
Managing or Accepting Edge Security Risks
Those of you who do adopt an edge security stance need to be mindful that the attack of the management tools, utilities, and certificates used for encryption and hardening present a genuine risk. And, unless you fully embrace a universal approach to privileged access management, an edge security approach will not be worth the considerable risks. If a threat actor obtains access to any of the applications used for managing edge security, or edge gateway security, then it is “game over”. These edge technologies themselves become the keys to the kingdom and must be guarded with the utmost diligence. So, what privileged access management disciplines will apply here? The answer—all three:
- Privileged Password & Session Management – The ability to centrally store, encrypt, retrieve, and automatically change passwords based on a workflow, and to enable access based on ephemeral, just-in-time criteria. Every session involving privileged access should be fully monitored and granularly managed.
- Secure Remote Access – The ability to manage remote access between any two edge systems and fully record visible screen activity, indexing of issued commands, and the ability to automatically identify inappropriate activity.
- Endpoint Privilege Management – The ability to remove administrative rights on any edge device and implement the concept of least privilege—regardless of device.
When applied to edge security, this universal privilege management strategy can mitigate the following four risks:
- Edge security applications, devices, and resources are placed under credential management. All passwords and certificates used for management are automatically rotated and unique per device. This ensures all management access is appropriate and approved, helping to prevent inappropriate lateral movement between edge devices due to reused passwords.
- All edge security device access for management is monitored and secured. Whether the access is from an internal resource, or a trusted vendor, access is encrypted, secured, and monitored per session to ensure all activity is appropriate and that no inappropriate changes are conducted.
- Privilege Access Management can be implemented to enforce zero trust and just-in-time access to privileged accounts used for edge management, ensuring all privileged access is granted only for a limited time, and only when appropriate (a minimal sketch of this just-in-time idea follows this list).
- Encryption works when a threat actor has no chance to decrypt the information. With the correct privileged account, certificates, and access, all encrypted data can ultimately be decrypted into a usable form. If it could not, it would serve no purpose. Organizations take great pains to segment data and encrypt portions to prevent reassembly. Consider the PCI DSS requirements for credit card information as an example. If you can remove administrator rights to applications and the certificates used for encryption, you can mitigate some of the risk of a rogue administrator attempting to tamper with these controls. Endpoint privilege management is designed specifically to address this security risk. Applications that need superuser credentials to operate are obfuscated from the end user, and management can be conducted with explicit restrictions in order to constrain inappropriate access. A potential threat actor no longer has the privileges to undermine the solutions used to manage and monitor edge security.
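As a rough illustration of the just-in-time idea referenced in point 3 above, here is a minimal sketch of issuing a privileged credential that expires automatically; commercial PAM products implement this with vaulting, rotation, session monitoring, and auditing far beyond this toy example.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class EphemeralCredential:
    token: str
    expires_at: float

    def is_valid(self) -> bool:
        """A credential is honored only inside its time-to-live window."""
        return time.time() < self.expires_at

def grant_just_in_time(ttl_seconds: int = 900) -> EphemeralCredential:
    """Issue a short-lived privileged token instead of a standing account."""
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),       # random, single-use secret
        expires_at=time.time() + ttl_seconds,  # e.g., a 15-minute window
    )

credential = grant_just_in_time()
assert credential.is_valid()  # usable now; automatically invalid after TTL
```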
There is a lot of merit to, and legitimate use cases for, edge security, but only if your enterprise first has a strong security foundation, including privileged access security. The technology used to manage edge security resources suffers from the same flaws as almost every other information technology solution when privileged attack vectors are involved, including blockchains. Without proper privileged access management to control privileged access, manage passwords, monitor for inappropriate activity, and remove administrative rights, the edge security solution itself could be the source of the breach.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook. | <urn:uuid:1c77efcb-0221-4b01-8422-7799f597b06e> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/edge-security-privilege-management-what-you-need-to-know | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00188.warc.gz | en | 0.923604 | 1,630 | 3.203125 | 3 |
If you’re not careful and you don’t use anti-malware software, you might end up with various viruses, Trojans and worms on your computer. But, according to Bitdefender researchers, you might even get saddled with a hybrid or two of this different types of malware.
The researchers have dubbed these hybrids “frankenmalware”, and out of some 10 million detected and analyzed malicious files, they identified over 40,000 of these “malware sandwiches”.
How does it happen, you ask?
“A virus infects executable files; and a worm is an executable file,” explains Loredana Botezatu. “If the virus reaches a PC already compromised by a worm, the virus will infect the exe files on that PC – including the worm. When the worm spreads, it will carry the virus with it. Although this happens unintentionally, the combined features from both pieces of malware will inflict a lot more damage than the creators of either piece of malware intended.”
To explain how the symbiosis works, she shares the example of the Virtob virus/ Rimecud worm “collaboration”.
The Rimecud worm spreads via file-sharing apps, USB devices, Microsoft MSN Messenger and locally mapped network drives. Besides that, it also steals passwords by injecting itself into the explorer.exe process, opens a backdoor that will allow it to download additional malware from a C&C server and – if the computer has remote control software installed – allows cyber criminals to access it and control it.
As it turns out, Bitdefender has recently begun spotting the Virtob virus attached to the aforementioned worm. The virus – which also opens a backdoor, contacts IRC C&C servers, modifies a host of files – infects executable files and, as the worm itself is an executable, it is also likely to be infected.
Now, apart from the unfortunate fact that a computer hosting this piece of “frankenmalware” is now contacting two C&C servers, has two backdoors open, two attack techniques active and various spreading methods at its disposal, there is also the problem of whether AV solutions will be able to detect and remove both – or either one.
“Imagine that a worm is infected by a file infector (virus),” posits Botezatu. “And an AV detects the file infector first and tries to disinfect the files, which include the worm. In some rare cases disinfecting compromised files leaves behind clean files that are at the same time altered (not identical to the original anymore). They maintain their functionality but are slightly different in form. As most files are detected according to signatures and not based on their behavior (heuristically), an altered worm (disinfected along with other files that have been compromised by a file infector and disinfected by an antivirus) may not be caught anymore by the signature applied to the original file (that had been modified after disinfection). Disinfection might this way lead to a mutation that can actually help the worm.” | <urn:uuid:86693144-9d92-4c2a-a08c-029580808bde> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2012/01/25/frankenmalware-active-in-the-wild/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00188.warc.gz | en | 0.940766 | 652 | 2.875 | 3 |
Do Business Rules Define the Operational Boundaries of an Organization?
Have you heard business rules described that way? You certainly have to stop and think about it for a minute.
Without additional explanation the term “boundaries” could easily be misleading. For example the term could be interpreted incorrectly as suggesting business rules define scope – i.e., the operational edges of an organization or an initiative. They don’t.
Let’s revisit the meaning of “business rule”. Business rules are guides or criteria used to:
Respectively, yes, business rules therefore establish “boundaries” for distinguishing:
- shape conduct or actions
- form judgments of behavior
- make decisions
But these “boundaries” (delimitations) are a result of applying business rules, not what business rules fundamentally are (i.e., guides or criteria).
Furthermore, these boundaries (delimitations) are not as rigid as the term “boundary” suggests. Depending on enforcement level, many business rules can be overridden with proper authority or explanation, or even ignored (e.g., guidelines). Level of enforcement is an organizational decision. Note that the transgressing behavior in such cases is nonetheless still within the scope of organizational operations.
My bottom line re “boundary”. Describing business rules can …
- acceptable or desirable conduct or actions from what is unacceptable or undesirable.
- proper behavior from what is improper behavior.
- correct or optimal decisions from what are incorrect or suboptimal decisions.
But otherwise sure, business rules do set up boundaries (delimitations) for how activity in the organization is undertaken. And that’s especially important when you start thinking about smart processes – as my recent posts suggest.
- wrongly suggest scope.
- incorrectly assume all business rules are strictly enforced.
Ronald G. Ross
Ron Ross, Principal and Co-Founder of Business Rules Solutions, LLC, is internationally acknowledged as the “father of business rules.” Recognizing early on the importance of independently managed business rules for business operations and architecture, he has pioneered innovative techniques and standards since the mid-1980s. He wrote the industry’s first book on business rules in 1994. | <urn:uuid:eb9f9a09-48bb-4e58-8d3e-bb3343176e93> | CC-MAIN-2022-40 | https://www.brsolutions.com/do-business-rules-define-the-operational-boundaries-of-an-organization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00188.warc.gz | en | 0.923725 | 508 | 2.5625 | 3 |
Supervised vs. Unsupervised Learning; Which Is Best?
Supervised and unsupervised learning models work in unique ways to help businesses better engage with their consumers.
Smart technology is everywhere, permeating almost every aspect of daily life. Consumers have come to expect more information, more automation, faster, all at the click of a button. To keep up, companies must continue to adapt and implement the latest technologies or risk falling behind.
The advancement of Artificial Intelligence (AI) in business has compounded this need. Security systems can turn fingerprint and face scans into biometric data to unlock doors and smartphones. Banking systems can detect unusual purchasing patterns and automatically send a message for human verification of transactions. Voice assistants on smartphones use Natural Language Processing to process audio and return answers to a wide range of requests. All these remarkable technologies are constantly becoming more advanced through the use of Machine Learning (ML) algorithms.
Machine learning is a subset of AI. More specifically, it is an application of artificial intelligence that provides systems with the ability to learn and improve from data. Much the same as humans learn from everyday experiences, ML gradually improves predictions and accuracy over multiple iterations. For ML models, training data is provided from IoT devices, collected from transactions, or recorded from social media. Data science algorithms help sift through, classify, and group information together based on various parameters for these machines. With the data processed and combined, ML can then create models that accurately predict certain human behavior patterns and initiate responses accordingly.
For example, when a customer browses online to purchase their next mobile phone and has narrowed down their choices, the site offers them comparisons with other phones or accessories that purchasers can make at the same time. This model of response is created from data that was processed from earlier similar purchases, enabling the machine to create a model that will help new customers make similar, informed choices.
ML functions on three types of algorithms: supervised, unsupervised, and reinforcement. In reinforcement learning, machines are trained to create a sequence of decisions. Supervised and unsupervised learning have one key difference: supervised learning uses labeled datasets, whereas unsupervised learning uses unlabeled datasets. By "labeled" we mean that the data is already tagged with the right answer.
The supervised learning approach in ML uses labeled datasets that train algorithms to classify data or predict outputs precisely. The model uses the labeled data to measure the relevance of different features to gradually improve model fit to the known outcome. Supervised learning can be grouped into two main types:
Classification
A classification problem uses algorithms to classify data into particular segments. An everyday example is an algorithm that helps reject spam for a primary e-mail inbox or an algorithm that lets a user block or restrict someone on social media. Some common classification algorithms include Logistic Regression, K-Nearest Neighbors, Random Forest, Naïve Bayes, Stochastic Gradient Descent, and Decision Trees.
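For instance, a minimal supervised classification model built with the scikit-learn library (assuming it is installed) could look like this:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled data: flower measurements (X) tagged with the known species (y)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))  # share classified correctly
```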
Regression
This is a statistical and ML method that uses algorithms to measure the relationship between a dependent variable and one or more independent variables. With regression models, the user can make cause-effect predictions based on various data points. In a business, for example, this could involve predicting advertising revenue growth trajectory. Some common regression algorithms include Ridge Regression, Lasso, Neural Network Regression, and Logistic Regression.
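A minimal regression sketch, using made-up advertising figures purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical labeled data: ad spend (independent) vs. revenue (dependent)
ad_spend = np.array([[10], [20], [30], [40], [50]])
revenue = np.array([25, 41, 62, 79, 103])

model = LinearRegression().fit(ad_spend, revenue)
print(model.predict([[60]]))  # predicted revenue for a new spend level
```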
With unsupervised learning, ML algorithms are used to examine and group unlabeled datasets. Such algorithms can uncover unknown patterns in data without human supervision. There are three main categories of algorithms:
Clustering
Based on similarities or differences, unlabeled data is grouped using clustering techniques. For example, if a business is working on market segmentation, the K-means clustering algorithm will allocate similar data points to groups that represent a set of parameters. This could group based on location, income levels, age of purchasers, or some other variable.
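A minimal K-means sketch over hypothetical customer data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: hypothetical customers described by (age, annual income)
customers = np.array([[22, 30], [25, 35], [47, 90],
                      [52, 110], [33, 60], [36, 58]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster index discovered for each customer
```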
Association
If a user wants to identify relationships between variables within a dataset, the association method of unsupervised learning is useful. This is the method used to create the prompt "other customers also looked at", making it ideally suited for recommendation engines. If 15 customers purchased a new phone and also purchased the headphones to go with it, the algorithm will recommend headphones to all customers who put a phone in their shopping cart.
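A toy illustration of the association idea, counting which items co-occur with a phone across hypothetical shopping baskets (production systems use dedicated algorithms such as Apriori):

```python
from collections import Counter

# Hypothetical purchase baskets
baskets = [
    {"phone", "headphones", "case"},
    {"phone", "headphones"},
    {"laptop", "mouse"},
    {"phone", "case", "charger"},
]

# Count items bought together with "phone" to drive "customers also bought"
co_occurrence = Counter()
for basket in baskets:
    if "phone" in basket:
        co_occurrence.update(basket - {"phone"})

print(co_occurrence.most_common(2))  # headphones and case top the list
```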
Dimensionality Reduction
Sometimes a dataset has an unusually high number of features. Dimensionality reduction helps reduce this number without compromising the integrity of the data. It is a technique commonly applied before processing data. An example is the removal of noise from an image to enhance its visual clarity.
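For example, principal component analysis (PCA) can compress the 64 pixel features of scikit-learn's digits dataset into 10 components while retaining most of the signal:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 64 pixel features per image
reduced = PCA(n_components=10).fit_transform(X)
print(X.shape, "->", reduced.shape)   # (1797, 64) -> (1797, 10)
```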
Differences Between Supervised and Unsupervised Learning
Once the principles of supervised and unsupervised learning are understood, it is simple to understand the differences between them.
The distinction between labeled and unlabeled datasets is the key difference between the two approaches. Supervised learning makes use of labeled datasets to train classification or prediction algorithms. The labeled “training” data is fed in, and the model iteratively adjusts how it weighs different features of the data until the model has been fitted appropriately to the desired outcome. Supervised learning models are far more precise than their counterpart approach. However, they require humans to be involved in the data processing procedure to ensure the labels on the information are appropriate.
An example is that a supervised learning model can predict flight times based on peak hours at an airport, traffic congestion in the air, and weather conditions (besides other possible parameters). But humans have to intervene to label the datasets to train the model on how these factors can affect flight timings. A supervised model depends on knowing the outcome to conclude that snow is a factor in flight delays.
In contrast, unsupervised learning models are constantly working without human interference. They find and arrive at a structure of sorts using unlabeled data. The only human help needed here is for the validation of output variables. For example, when someone shops for a new laptop online, an unsupervised learning model will figure out that the person belongs to a group of buyers who buy a set of related products together. However, it is the job of a data analyst to validate that a recommendation engine offers options for a laptop bag, a screen guard, and a car charger.
Results vs Insights
The goals with supervised and unsupervised learning are different. While the former is about the prediction of outcomes for new data that is introduced, the latter is about getting new insights from massive amounts of new data. In supervised learning, a user will know what results they can expect, whereas in unsupervised learning, they hope to discover something new and unknown.
Models created from supervised learning are ideally suited to help with spam detection or processing sentiment analysis. These models are also used for things like weather forecasting or predicting changes in pricing. Unsupervised learning is perfectly suited to look for anomalies and outliers of any kind. Supervised learning works well for recommendation engines and understanding customer profiles.
When working with supervised learning for model creation in ML, the tools needed are quite simple — often programs such as R or Python suffice. However, unsupervised learning requires computational power to work with massive amounts of unlabeled data.
Disadvantages of Supervised and Unsupervised Learning
As with any technology, both supervised and unsupervised learning models have their disadvantages.
Supervised learning can take a long time to train, and it requires human expertise for label validation — both for inputs and outputs. Working on the classification of big data poses enormous challenges in supervised learning, but once labeled, the results are dependable.
Unsupervised learning sometimes produces completely erroneous results unless there is some form of human intervention to validate the results. Quite in contrast to supervised learning, unsupervised learning can work on any amount of data in real-time but, since the machine teaches itself, transparency on classification is lower. This increases the chances of poor results.
Choosing Between Supervised and Unsupervised Learning
So how does an organization figure out which is the best option for them? The answer lies in the exact context of their requirements and how the data scientists they work with evaluate and organize the bulk of their data. If an organization needs to implement data processing structures, they first need to think about the following things:
What does the input data look like?
They must examine the data and assess if it is labeled or unlabeled.
Does the organization have the time and internal expertise to validate and label? Is the outcome even known?
What are the goals that the organization wants to achieve?
Do they want to solve an existing recurring problem, or would they like the algorithm to discover and solve an unknown problem?
What are the algorithm options?
Does the organization have algorithms of identical dimensionality where they know the attributes of each feature and how many features exist? Can they determine if these features will provide the necessary support to their data volume and its structure?
The decision of whether to opt for a supervised or an unsupervised ML approach depends on the context, the basic assumptions that can be made about the data at hand, and the final application. The use of either form can change over time as the needs of the organization change.
While an organization may begin training with unlabeled data and therefore use the unsupervised approach, with time, the correct labels will be identified and the machine can switch to supervised learning. This can happen over various stages of labeling. On the other hand, the supervised learning data approach may not be providing the insights required, and unsupervised learning may discover unknown patterns and give deeper insight into business mechanisms.
Getting Started With Machine Learning
So many organizations simply aren’t using ML to their full advantage. The Alteryx Machine Learning Platform™ is a powerful no-code, low-code tool that automates data processing to help you deploy supervised and unsupervised models. Easily and quickly build complex ML models to solve complex business problems. Get started today and turn your big data into actionable insights and predictions. | <urn:uuid:f6d32847-bd98-4315-961f-86eeb682e20d> | CC-MAIN-2022-40 | https://www.alteryx.com/glossary/supervised-vs-unsupervised-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00388.warc.gz | en | 0.926837 | 2,112 | 3.328125 | 3 |
The public sector is a target for hackers. Here’s how a robust security plan can mitigate potential harm to public health and safety.
On Earth Day, companies might reflect on their carbon footprints, especially as we become acutely aware of the threats of climate change worldwide. While operational changes like energy conservation are essential planet-saving strategies, ramping up your company’s cybersecurity plan can also have a significant, positive impact on the environment.
Read on to learn why the energy industry is especially vulnerable to breaches and hacks, and how cybersecurity can mitigate environmental risks and improve health and safety in the public sector.
New tech, more threats
Like in most industries, critical infrastructure companies have undergone a digital transformation. Now, these organizations rely heavily on cloud-based technology and remote operations.
While this modernized operational technology (OT) is a boon to the industry in many ways, it has also enabled threat actors to interfere with critical systems that manage resources like water, power, and gas.
Not your average hackers
The utilities and energy sector ranks third in the world for the most cyberattacks, and these attacks can have serious implications. Those behind the attacks, sometimes representing a nation-state, aren’t just after data—they seek to disrupt public access to essential resources.
Additionally, organizations in the utilities and energy industry are more likely to pay the attackers’ ransom because of the resources that are at stake.
In 2020, the Cybersecurity and Infrastructure Security Agency (CISA) issued an advisory to make the industry aware of the threat to internet-accessible OT. The alert pointed to the likelihood of malicious attacks on civilian infrastructure by foreign powers and hackers’ ability to interfere with physical processes in the public sector.
In February of 2021, unidentified hackers accessed a Florida water treatment plant’s remote network and manipulated the software that controls the water treatment process. They increased levels of sodium hydroxide, a corrosive chemical, in an attempt to poison the water supply, yet employees were able to thwart the attack before any harm was done.
Though the public was spared, the implications of the attack remain devastating, as communities rely on access to safe drinking water.
The same year, hackers managed to successfully cut off the supply of gas to the entire East Coast with a ransomware attack on the Colonial Pipeline. Natural gas shortages and price surges led to the use of highly-polluting coal, a giant step backward for the government’s clean energy goals.
Other major attacks on critical infrastructure include:
- Russia’s infamous 2015 attack on Ukraine’s power grid.
- An attack on a petrochemical company in Saudi Arabia designed to trigger an explosion (which, thankfully, failed) in 2018.
- The SolarWinds cyber attack in 2020.
- The ransomware attack on the world’s largest meat processor, which led to shortages in the supply chain.
Last year, the U.S. government issued an executive order on cybersecurity in response to supply chain attacks and threats to national security. The Biden administration pledged to invest more resources in securing IT and OT, and implementing zero-trust architecture for the federal government’s cybersecurity plan (which Dashlane’s platform is also built on).
Similarly, the CISA’s advisory from 2020 urged organizations to limit internet-accessible OT where possible. For devices that must remain connected, they recommend public sectors strengthen their networks by:
- Enabling a VPN for communications with all remote devices
- Encrypting network traffic using a VPN
- Enforcing a strong password security policy
- Requiring periodic password updates
- Enforcing two-factor authentication for remote connections
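The password-related items above can be made mechanical. Here is a small Python sketch of strength and rotation checks; the 12-character rule and 90-day window are assumptions for illustration, not values from the CISA advisory.

```python
import re
import time

MAX_PASSWORD_AGE_SECONDS = 90 * 24 * 3600  # illustrative rotation window

def strength_problems(password: str) -> list:
    """Return the policy rules this password fails."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)):
        problems.append("needs both upper- and lower-case letters")
    if not re.search(r"\d", password):
        problems.append("needs a digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("needs a symbol")
    return problems

def needs_rotation(last_changed_epoch: float) -> bool:
    return time.time() - last_changed_epoch > MAX_PASSWORD_AGE_SECONDS

print(strength_problems("correct horse"))
# ['needs both upper- and lower-case letters', 'needs a digit']
print(needs_rotation(time.time() - 100 * 24 * 3600))  # True
```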
Get access to all the right tools with Dashlane
Dashlane provides trusted tools for organizations in the public sector to protect their networks, such as built-in multi-factor authentication, a VPN, a strong password generator, and reminders to update passwords periodically. These simple steps can make all the difference in protecting systems that control our nation’s vital resources.
Protect your organization today. Get started with a free trial, or download our playbook for utilities and energy providers. | <urn:uuid:86b56012-818e-4b41-934d-ec6cfa6795c1> | CC-MAIN-2022-40 | https://blog.dashlane.com/how-cybersecurity-can-save-environmental-health/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00388.warc.gz | en | 0.9268 | 857 | 2.703125 | 3 |
Our data is a precious commodity and there are plenty of people who would like to get their hands on it, from spouses and marketing teams to crooks and state-sponsored spies. Because of that, tools like Tor and Virtual Private Networks (VPNs) are growing in popularity. But while both tools can enhance your online anonymity, they're as different as apples and oranges.
What is Tor?
The Tor (The Onion Router) network protects users from tracking, surveillance, and censorship. It is based on free and open-source software and uses computers run by volunteers. Onion routing was created in the 1990s by US Naval Research Laboratory employees to shield national intelligence communications. Later, it was enhanced further by the Defense Advanced Research Projects Agency (DARPA) and patented by the Navy. Since 2006, development of Tor has been conducted by a nonprofit organization called The Tor Project.
The Tor network can be used to access the regular Internet, where it hides your IP address from everyone, including the people operating the Tor network itself, or the Dark Web, where everyone's IP address is hidden from everyone else.
How does Tor work?
When you use Tor, your traffic connects to the Internet through a "Circuit": a collection of three computers, or Tor "nodes", that is changed every ten minutes. Your traffic is protected by multiple layers of encryption, which prevents anyone from snooping on it, including most of the Tor network itself. Each computer in a Circuit peels back one layer of encryption, to reveal information that only it can see. The nodes work like this (a code sketch of the layered encryption follows the list):
- The Entry Guard is where your traffic enters the Circuit. It can see your IP address and the IP address of the middle node.
- The middle node can see the IP addresses of the Entry Guard and Exit Node.
- The Exit Node is where your traffic leaves the Circuit. It can see the IP address of the middle node and your traffic's destination. The Exit Node behaves a bit like a VPN, so any service you use on the Internet will see the Exit Node's IP address as the source of your traffic.
- If you are using the Dark Web, both you and the service you are connecting to have their own circuits, which meet at a Rendezvous Point.
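Here is a conceptual sketch of that layered encryption in Python, using the cryptography package. It assumes one symmetric key per node purely for illustration; real Tor negotiates keys per circuit and uses its own cell formats.

```python
from cryptography.fernet import Fernet

# One key per node in the circuit (illustrative; Tor negotiates these per circuit).
entry = Fernet(Fernet.generate_key())
middle = Fernet(Fernet.generate_key())
exit_node = Fernet(Fernet.generate_key())

message = b"GET https://example.com"

# The client wraps the message for the exit node first, then middle, then entry.
onion = entry.encrypt(middle.encrypt(exit_node.encrypt(message)))

# Each hop peels exactly one layer; only the exit node ever sees the plaintext.
after_entry = entry.decrypt(onion)
after_middle = middle.decrypt(after_entry)
print(exit_node.decrypt(after_middle))   # b'GET https://example.com'
```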
How do I use Tor?
The most uncomplicated way to use the Tor network is through the Tor Browser. All you have to do is download and install the latest version from the official website and use it like a regular web browser. There is no learning curve; the Tor browser is based on Firefox and is as easy to use as any browser.
Is Tor illegal?
Tor is not illegal in most countries, including the United States. No one in America has been charged by law enforcement purely for using the network. However, Tor use may raise some eyebrows because it’s one of the most popular ways to access the Dark Web.
What is the difference between Tor and a VPN?
To understand the difference between Tor and a VPN, you must answer questions like, what is a VPN? A VPN routes traffic from your device to a VPN provider, through an encrypted tunnel. The encrypted tunnel prevents your ISP, rogue WiFi access points, or any other interlopers, from spying on your traffic before it reaches your VPN provider.
Your traffic joins the Internet from the VPN provider and uses your VPN provider's IP address, so it appears to originate there.
Here are some important differences between the two technologies:
- There are many VPN services to pick from; there is only one Tor network.
- A VPN assumes you trust your VPN provider.
- Tor assumes you do not trust the operators of the Tor network.
- Your VPN provider aims to provide a connection that is fast and stable.
- Tor aims to provide a connection that is resistant to advanced attacks.
- VPN service providers are usually run by businesses answerable to local laws.
- Tor is run by volunteers who can't see what is passing through their servers.
Should I use a VPN with Tor?
The Tor Project discourages the use of both technologies together:
Generally speaking, we don't recommend using a VPN with Tor unless you're an advanced user who knows how to configure both in a way that doesn't compromise your privacy.
What is better, VPN or Tor?
The choice of which technology is better is determined by your threat model, which will vary from one person to another. Broadly speaking, you can expect Tor to be slower than a VPN, but more secure against a wider range of threats, including threats that many Internet users are unlikely to encounter.
A good VPN service that uses the latest VPN protocol and provides multiple servers can offer speeds that are fast enough for gaming or video streaming, while bypassing geo-blocks, masking your IP address, and protecting you from rogue WiFi hotspots, ISP logging and other similar threats. | <urn:uuid:55014a75-95d0-4de2-bce2-eba3e8d8f2e4> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2021/09/tor-vs-vpn-what-is-the-difference | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00388.warc.gz | en | 0.952773 | 1,011 | 2.84375 | 3 |
The Max Degree of Parallelism, or MAXDOP, is one of the best-known settings in the SQL Database Engine. There are guidelines for how to configure it that have intricate dependencies on the type of hardware resources you're running on, and there are several occasions where someone might need to veer off those guidelines for more specialized workloads.
There are plenty of blogs on these topics, and the official documentation does a good job of explaining these (in my opinion). If you want to know more about the guidelines and ways to override for specific queries, refer to the Recommendations section in the Configure the max degree of parallelism Server Configuration Option documentation page.
But what does MAXDOP control? A common understanding is that it controls the number of CPUs that can be used by a query – previous revisions of the documentation used this abstraction. And while that is a correct abstraction, it’s not exactly accurate. What MAXDOP really controls are the number of tasks that can be used in each branch of a query plan.
For most use cases, talking about the number of CPUs used in a query, or the number of concurrent tasks scheduled, won't have a practical difference. But sometimes it's relevant to know the full story, to go beyond the generalization, especially when troubleshooting those ginormous complex plans that involve dozens of operators – you know who you are 😊
Defining a few terms
So before we dive deeper into it, let's make sure we're all on the same page regarding a few important terms that will be used in this post, and that you find in the documentation. These are my simplified definitions:
- A request is a logical representation of a query or batch an application sends to the Database Engine. Requests can be monitored through the sys.dm_exec_requests DMV, can be executed in parallel (multiple CPUs) or in a serial fashion (single CPU), have a state that reflects the state of underlying tasks, and accumulate waits when resources needed are not available like a page latch or row lock.
- A task is a single unit of work that needs to be carried out for the request to be completed. A serial request will only have one active task, whereas a parallel request will have multiple tasks executing concurrently. Tasks can be monitored through the sys.dm_os_tasks and sys.dm_os_waiting_tasks DMVs, and also have a state (running, runnable, or suspended) that rolls up to the owning request's state.
- A worker thread, a.k.a worker, a.k.a thread is the equivalent of a CPU thread (see Wikipedia). I’ll use these terms interchangeably. Tasks that need to be executed are assigned to a worker, which in turn is scheduled to run on the CPU. Workers can be monitored through the sys.dm_os_workers DMV.
- A scheduler is the logical equivalent of a CPU. Workers are scheduled to actively carry out the task assigned to them, and in SQL Server most scheduling is cooperative, meaning a worker won’t cling to the CPU, but instead yield its active time (called the quantum, in 4ms chunks) to another worker waiting to execute its own task. Schedulers can be monitored through the sys.dm_os_schedulers DMV.
- DOP (Degree of Parallelism) designates the actual number of schedulers assigned to a given request (more accurately, the set of tasks belonging to a request).
- The MAXDOP server or database configuration, as well as the MAXDOP query hint, determine the DOP ceiling, the maximum number of schedulers that can be used during a request lifetime. It doesn’t mean they’ll all be used. For example, in a very busy server, parallel queries may execute with a DOP that’s lower than the MAXDOP, if that number of schedulers is simply not available. Hence the term “available DOP”.
- A parallel query plan branch. If you think of a query plan as a tree, a branch is an area of the plan that groups one or more operators between Parallelism operators (a.k.a Exchange Iterators). You can see more about the Parallelism operator and other physical operators in the Showplan Logical and Physical Operators Reference.
Bringing it together
Let's get on with the example. My SQL Server is configured with MAXDOP 8, CPU Affinity set for 24 CPUs across two NUMA nodes. CPUs 0 through 11 belong to NUMA node 0, CPUs 12 through 23 belong to NUMA node 1. I'll be using the AdventureWorks2016_EXT database, and have enlarged the tables in the query 50-fold, to have the time to run all the DMV queries before the following query was done:
SELECT h.SalesOrderID, h.OrderDate, h.DueDate, h.ShipDate
FROM Sales.SalesOrderHeaderBulk AS h
INNER JOIN Sales.SalesOrderDetailBulk AS d ON h.SalesOrderID = d.SalesOrderID
WHERE (h.OrderDate >= '2014-3-28 00:00:00');
Here is the resulting actual execution plan, divided into its 3 branches:
In this example, looking at the plan properties, we can see more information about how many threads SQL Server will reserve to execute this plan, along with the worker’s placement on the NUMA nodes:
The Database Engine uses information about the plan shape (which allows for 2 concurrent branches in this example – more on this further ahead) and MAXDOP configuration (which is 8), to figure out how many threads to reserve. 2 x 8 = 16.
The threads can be reserved across all NUMA nodes, or be reserved in just one NUMA node, and this is entirely dependent on scheduler load at the moment the reservation is made at runtime. In this case, the reservation was split between both NUMA nodes. But a few minutes later, when I executed the query again, thread reservations were all on NUMA node 1, as seen below from the actual execution plan:
Back to the 1st execution: if there are 3 branches in the execution plan, why can only 2 branches execute concurrently? That’s because of the type of join in this case, and the Query Processor knows this. A hash join requires that its build input be available before starting to generate the join output. Therefore, branches 2 (build input) and 3 (probe input) can be executed concurrently (for more details on hash joins, refer to the documentation on Joins). Once the build input is complete, then branch 1 can start. Only at that point, can branches 3 and 1 be executed concurrently.
The live execution plan gives us a view of this with branches 3 and 1 in flight, and 2 completed:
Ok, so how about tasks? Didn’t I say MAXDOP limits how many tasks are spawned for each branch? Let’s query the sys.dm_os_tasks DMV and find out what’s happening:
SELECT parent_task_address, task_address, task_state, scheduler_id, worker_address
FROM sys.dm_os_tasks
WHERE session_id = 100 -- my session ID
ORDER BY parent_task_address, scheduler_id;
With the following result:
Notice there are 17 active tasks: 16 child tasks corresponding to the reserved threads (8 for each concurrent branch), and the coordinating task. The latter can be recognized because the column parent_task_address is always NULL for the coordinating task.
Each of the 16 child tasks has a different worker assigned to it (worker_address column), but notice that all 16 workers are assigned to the same pool of 8 schedulers (5,8,10,11,12,18,20,22) – the MAXDOP limit. The fact they're on the same schedulers is by design: once the first set of 8 parallel tasks on a branch was scheduled, every additional task for any branch will use those same schedulers.
The coordinating task can be scheduled on any NUMA node, even on a node where no threads were reserved. In this case it was on scheduler 3, which is in NUMA node 0.
So, we’ve seen how a single request can spawn multiple tasks up to the limit set by reserved worker threads – 16 in this case. The threads reserved per branch are limited by MAXDOP – 8 in our example. Because each task must be assigned to a worker thread for execution, the number of tasks that can be running concurrently is therefore limited by MAXDOP. In other words, the MAXDOP limit is enforced at the task level, per branch, not at the query level.
If you want to know more, refer to the SQL Server task scheduling section in the Thread and Task Architecture Guide.
Pedro Lopes ( @SQLPedro ) – Principal Program Manager
Navigating Data Soup in the Aerospace Industry
Internet of Things
The Internet of Things (IoT) is defined as a network of physical objects embedded with sensors, software and connectivity. It's also considered an age when the number of devices on the internet exceeds the number of humans. By that measure we already have an internet of things today; but there are expected to be between 25 and 50 billion devices on the Internet of Things by 2020 (versus fewer than 10 billion people). But we're not ready. It is not yet easy (or possible, in many cases) for humans to understand, leverage or act upon the trillions of bits of data that will inevitably be upon us. The "things" in the IoT are indiscriminately throwing data onto the internet, and it's piling up unused.
Commercial aircrafts are arguably the most sensor-rich environments existing today. The aircraft itself is outfitted with thousands of sensors that make safe flight possible as part of an avionics system that includes communications, navigation, monitoring, flight-control, collision-avoidance, weather, and aircraft management. Additionally, hundreds of humans (passengers and crew) carry their own suite of sensors. These include computers, smartphones, portable games and wearable health monitoring devices. The quantity of data on, in, with and around a plane is staggering. Today much of that data is stored somewhere - maybe on the plane, maybe off - or transmitted someplace in between. However, little of that information is interconnected or unified, and the presentation for human consumption and decision making is limited. There is almost no synthesis happening–to identify insights at the intersections of disparate data sources. For the IoT to provide value for people, we’ll need to find ways to act upon all that data - ostensibly to improve the human condition. Let’s imagine that all the sensors on the aircraft are networked and we can look at all that data. Every sensor on the exterior and interior of the plane (flight deck, galley, cabin, avionics, smartphone, portable game and wearable technology) is collecting and transmitting information wirelessly into a soup of the data that surrounds this flight.
Making Sense Out of Soup
CIOs and CTOs everywhere are working to understand that data soup and to provide storage and access to it. But without a purpose, it's a large, daunting task where it would be easy to overspend and under-deliver. IoT becomes more interesting (and fun!) when the data can be synthesized to derive useful insights, and then rendered so that people can get real-time value from it. There is an approach that every CIO should have in their back pocket for IoT: an architectural approach coupled with Human-Centred Design (HCD).
When taking an architectural (or information-centric) approach, strip away anything that isn't information and create a fundamental architectural model of the information space. This exposes the basic concepts in the domain, rather than any particular software or hardware solution. These concepts (things like weight, airspeed and temperature) are not "owned" by any of the people, corporations or computers; they are the basic facts we all share. This foundation serves as a backdrop to look at what information can do for people.
HCD can help determine how to derive value from all that sensor data for people. HCD is an approach to problem solving that focuses on the behaviors, wants and needs of users in order to create solutions. The practice requires frequent user engagements, interdisciplinary collaboration and rapidly testing ideas.
HCD has been around for a long time, and no doubt most products you use today were developed this way. But for an IoT application we also need to take an information-centric approach to the idea generation. Determining what we do with the data can't simply be based on what users ask for, because they might not know yet what is possible. For an IoT application, user insight must be balanced with an understanding of the information. With this approach, technology becomes secondary as the information is prepared, structured and unified, making it easy to access, explore and navigate; the point is to "pre-organize" it so it can be put to use. This allows users to make and test hypotheses and to take action based on facts.
It allows us to build systems that leverage both the skills of humans (good at making judgments) and computers (good at making calculations).
“IoT becomes more interesting when the data can be synthesized to derive useful insights, and then rendered so that people can get real-time value from it”
Using Data to Improve Experiences
Back to the data soup in an airplane. Consider a flight attendant in the kitchen doing prep work. What information could be available on the aircraft to help make his job easier? Look first at the sensors and data, creating an information model. Then start asking questions about user experience. What might help the flight attendant succeed, make the passengers' and pilot's experience better, optimize efficiency and ease transitions? (A toy code sketch after the list below shows what synthesizing a few of these answers could look like.)
● What if food service was timed to arrive after the last threat of turbulence?
● What if the dehydrated man in 13c was given water first?
● What if the sleeping woman in 23a put in a request to not be awoken for food?
● What if the attendant knew which rows need language translation?
● What if the attendant could order ahead from the re-supply truck?
● How might we never have extra, or run out of, food and drink?
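Here is a deliberately toy Python sketch of that synthesis: unrelated sensor feeds are unified into one model, and a simple rule set turns them into a service plan for the crew. Every field, value and rule below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Passenger:
    seat: str
    hydration_pct: int   # hypothetically shared from a wearable
    asleep: bool         # from a seat sensor or a stated preference
    language: str

passengers = [
    Passenger("13C", hydration_pct=42, asleep=False, language="en"),
    Passenger("23A", hydration_pct=88, asleep=True,  language="en"),
    Passenger("14B", hydration_pct=90, asleep=False, language="fr"),
]
turbulence_clear_in_min = 12  # from the avionics/weather feed

def service_plan(pax, clear_in_min):
    # Start service after the last forecast turbulence; serve the thirstiest first.
    plan = [f"begin service in {clear_in_min} min (after turbulence)"]
    for p in sorted(pax, key=lambda p: p.hydration_pct):
        if p.asleep:
            plan.append(f"skip {p.seat} (asked not to be woken)")
        else:
            note = " [bring translation card]" if p.language != "en" else ""
            plan.append(f"offer water to {p.seat}{note}")
    return plan

print("\n".join(service_plan(passengers, turbulence_clear_in_min)))
```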
The aerospace industry is rich with sensor data and networking capabilities. What’s needed is a focus on people and information to determine exactly what can be done with the data soup. | <urn:uuid:658d25f4-18c2-4fa1-b0f9-d47e69fdf36f> | CC-MAIN-2022-40 | https://aerospace.cioreview.com/cxoinsight/navigating-data-soup-in-the-aerospace-industry-nid-8800-cid-5.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00388.warc.gz | en | 0.941143 | 1,191 | 2.84375 | 3 |
‘Digital Twin’ Initiatives Will Advance with Quantum Computing
(Entrepreneur) Will the digital twin initiatives – to better understand the physical world around us – get a boost from advancing quantum computing? NOTE: "A digital twin is a virtual model that mirrors a physical object or process throughout its lifecycle. Providing a near real-time bridge between the physical and digital worlds, digital twin technology enables you to remotely monitor and control equipment and systems."
The biggest benefit expected from quantum computers is the ability to perform complex computations, which even the supercomputers of today cannot perform. Qubits are the magic ingredient to deliver the high degree of computational power.
The real-time two-way communication and synchronization between the physical twin and the digital twin enables a variety of prognostics and diagnostics opportunities. The digital twin journey typically starts with simulations of critical assets or individual components. The larger goal is to simulate an entire process or a set-up (e.g., a manufacturing plant, smart city or a highway network), in the virtual world.
Classical computers will run out of steam in simulating and postulating millions of variables interacting with each other. This is an area where digital twins stand to benefit immensely from quantum computing. Coupled with the processing power of quantum computers, future digital twins could simulate billions of scenarios in the least time and guide humans toward the most optimal strategy.
Quantum Phenomena in Ordinary Activities & Objects
(Forbes) Quantum phenomena can be found in surprisingly ordinary activities. Author Chad Orzel describes three quantum phenomena and their manifestation in our daily life.
Quantum Tunneling: One of the most surprising predictions of quantum physics is the phenomenon known as “tunneling,” in which quantum particles have some probability of turning up in places that, classically speaking, they just don’t have enough energy to reach. This has a surprisingly direct application to an everyday technology, though, in the form of a smoke detector.
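For a rough sense of scale, here is a short Python sketch using the standard rectangular-barrier approximation T ≈ e^(−2κL), with κ = √(2m(V−E))/ħ. The barrier height, particle energy and widths below are illustrative values, not figures from the article.

```python
import math

hbar = 1.054e-34   # reduced Planck constant, J*s
m_e  = 9.109e-31   # electron mass, kg
eV   = 1.602e-19   # one electron-volt in joules

def transmission(E_eV: float, V_eV: float, L_nm: float) -> float:
    """Approximate tunneling probability through a barrier of height V, width L."""
    kappa = math.sqrt(2 * m_e * (V_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * L_nm * 1e-9)

# A 1 eV electron hitting a 2 eV barrier: thin barriers leak, thick ones don't.
for L in (0.1, 0.5, 1.0):
    print(f"L = {L} nm -> T ~ {transmission(1.0, 2.0, L):.1e}")
```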
Photons: The particle nature of light (light had been conclusively shown to behave as a wave in the early 1800s) was one of the most controversial features of quantum physics once it was introduced by Max Planck and Albert Einstein. Taking and sending pictures today is a quantum process that relies on the particle nature of light, because modern telecommunications networks rely on light pulses sent through optical fibers.
The Uncertainty Principle: The continued existence of solid objects is ultimately due to the Pauli exclusion principle, the weird phenomenon associated with spin that prevents multiple electrons from occupying the same state. This prevents solid objects from imploding to infinitesimal points with infinite negative potential energy.
As more of our work and personal lives have become digital, we’ve seen a staggering growth in the amount of data we’re generating, storing and accessing. According to various studies, Google processes 3.5 billion searches every day, while 4.3 million videos are watched on YouTube. More than 350 million photos are uploaded to Facebook every day. By 2025, it’s estimated that 463 exabytes of data will be created each day globally. And with around 40% of the world’s population still to be connected online, the amount of data we’ll need to store and manage will skyrocket further.
Data is now the common denominator which sits across everything organisations do. Whether it’s driving the day-to-day activities we all take for granted or providing the new insights which shape our thinking around some of humanity’s biggest questions, data augments and empowers human intelligence.
With all of this in mind, it’s likely that we’ll need to fundamentally reconsider the current data storage technologies that we have on hand. The staggering amount of data we’re generating is already causing challenges, with data centre technologies requiring significant power and cooling, as well as ongoing maintenance and monitoring.
We could be moving towards a huge bottleneck in the capabilities that are available, as both the volumes and speed of access to data increase further. What’s more, hardware such as servers, hard drives and flash storage can degrade. It seems unlikely at first, but there’s much we can learn from the natural world about data storage. The medium here is DNA, and when it comes to preserving and archiving vital information, it has an unbeatable track record.
Nature’s Storage Medium
One alternative to our current storage devices could be DNA-based data storage. Being ultra-compact and easy to replicate – thanks to its primary role in creating life – gives DNA two big advantages. One gram of DNA could potentially hold as much as 455 exabytes of data, according to the New Scientist. That’s more than all the digital data currently in the world, by a huge margin.
And while DNA is itself quite fragile, when stored in the right conditions it can be incredibly stable. Thousand-year-old fossilized remains have been found with DNA still intact. The longevity of cassettes and CDs just doesn’t compare, and so from an archiving and backup perspective, it could be the perfect material.
Progress on the technology has been extremely promising, with Microsoft and University of Washington researchers last year developing the world's first DNA storage device that can carry out the entire process automatically. Using the device, researchers encoded the word 'hello' onto DNA, and were able to convert it back to data readable by a computer.
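The core encoding idea can be illustrated in a few lines of Python: digital bits map onto the four DNA bases and back. The simple two-bits-per-base mapping below is only a sketch; real systems add error correction and avoid problematic base sequences.

```python
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def encode(data: bytes) -> str:
    """Map every two bits of input onto one DNA base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases rebuild one byte."""
    bits = "".join(BITS_FOR[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)          # CGGACGCCCGTACGTACGTT
print(decode(strand))  # b'hello'
```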
From DNA to Glass
In the race to find the data storage medium of the future, glass is another material in the running. Microsoft’s Project Silica, for instance, is a proof of concept that uses quartz glass as a storage medium. Lasers permanently change the structure of glass, making it possible to store data that can then be read by machine learning algorithms. By taking up a fraction of the space, and not requiring the climate-controlled storage or other regular maintenance of typical storage mediums, it holds immense promise for archiving and backup activity.
But while techniques might be steadily improving, the time and cost of decoding the information need to come down before DNA data storage can be used commercially. While scientists have been experimenting with storing digital data in DNA since 2012, for example, it took 21 hours for that 5-byte 'hello' message to be written and then read back out. However, progress is steady: it cost $100m in 2001 to sequence a human genome; today all it takes is two days and $1,000.
The business of backup could be transformed by DNA. Archives and data centres, and their immense physical footprints, could be eliminated. The sum of the world’s knowledge may well one day be stored on something you need a microscope to observe. And as we generate even more data, and reach the limit of our current storage technologies, the value of powerful alternatives will only become greater. Today’s complex backup efforts could be reduced down to a single record, created once, that lasts well beyond any living memory. The next generation of storage technology is in some ways already here – we just need to learn how to harness it. | <urn:uuid:f5125085-cbf2-401c-b098-4b5f6f99d383> | CC-MAIN-2022-40 | http://arabianreseller.com/2020/05/05/is-dna-the-next-generation-of-data-backup/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00588.warc.gz | en | 0.941017 | 927 | 3.34375 | 3 |
One of the biggest consequences to education in the last year has been how schools have struggled to build effective distance learning programs, while students have simultaneously struggled to adapt to learning from home.
But as the pandemic progressed, educational institutions and teachers began to gain comfort with the tools they needed to become effective online educators. Education technology (EdTech), already being adopted in various educational environments across the globe (albeit somewhat slowly), began a rapid ascent.
EdTech is an assembly of IT tools available to educators allowing them to enhance both in-person and remote learning. With EdTech solutions, educators can create more individualized learning experiences geared to engage students in ways that will resonate with them. When combined with the right infrastructure and equipment access, EdTech may also promote a more inclusive learning experience.
Not surprisingly, IT and cybersecurity learning programs have been generally receptive to EdTech. But the remote learning requirements of the Covid pandemic, coupled with the growing shortage of cybersecurity professionals, have renewed interest in how to use EdTech to facilitate cybersecurity instruction.
Why EdTech for Cybersecurity Training?
For years, there has been a shortage of trained cybersecurity professionals. While the shortage is good for professionals who have completed their training (cybersecurity has had 0% unemployment for nearly a decade), the shortage poses a threat to many industries and organizations.
Educational institutions and providers need to find innovative ways to increase the speed and accessibility of cybersecurity education. EdTech allows these institutions to broaden their student bases, provide engaging and effective training and shorten the training cycle compared to traditional four-year college degree programs. More students and shorter programs equals more talent in the cybersecurity job pool.
Improving Cybersecurity Instruction with EdTech
As remote learning became more widespread, educators faced difficulties in maintaining student engagement. In addition, some educators found that communication with remote students was strained and ineffective. EdTech has a number of features and tools that not only minimize or correct these issues, but are also especially advantageous for cybersecurity training.
Anytime, anywhere training is a tremendous benefit of EdTech. With it, educational institutions and providers can reach a broader range of students than with courses having standard learning hours. On-demand training also allows students to train at times that are most convenient for them, and at times when they learn most-effectively.
Keeping students engaged in fully remote learning environments has been problematic at best. It is difficult enough for students to pay attention to a teacher lecturing when they are in the controlled environment of a classroom. It is that much more difficult when the students attend remotely and are surrounded by the innumerable distractions of their home.
One EdTech solution teachers can apply in both in-person and remote environments is gamification. And gamification is not just for younger students; it is equally effective for adult training. Studies indicate that gamification of training at work results in substantially higher motivation during training (83% versus 28% for non-gamified training) and far less boredom (10% versus 49% for non-gamified instruction).
How could gamification work for cybersecurity instruction? Consider, for example, a series of challenges requiring students to identify potential cyberattacks and decide on the best measure for addressing them. Need to train students to test web vulnerability scanners? According to cybersecurity expert Mark Preston of Cloud Defense, web vulnerability scanners are “software that will automatically scan web applications and various websites to identify security issues, like potential vulnerabilities to specific attacks.” You can present students with a few scanner options and target sites and make testing and selection a game with goals and rewards.
Each vulnerability identification and solution can be assigned points, which the educator can use in several ways. For instance, students with the highest number of points could exchange them for free passes on additional homework assignments, or for extra credit.
With many college-age students having grown up as frequent players of interactive video games such as Fortnite, they are more open to competitive gaming where results are known to all. Badges, achievements and leaderboards now are likely to enhance learning motivation for these students, whereas in more traditional settings, disclosure of students’ relative positioning is considered detrimental to motivation, learning and self-esteem.
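A minimal sketch of how such point-based scoring and a leaderboard might be wired up in Python; the challenge names and point values are invented for illustration.

```python
from collections import defaultdict

# Illustrative challenges for a vulnerability-hunt exercise.
POINTS = {"spot_phishing_email": 10, "pick_right_scanner": 15, "find_sql_injection": 25}

scores = defaultdict(int)

def complete(student: str, challenge: str) -> None:
    """Award the challenge's points to the student."""
    scores[student] += POINTS[challenge]

complete("ana", "spot_phishing_email")
complete("ana", "find_sql_injection")
complete("ben", "pick_right_scanner")

# Leaderboard: highest score first, visible to the whole class.
for rank, (student, pts) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, student, pts)   # 1 ana 35 / 2 ben 15
```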
There is a growing recognition in the high-tech and IT industries that the traditional four-year degree program may be a relic. Companies such as Google, Microsoft and IBM have started to move away from degree requirements for many high-tech positions.
These companies recognize that many people have built substantial skills through personal experience by the time they leave high school. Minimizing the importance of degree requirements also broadens the pool of potential candidates for job openings – an important consideration now given the drastic shortage of trained cyber professionals.
Organizations are beginning to realize that training involving hands-on experience may be more important than structured learning. Cybersecurity bootcamps are taking their place as recognized sources of well-trained cybersecurity specialists. Hands-on bootcamp activities such as training in ethical hacking develop skilled cybersecurity practitioners, whether or not they have a degree.
EdTech allows educators to more effectively engage in role-based learning, which targets practical activities and instruction towards specific job activities. Role-based training has numerous benefits in the context of cybersecurity instruction, as well as more generally in skills-based education.
Because role-based learning tends to be more practical than theoretical, it enhances student engagement. Engaged students are more likely to complete a training program and to get more out of it. And with practical exercises under their belt, students of role-based learning environments are more ready to hit the ground running once they start a job.
Ease of Update
Cybersecurity is an area that evolves rapidly, with new attacks and new defenses popping up at dizzying speed. It is therefore important that educational programs adapt instruction just as quickly. It is also easy to update EdTech systems with respect to non-substantive changes, for example, changes in data privacy laws or advances in cybersecurity tools.
Updates are far simpler to effect (and probably far less costly) in the EdTech environment than in traditional instructional environments. Thus, students and educators alike can be certain that they aren’t working with outdated information.
Online skills assessments provide educators with valuable instantaneous information about each student’s progress and areas where the student needs focused learning. This allows the educator to more easily develop individualized learning plans for each student. Students also benefit from receiving instantaneous feedback and can tailor their own studies towards areas of weakness.
EdTech is a promising solution for providing effective remote-learning solutions during unexpected events like the Covid crisis. EdTech’s benefits, however, extend far beyond that narrow application.
Whether in-person or remote, EdTech provides a suite of features and tools that educators can use to invigorate the cybersecurity learning experience, train the next generation of cybersecurity professionals, and begin to narrow the existing talent gap. | <urn:uuid:8a8b2be0-f9c6-40c0-afcd-0d1f38cdf0bc> | CC-MAIN-2022-40 | https://www.cybintsolutions.com/6-edtech-tools-and-features-that-are-transforming-the-process-of-cybersecurity-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00588.warc.gz | en | 0.961859 | 1,437 | 2.984375 | 3 |
What the Web hides from us
Consider a walk through the jungle before machetes were invented and afterward. Before, the jungle would appear as a landscape that was mainly impassable except for the natural breaks that allow passage. After the machete, it looks like a sea of vegetation that can be traversed somewhat freely, albeit with great effort. The "breaks" now look like the long way around that slow passage, not the affordances that enable it. And, after the machete, the act of moving through the jungle now violently alters it according to our will.
This is a pretty crude example of what it means to talk about technology as revealing the world in particular ways, an idea that comes from Martin Heidegger's essay, "The Question Concerning Technology," published in 1954. In that essay, the German philosopher (or "thinker" as he would have said, which might sound like a more humble phrase but which he means as far grander) uses examples like dams and power plants. For Heidegger, those technologies reveal nature as something at the command of humans, a "standing reserve," or what we today would call a "resource." This, for Heidegger (and for most of us), diminishes the true power and beauty of nature.
The Internet also reveals the world in particular ways. But every revealing also conceals much else, which is another idea from Heidegger. Since so much time has been spent talking about what the Net reveals, let's ask what the Net conceals. Here are three examples.
Those who think of the Net as a series of "echo chambers" in which people only encounter other people with the same interests, beliefs and values think that the Net conceals the fact there are opposing viewpoints. But I think that that's not quite right. In many of the most rigorous echo chambers, the participants fully understand that there are opposing viewpoints. People in an echo chamber supporting a particular candidate or political cause certainly understand that they live in a world that contains people who disagree with them. Even people in echo chambers committed to crazy conspiracy theories organize themselves around the belief that they are the defenders of the truth in a world too close-minded to wake up and smell the extraterrestrial coffee. In short, echo chambers not only know there is disagreement in the world, they frequently define themselves by that fact of disagreement.
So, in an odd way, I think the echo chamber nature of the Net may actually be hiding the similarities among us more than the differences. When it's your echo chamber against everyone else's, the sense of there being an overall culture, or even an overall humanity, can get lost. The Net hides sameness more than it hides differences.
This is exacerbated by the fact of a second thing that the Net hides: our physical being. Online, all we get are some words or images. Worse, frequently those words and images have no context beyond themselves. When you are with someone in physical space, you cannot avoid the fact that the person is a complete being who is literally coming from somewhere and will be going somewhere else. Physical presence makes manifest the continuity of our stories. Reading a drive-by comment does not. This makes it easier not only to misunderstand what's being said, but also to invent your own context for the commenter that perhaps assumes that s/he's stupid, racist or a troll. That makes it easier to write off what they've said, and lets you write a corrosive reply. The presence of an embodied person makes that harder to do, if only because the person can react and reply. So the Net hides not only the context but that the context is actually the story of someone's lived life.
The Net also hides the authority of what's being said. It used to be that simply by being published, a work could be assumed to have some authority, especially if the publisher were prestigious. After all, would Oxford University Press have published a book if it had no merit? Of course, that authority was often overstated, and a lot of junk got published by serious publishers. Still, on the Net, being published indicates exactly zero about the merit of the content. Things that look authoritative—a serious font, a set of scientific diagrams, a domain name that sounds important—may be the mental droolings of a liar and an idiot. There is a constant jostling for position, to be noticed and to be believed, that turns the old orderly signifiers of authority into noise. The old system of authority was too narrow and way too smug, but the new system hides authority under bluster and self-assertion.
Those are just examples. The Net of course hides much more. And what it truly hides is so hidden that we can't even notice its absence. | <urn:uuid:c4e71aa2-3c4a-44e2-8ec0-19c5ef11e4d6> | CC-MAIN-2022-40 | https://www.kmworld.com/Articles/Columns/Perspective-on-Knowledge/What-the-Web-hides-from-us--91327.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00588.warc.gz | en | 0.967475 | 986 | 2.71875 | 3 |
Ransomware attacks just keep increasing: Vet practices aren’t immune
This cyberattack scheme isn't new, but it has become increasingly common over the past several years. Many of the viruses lurking out there steal data to be used for nefarious purposes. The goal has long been to access important financial and personal data that can be sold off. For example: credit card numbers that can be sold and used to buy things, or Social Security numbers that can be sold to create fake identities. In the case of many viruses, victims may never even be aware their data has been accessed. Typical malware and spyware tries to go undetected. Not ransomware. Ransomware generally does not access your data to sell off to criminals. Instead, the virus kidnaps your data until you pay ransom.
Ransomware stops you from using your PC, files or programs. It holds your data, software, or entire PC hostage until you pay a ransom to get it back. When an attack occurs, you suddenly have no access to a program or file – a screen appears announcing your files are encrypted and that you need to pay (usually in bitcoins) to regain access. In some cases there may be a nerve-wracking clock ticking down to the deadline for the ransom payment. Some versions are so sophisticated they even have mini call centers to handle your payments and questions.
Ransomware stands out from most viruses in that you really have no option once an attack has been made. You either pay up, or lose the data. The only sure answer is a safe, clean backup. In that case, you are stuck with the nuisance of restoring your data with the backup, but you aren’t out any money. However, this comes with a caveat: your backups have to be clean. The problem with ransomware viruses is that just making backups may not be sufficient to protect your data, as the backups can be infected also. The only answer is to be aware that these viruses are out there and that you have to make careful, specific plans to protect your data. It is important that your backup and disaster recovery plans are designed with a ransomware attack in mind. When it comes to making data security and disaster recovery plans you should consider bringing in experts with a strong background in this field. Lost data is not something any medical practice can easily recover from. | <urn:uuid:19469993-341c-4e89-a839-4745df2d5068> | CC-MAIN-2022-40 | https://compu-gen.com/ransomware-attacks-just-keep-increasing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00588.warc.gz | en | 0.958594 | 477 | 2.78125 | 3 |
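One concrete safeguard implied here is verifying that backups are still intact before trusting them. Below is a small Python sketch of the idea: record a checksum when each backup is written and re-check it before any restore. File names and the manifest location are illustrative.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("backup_manifest.json")  # illustrative location

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_backup(files: list) -> None:
    """At backup time, store a checksum for every backed-up file."""
    MANIFEST.write_text(json.dumps({str(p): sha256(p) for p in files}))

def verify_backup() -> list:
    """Before restoring, return files whose contents no longer match."""
    manifest = json.loads(MANIFEST.read_text())
    return [p for p, digest in manifest.items() if sha256(Path(p)) != digest]

# Usage: record_backup([Path("patients.db")]) after each backup run;
# an empty list from verify_backup() means the backup is still clean.
```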
Many challenges of the emerging future are unprecedented and have no known solution. Yet.
In addition, many solutions applied successfully at this moment may cause unanticipated challenges. These are called “wicked problems.” Although wicked problems are not new, the speed of emerging wicked problems occurring globally has increased exponentially. This is why the ability to adapt and innovate is needed more now in every industry and every single sector of society.
This, in turn, calls for people with diverse abilities and sets of skills, as well as the willingness to learn in every phase and area of life. Going beyond his initial theory, "survival of the fittest," Darwin pointed out that adapting is an essential element of survival. Adapting requires learning. And we often learn new things in life, not because we wanted to, but because we had to.
“It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change.” – Charles Darwin
Current escalations of global situations (e.g. the rapid development of advanced technologies and climate change) are forcing more people to learn new things more often and with greater urgency. Although there are several kinds and areas of education for everyone that will help, two float to the very top of the list: AI Ethics and Climate Sustainability.
AI Ethics: responsible use of artificial intelligence technologies. AI Ethics education is slowly becoming more widely available across sectors for people of all ages. Understanding the importance and knowing how this affects us is not just for engineers. Available AI Ethics courses and programs are increasing for adults, and educational materials are even being created for young children, with universities like MIT developing programs for kids K-12.
For anyone who says, "I don't need to deal with AI Ethics…I don't use artificial intelligence," they might want to think about how many times a day they use a GPS or Siri, or their organization tracks the trends of its clients, or each time we click "accept" on a website's privacy terms.
For those who say “Our IT people handle all our technology,” that’s not enough anymore. Although it is beginning to change, I was surprised to discover that most engineers didn’t get AI Ethics education as part of their training. Also, consider that many new technology and software developers are self-taught. When unintended consequences occur as a result of the design, development, or use of technology, it will be the leaders, managers, and users who will feel the consequences and/or be held responsible.
Climate Sustainability: a societal goal that focuses on the effects of climate change on humans ability to safely co-exist and thrive on planet Earth. This includes sustainability goals (e.g. UN SDGs, ESGs, B-Corps) and Regenerative Design.
For those who think, "My life or my industry is not affected by climate change," consider the materials we need in every single area of our lives that are sourced or produced in an area of the world that is not functioning because of astronomical temperatures, fires, increased flooding, or contaminated water supplies. Not understanding the effects of our human behaviors puts us at a serious disadvantage, increasing our reactivity and making it much harder to adapt in the most beneficial ways.
For those organizations and businesses that say “We’re doing ESG reporting, so we’re doing our part,” that doesn’t apply to the scope of factors that determine global well-being. Understanding the differences in the frameworks is vital. The SDGs are global goals set out by the United Nations to support the well-being of humans, living beings and the planetary environment, whereas ESG is a rating system used by companies to measure their environmental and social credentials. Being conscious of the differences impacts how effectively they are used.
There are several other types of education that will support flourishing in the future that are not new, but have been vastly ignored, even though many industry gurus have discussed them for years. These include skills and abilities that differentiate humans from machines. These will increase in demand as the world becomes increasingly dependent on technology (Multiple Intelligences: social, emotional, creative, etc.). And, although identical machines can be produced, each human is unique, with potential to discover unique solutions.
These skills and abilities also apply to effective leadership. Adaptive Leadership, a model and practical approach, for leaders solving issues and navigating in constantly changing situations and landscapes. The foundational pillars of the model needed for effective Adaptive Leadership are:
- Emotional Intelligence
- Organizational Justice
- Development (i.e. continuous learning)
- Character (i.e. applying a wide range of character strengths)
Created by Harvard professors, Ronald Heifetz and Marty Linsky, the model speaks to motivation and effectiveness, breaking down challenges into two problem-solving categories (technical and adaptive), acknowledging that ability to adapt is at the core of any thriving organization.
"The improvisational ability to lead adaptively relies on responding to the present situation rather than importing the past into the present and laying it on the current situation like an imperfect template." ~ Ronald A. Heifetz
When we actively and consciously continue to learn, it can be quite exciting. When we commit to learning something new, the challenge can result in an incredible reward. This is what renowned psychologist, Mihaly Csikszentmihalyi, described as the state of Flow. The experience of Flow supports one of the ultimate human desires, to have a sense of engagement and meaning, something we see lacking in so many sectors of life.
“Being in Flow is being completely involved in an activity for its own sake. The ego falls away. Time flies. Every action, movement, and thought follows inevitably from the previous one, like playing jazz. Your whole being is involved, and you’re using your skills to the utmost.”- Mihaly Csikszentmihalyi
The great news? Education can take place everywhere and at every stage in life!
No need to wait. What will you learn today?
By Marisa Zalabak
Educational Psychologist and Leadership Coach | <urn:uuid:754ba4cc-1ac3-4b11-b646-fbc10b4ffdca> | CC-MAIN-2022-40 | https://enterpriseviewpoint.com/education-for-the-future-isnt-just-in-school-and-isnt-just-for-kids/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00588.warc.gz | en | 0.953453 | 1,311 | 3.03125 | 3 |
Phishing is one of the most enduring security risks IT professionals face. It’s been a common avenue for credential theft for years — but despite all the notoriety, phishing campaigns continue to fool even the most vigilant among us. As attacks become more sophisticated, the security that prevents them is evolving as well, moving beyond network-based tools to an identity-based model.
Historically, phishing has been conducted by sending mass email campaigns designed to collect credentials to a broad group of people. The logic is that if a hacker can reach enough people with a phishing campaign, statistically someone will take the bait.Modern phishing is much more targeted, and social engineering is often involved. Through social engineering, attackers gather information about their targets through meticulous research and manipulative interactions. If they’re able to piece together enough information about an organization’s infrastructure and employees,
they may be able to pose as a legitimate user and infiltrate the organization's networks. The hacker may seem unassuming and respectable, possibly claiming to be a new employee, helpdesk contact, or contractor, and may even offer credentials to support that identity. This targeted form of social engineering is called spear phishing. Like any other phishing attack, the goal of spear phishing is to acquire sensitive information, install malware or steal credentials. Unlike other phishing attacks, however, spear phishing takes advantage of uniquely human traits — like habits, personal motivations and incentives — to encourage targets to fall for the attack.
Most modern phishing attacks occur in several phases — hackers start by gathering information about their targets to gain initial unauthorized access into an organization’s networks, and then escalate privileges as they traverse the networks.
80% of breaches involve brute force or the use of stolen passwords, per Verizon’s 2020 Data Breach Investigation Report (DBIR).
Hacking and phishing tools, along with documentation on how to use them, are readily available online — so launching an attack is easier than ever.
Over the years, phishing has become harder to spot. Rather than casting a wide net, attackers now target specific individuals.
Phishing is an insidious problem where attackers prey on human nature. [...] By relying on socially engineering their victims by using methods such as dark patterns, the attackers utilize their access to purloined credentials for financial gain in many cases. Security controls to limit the access of an attacker are essential to shore up enterprise defenses. — Dave Lewis, Global Advisory CISO, Cisco
Because it targets the unpredictable human element of security, phishing sounds scary — but it doesn’t have to be. With a few best practices in place, organizations can achieve phishing resistance and prevent unauthorized access.
Requiring multi-factor authentication (MFA) significantly reduces the risk of unauthorized data access, but not all authentication methods are equal. Using WebAuthn or FIDO2 security keys provides the highest level of assurance for secure access. Additionally, Verified Duo Push provides an extra layer of security by requiring users to input a unique code from the login device in the Duo Mobile app.
Single sign-on serves as a unified visibility and enforcement point for application-specific policies, while also enabling seamless access to multiple applications with a single set of credentials. With fewer credentials to remember, users are less likely to reuse or create weak passwords that can easily be targeted by hackers.
It’s hard to prevent access from devices you don’t know about. Visibility into all the devices accessing your resources is the first step in ensuring every access attempt is legitimate.
With many different devices accessing company resources, it’s important to ensure they’re all healthy and up-to-date. Compliant devices are less likely to create gaps in security, making them more difficult for hackers to exploit.
Ensure that the right users, with the right devices, are accessing the right applications. By creating granular security policies, you can enforce a least-privilege access model and ensure that users and their devices meet rigorous standards before they can login to critical resources.
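In practice, policies like these are evaluated as a set of checks at login time. Here is a rough illustration of the least-privilege idea; the attribute names and rules below are hypothetical, not any vendor's actual policy engine:

```python
def allow_login(user: dict, device: dict, app: str) -> bool:
    """Toy least-privilege check; every rule must pass before login succeeds."""
    return (
        user.get('mfa_verified', False)             # strong MFA completed
        and device.get('os_patched', False)         # device meets health standards
        and device.get('disk_encrypted', False)
        and app in user.get('authorized_apps', ())  # per-application authorization
    )

print(allow_login(
    {'mfa_verified': True, 'authorized_apps': {'payroll'}},
    {'os_patched': True, 'disk_encrypted': True},
    'payroll',
))  # True
```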
Utilize behavioral analytics to monitor the unique access patterns of your users. This practice helps you spot suspicious activity — and stop breaches before they happen.
Learn more about modern phishing attacks and what you can do to prevent credential theft.
Hackers can’t steal a password if there’s no password to steal. Passwordless authentication is becoming a viable and attractive way to reduce credential theft.
Phishing attacks depend on human behavior to be successful — so verifying user identities with strong MFA is the first step in preventing a breach.
Assigning access permissions by application ensures that your most critical resources are also your most protected.
Find out how targeted phishing attacks and breaches happen through one popular company's real-life experience, and see how the breach could have been prevented.
The Gutenberg printing press, which was invented around A.D. 1440, truly revolutionized the world. It allowed more people to have access to books, which until that time had to be manually copied by hand. Today the world is seeing another revolutionary advance in printing technology, but this time in 3D printing.
The concept of 3D printing isn’t that far removed from the traditional printing technique of casting ink onto a page. It just adds multiple layers that build up an object in three dimensions. In some cases, it relies on multiple compounds that bond together to create a 3D object.
3D printing isn’t actually new; it has existed for more than a decade. However, researchers at the Vienna University of Technology have developed a 3D printer with nanoprecision that uses two-photon lithography.
Essentially, the 3D printer uses a liquid resin that is hardened with a focused laser beam. This results in a solid polymer just a few hundred nanometers wide, making it possible to print an intricately structured sculpture as tiny as a grain of sand, and to do so very quickly.
What this means is that 3D objects can be printed orders of magnitude faster than on other devices, opening up new possibilities for applications.
From Vision to Real-World Applications
Although 3D printing is considered on the cusp of mainstream, real-world design applications, it has so far been an innovative technology with somewhat limited uses, with many industries relying on 3D printers mainly for the development of prototypes.
While many objects have been modeled in clay or foam, these have been crafted by hand; and whereas a 3D model could be designed on a computer, it still lacked a real-world counterpart. The innovation in 3D printing has allowed the two worlds to merge.
One industry that has relied heavily on 3D printing is the bicycle industry, which uses it to create models for frames that are later crafted in carbon fiber.
“Aerodynamics have become a leading development consideration in the world of bike and bike component design, and many of the top manufacturers use 3D printers to create multiple variations of a component or frame for use in wind tunnel testing,” Matthew Pacocha, U.S. editor of BikeRadar.com, told TechNewsWorld. “Companies like Trek, Specialized, and Scott have 3D printers on site, and Trek even uses theirs to make full-scale samples of their new frames to prove graphic placement and color schemes.”
3D printing may help design the products consumers use, but more importantly for the researchers at the University of Vienna, the technology could have life-altering benefits.
“The major area for application is medicine,” Florian Aigner, science editor at the Vienna University of Technology, told TechNewsWorld. “With this technology, it should be possible to create scaffolds for living cells to attach and create a tailor-made tissue — and for this, accuracy on a very short-length scale is required. This is why our chemists are currently trying to find resins with better biological properties.”
The printers can now print out at what could be considered lightning speed, but development was not so quick.
“At TU Vienna, we develop resins explicitly for about seven years. However, we could use our knowledge from synthesizing resins for stereolithography, which we develop much longer,” said Jan Torgersen, a researcher in additive manufacturing technologies at Vienna University of Technology.
The technique already showed good applicability for fabricating 3D environments for cells, Torgersen told TechNewsWorld.
“These so called scaffolds are polymeric constructs where cells can stick onto,” he added. “The technique is precise enough to make parts resembling the 3D biological environment. Depending on the geometry of the scaffold and the type of cell, custom-made biological tissue can be made.”
New Layers to the Technology
The use of the resin and laser technique could also change the way that 3D printing evolves.
“As this technique is the first true 3D lithography, it is not necessarily limited to a layer-by-layer fabrication process, and it is thus capable of embedding and connecting objects in 3D,” said Torgersen. “We already successfully fabricated waveguides in an existing matrix. These waveguides are very promising for various optoelectronic applications.”
This could open the door to 3D taking off in the home — perhaps in ways 3D TV did not.
“I’m starting to see home 3D printing moving from ‘Wow! I can print out things in three dimensions, like bottle openers and miniature models of my head,’ to ‘Wow! I can print out useful things in three dimensions, like pendulum clocks and quadcopter robots,’” Mark Frauenfelder, editor-in-chief of MAKE magazine, told TechNewsWorld. “Eventually, 3D printers will be commonplace in homes, offices and schools, because they’ll prove to be so useful. I’m jealous of my kids!”
What is Credential Stuffing?
Credential stuffing is a subset of account takeover attacks, where bots are deployed to constantly try different username/password combinations at scale until a match is found. With this information readily available to fraudsters for purchase - due to personal information being exposed from years of data breaches - credential stuffing plays a key role in carrying out account takeover (ATO) attacks.
Credential stuffing has become a vital ingredient of successful fraud attacks, which makes stopping it so crucial. Fraudsters will often use automated scripts to power credential stuffing, such as to fill in login forms en masse with validated credentials. They also utilize human sweatshops to launch these attacks, which enables them to evade defenses that specifically look for signs of automation, or even conduct the attacks themselves for high-value accounts.
Once credential stuffing is used to successfully compromise an account, the fraudster has a wide range of options: simply stealing money or information from the account, using it for downstream fraud such as phishing or sending spam messages, laundering stolen money, or reselling the successfully validated credentials as a list in its own right.
What Are the Steps of a Credential Stuffing Attack?
There are three main steps to a credential stuffing attack:
Step 1: Acquire credentials. Fraudsters first need the raw material - a list of valid emails, usernames and passwords - to work with before carrying out credential stuffing attacks. These are most commonly and easily acquired on either the dark or public web via lists sourced from various data breaches. Credentials can also be stolen using methods such as phishing, malware, or social engineering attacks.
Step 2: Validate credentials at scale. After data is acquired, fraudsters use credential stuffing techniques to find the right combinations and gain access to accounts. To do this at scale, fraudsters commonly use bots: they simply enter the list of stolen credentials into a tool, configure the proxies, define the target, and sit back and launch the attacks. As noted earlier, for more nuanced attacks, human sweatshops may be used, or lone fraudsters may even do it themselves.
Step 3: Monetize the compromised accounts. This can be done in a number of ways, such as fraudsters stealing money or utilizing the information from the account itself to carry out further attacks. Fraudsters can also resell lists of known validated credentials after the credential stuffing attack determines the right username-password combinations. The attack will help extract a list of credentials that have been verified against a specific website, which can then be resold on the dark web.
Why Are Credential Stuffing Attacks on the Rise?
Simply put, there are so many usernames, emails, and passwords available for purchase on the web that fraudsters can easily obtain them, and launching credential stuffing attacks is then as simple as inputting data into an automated program. With a minimal amount of effort and money, fraudsters can launch credential stuffing attacks at scale, testing thousands - or even millions - of combinations in minutes.
The combination of easily available data, and cheap and easy-to-use bots, makes for an environment ripe for credential stuffing attacks. According to the FBI, a whopping 41% of all financial sector attacks between 2017 and 2020 were due to credential stuffing, resulting in the theft of millions of dollars. The FBI further notes that this situation is exacerbated by consumers continually using the same login credentials.
"When customers and employees use the same email and password combinations across multiple online accounts, cybercriminals can exploit the opportunity to use stolen credentials to attempt logins across various sites," says a Sept 2020 memo. According to a 2020 survey conducted by a data analytics firm, nearly 60% of the respondents reported using one or more passwords across multiple accounts. "When attackers successfully compromise accounts, they monetize their access by abusing credit card or loyalty programs, committing identity fraud, or submitting fraudulent transactions such as transfers and bill payments."
All these factors add up to a scenario where credential stuffing attacks are frequent and often successful.
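A first-line countermeasure to the reuse problem is screening new or changed passwords against known breach corpora. The sketch below uses the public Have I Been Pwned range API, which takes only the first five characters of a SHA-1 hash, so the password itself never leaves your system:

```python
import hashlib
import requests

def times_breached(password: str) -> int:
    """Return how many times a password appears in known breaches (0 = not found)."""
    sha1 = hashlib.sha1(password.encode('utf-8')).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f'https://api.pwnedpasswords.com/range/{prefix}', timeout=10)
    resp.raise_for_status()
    # The API returns lines of "HASH_SUFFIX:COUNT" for every hash with that prefix
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(':')
        if candidate == suffix:
            return int(count)
    return 0

if times_breached('password123') > 0:
    print('Reject: this password appears in known breach data.')
```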
Limitations of Current Fraud Defense Approaches
Clearly, identity-based fraud detection efforts are obsolete now. This is because not only can fraudsters easily obtain a good user's login credentials and use them to appear to be 'good' themselves, but they can also quite easily hide their true location and intent. They do this with tactics such as IP spoofing, device obfuscation, and many others.
Most data-driven risk decision engines are geared towards extremes, looking for users that display clear 'trust' or 'mistrust' signals. They, therefore, struggle with the new reality in fraud, where digital identities have been corrupted and intent can be faked.
There is an increasingly gray area due to unpredictable behavior from good customers and sophisticated spoofing and cloaking techniques from fraudsters leveraging stolen personal data. If one factor is off for a good user, for any number of legitimate reasons, it can throw the whole fraud prevention model off balance. Fraudsters understand how these fraud defense systems work and use this knowledge directly against the businesses they attack. The balance of power, in these scenarios, is unfortunately with the bad guys.
Smarter Authentication with Targeted Friction
Rather than constantly playing a losing cat and mouse game with fraudsters, businesses need a long-term approach to disrupt fraud and put a stop to these large-scale attacks. That's why Arkose Labs combines risk-based decisioning with intelligent step-up and clarifies whether or not a good customer's digital footprint has been corrupted by the fraudsters.
The Arkose Labs platform performs sophisticated real-time analysis of traffic to look for even the most subtle indicators of fraud. However, this is done without collecting large sets of personal information, as they can cause a privacy and compliance headache. Instead, the platform focuses on behavior, device, and network characteristics and how they are connected.
Classify and Triage
Multiple risk scores can become difficult to action. Instead, Arkose Labs classifies and segments traffic based on the risk profile. Triaging traffic, based on whether it is likely to be legitimate, a bot, or human sweatshop, provides actionable intelligence that can inform the system of any secondary screening required and the type of enforcement required.
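As a simplified illustration of the triage idea, consider a toy classifier over a handful of session signals. The signals and thresholds here are invented for illustration and are not Arkose Labs' model:

```python
def triage(session: dict) -> str:
    """Toy traffic triage: returns 'bot', 'sweatshop' or 'legitimate'."""
    if session['requests_per_minute'] > 100 or not session['executes_javascript']:
        return 'bot'         # machine-speed or headless traffic
    if session['accounts_touched'] > 5 and session['ip_reputation'] == 'proxy':
        return 'sweatshop'   # human-paced, but many accounts behind proxies
    return 'legitimate'

print(triage({'requests_per_minute': 3, 'executes_javascript': True,
              'accounts_touched': 1, 'ip_reputation': 'clean'}))  # legitimate
```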
Challenge and Interact
To understand the intent of traffic in a deterministic way, secondary screening must be paired with risk assessment. Arkose Labs' custom enforcement challenges test high-risk traffic using interactive technology that causes all automated attacks to fail. Graduated risk-based challenges can also frustrate fraudsters by increasing the amount of friction they experience, leading them to abandon their attacks.
Arkose Labs clients can reap the benefits of a fraud prevention system, which combines risk assessments with challenges, by leveraging a continuous feedback loop to improve fraud detection rates, while decreasing challenge rates for good users. Embedded machine learning will provide advanced anomaly detection and evolving protection, taking the burden away from in-house teams.
Businesses need to take a robust stance against fraud to safeguard their revenue streams and maintain customer trust. This multi-step approach ensures comprehensive fraud protection that safeguards businesses long term. Click here to learn more about how Arkose Labs can help fight fraud without impacting the user experience.
The on-again, off-again relationship between Apple and Microsoft began in earnest in the late 1970s, during the dawn of the PC era. The hot and cold periods were often tied to the personalities of the companies’ founders and longtime leaders: Steve Jobs and Bill Gates. Today, Apple and Microsoft are much different companies, with new leaders who don’t lug along the baggage that comes as a result of nearly 40 years of fierce competition.
The following events, from periods of harmony and acrimony alike, define the long, rocky relationship between the two modern-day technology giants.
Youthful innocence of the early ’80s
During the first Macintosh’s development and early years of production, Microsoft was a critical Apple ally. The software pioneer created important programs for Apple’s PC in the early ’80s. “We had more people working on the Mac than [Jobs] did,” Gates said of the early years, according to Walter Isaacson’s biography, Steve Jobs.
At an Apple event in 1983, Gates told attendees Microsoft expected to earn half of its revenues selling Macintosh software the following year. And when Jobs asked Gates if he thought Mac would become another standard in personal computing, Gates praised the platform: “To create a new standard it takes something that’s not just a little bit different, it takes something that’s really new and really captures people’s attention. And the Macintosh, of all the machines I’ve seen, is the only one that meets that standard.”
Isaacson’s book also detailed just how quickly the relationship between the two companies soured when they started to develop competing OS software with graphical interfaces. Jobs lashed out at Gates during a meeting later that same year and equated Microsoft’s plans for Windows to theft. (Xerox PARC originally developed graphical interfaces, not Apple, but for Jobs that was beside the point.)
Isaacson also described the Microsoft founder’s response to Jobs’s accusations. “Gates just sat there coolly, looking Steve in the eye, before hurling back, in his squeaky voice, what became a classic zinger. ‘Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it.'”
12 years of Apple without Jobs
Before the end of 1985, Windows 1.0 was released, and Jobs was ousted from Apple, the company he founded nine years earlier. Microsoft then went on to dominate the PC industry while Jobs founded NeXT, where he spent the following 12 years building computer workstations for higher education and business.
Jobs didn’t pry significant market share away from Microsoft during his time with NeXT, but he did continue to regularly lob insults at Microsoft. “The only problem with Microsoft is they just have no taste,” Jobs said in the 1996 “Triumph of the Nerds” TV documentary. “They have absolutely no taste. And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.”
Jobs also said Microsoft’s applications for Mac were “terrible” and called the company’s products “third rate.” He eventually spoke to The New York Times about the documentary and his decision to call Gates and apologize for his comments, but his statement felt disingenuous. “I told him I believed every word of what I’d said but that I never should have said it in public,” Jobs told the newspaper.
An awkward period of harmony
When Apple acquired NeXT in 1997 and brought Steve Jobs back into the fold, the company was in disarray amid growing uncertainty about the future of Microsoft Office for Mac. During his keynote address at the Macworld Expo that year, Jobs extolled the virtues of partnering with industry leaders and spoke of the need to improve Apple’s partner relations.
Jobs suggested Apple needed help from others and that destructive relationships weren’t helping any tech companies. Then he announced a new pact between Apple and Microsoft, which was received with a chorus of boos from the audience.
The previously unimaginable deal had many components, including a perpetual cross license for all existing patents and patents issued during the next five years; IE became the default browser on Mac; Microsoft said it would release Office for Mac for the next five years; and Gates’s company invested $150 million in Apple.
“Microsoft is going to be part of the game with us as we restore this company back to health,” Jobs said before asking Gates to address the crowd via satellite. The “kumbaya moment” continued when Gates told attendees that some of the most exciting work of his career was done with Jobs on the Mac.
“We think Apple makes a huge contribution to the computer industry,” Gates said. “We think it’s going to be a lot of fun helping out.”
When Gates finished and the attendees’ collective blood pressure dropped, Jobs tried to frame the deal in a way Mac users and Apple fans could appreciate. “We need all the help we can get. If we screw up and we don’t do a good job, it’s not somebody else’s fault, it’s our fault,” Jobs said. “I think if we want Microsoft Office on the Mac we better treat the company that puts it out with a little bit of gratitude. We like their software. The era of setting this up as a competition between Apple and Microsoft is over as far as I’m concerned.”
However, history hinted at what was to come, as Apple inched toward dominance and eventually overtook Microsoft in the computing market by 2010.
A full decade at odds
By the mid-2000s, Jobs had returned to his old ways, and he chastised Microsoft for “copying Google and Apple” despite having a $5 billion annual research and development budget.
A seminal moment occurred in 2007 when Gates and Jobs jointly took the stage for an interview at the D5 conference. The appearance is considered one of the most significant moments in the history of technology. But before the panel with Gates, Jobs couldn’t resist taking a jab at Microsoft during a one-on-one interview with journalist Walt Mossberg. “We’ve got cards and letters from lots of people that say iTunes is their favorite app on Windows,” Jobs said. “It’s like giving a glass of ice water to somebody in hell.”
Later, the joint interview kicked off with a question about the companies’ contributions to the PC and technology industries. Jobs leaned forward, scratched his head, and said: “Bill built the first software company in the industry … Bill was really focused on software really before anybody else had a clue it was the software.”
Gates offered clear praise in his response. “What Steve’s done is quite phenomenal,” he said.
Jobs warmed up eventually and acknowledged the long, rocky relationship between the two men when asked to describe their greatest misunderstanding. “I think of most things in life as either a Bob Dylan or a Beatles song, but there’s that one line in that one Beatles song — ‘You and I have memories longer than the road that stretches out ahead’ — and that’s clearly very true here,” he said.
Opportunities in enterprise rekindle old flames
A new era of partnership buoyed by opportunities in the enterprise blossomed during the past couple years. With new leaders at the helm of both companies came an end to the insults and grandstanding … for now, at least. The past month saw three separate events that demonstrate the companies’ new commitment to compromise and combined efforts.
At Apple’s September 2015 new product event in San Francisco, the company invited a Microsoft executive on stage to demonstrate Office 365 apps working in split-screen mode on an iPad Pro. At Salesforce’s Dreamforce conference a week later, Microsoft CEO Satya Nadella demoed the company’s iOS apps on an iPhone. And finally, during a keynote at cloud-storage company Box’s BoxWorks conference in late September, when asked about the company’s renewed relationship with Microsoft, Apple CEO Tim Cook said he doesn’t believe in holding grudges.
Microsoft’s surprise on-stage presence at an Apple product event showed just how cordial relations have become between the two tech giants. When Nadella used an iPhone on stage at Dreamforce, he acknowledged such a thing would have been unheard of in the past but also used the opportunity to pump up his own company. “I like to call it the iPhone Pro because it has all the Microsoft software and applications on it … It’s pretty amazing.”
Such friendly exchanges between frequent foes might never have taken place if not for the opportunities both companies see in the business market. Apple’s new pursuit of enterprise, paired with Nadella’s penchant for a partnership-friendly corporate culture, let this new relationship flourish.
“If you think back in time, Apple and IBM were foes. Apple and Microsoft were foes,” Cook said at BoxWorks. “Apple and Microsoft still compete today, but frankly Apple and Microsoft can partner on more things than we could compete on, and that’s what the customer wants.”
Apple and Microsoft might not ultimately share the same sentiment on the consumer side of things, but the leaders of both companies determined the enterprise is an area in which they want to cooperate and better cater to customers. However, nothing in the enterprise is ever a sprint, and the hatchets will need to remain buried for years to come if the two companies hope to realize the fruits of their collective labor.
You already know that Pandas is a powerful tool for data munging. In this tutorial, I will show you how to explore a data set using Pandas, Numpy and Matplotlib.
My goal for this project is to determine if the gap between Africa/Latin America/Asia and Europe/North America has increased, decreased or stayed the same during the last two decades.
So let’s get started.
Loading Files Into IPython Notebook
Using the list of countries by continent from World Atlas data, I am loading the countries.csv file into a Pandas DataFrame using pd.read_csv, and I name this data frame count_df.
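In code, that first load looks like this (the file is assumed to sit in the working directory):

```python
import pandas as pd

# Countries mapped to their continents, from the World Atlas data
count_df = pd.read_csv('countries.csv')
```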
I am loading the gapminder.xlsx file as a pandas DataFrame.
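Reading the Excel file works the same way (depending on your pandas version, pd.read_excel may need the openpyxl or xlrd engine installed):

```python
# Income per person (GDP per capita) by country and year, from Gapminder
complete_excel = pd.read_excel('gapminder.xlsx')
```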
Transforming the data
In this section, I am going to transform complete_excel data frame to have years as the rows and countries as the columns.
I will explain what is happening in the code line by line:
complete_excel[complete_excel.columns[0]] returns the first column of the complete_excel data frame, and then I am setting the column gdpc2011 as the index of my data frame. But I don’t want my index and the first column to be the same, so I am going to delete this column. I am deleting this column using the drop command.
transformed = complete_excel.drop(complete_excel.columns[0], axis=1)
After deleting the gdpc2011 (GDP per capita) column, I am converting the year values from float to integers. If you want to know how a map statement applies to a data frame, you can read my detailed explanation here.
transformed.columns = list(map(int, transformed.columns))
Now I transpose this data frame so that the years become the rows:
transformed = transformed.T
Plotting a Histogram
I am plotting a histogram for the year 2000. Here I am using dropna to exclude missing values for the year 2000. Also, .loc (the replacement for the now-deprecated .ix indexer) enables me to select a subset of the rows and columns from a DataFrame.
I am using log scale to plot the values.
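Putting those steps together, the histogram code looks roughly like this (the bin count and axis labels are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Select income per person for the year 2000 and drop missing values
income_2000 = transformed.loc[2000].dropna()

# Plot the distribution on a log10 scale
np.log10(income_2000).hist(bins=20)
plt.xlabel('log10(income per person), year 2000')
plt.ylabel('number of countries')
plt.show()
```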
Merging data frames
I am using the merge function to merge two data frames (data1 and count_df).
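Here is a sketch of that step, assuming count_df has Country and Continent columns (the key column names may differ in your copy of the data):

```python
# Reshape so each row is a country again, with years as columns
data1 = transformed.T
data1.index.name = 'Country'
data1 = data1.reset_index()

# Attach each country's continent from the World Atlas list
merged = data1.merge(count_df, on='Country', how='inner')
```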
Using Box plot for further exploration
I am generating box plots to explore the trends for the years 1900, 1990 and 2003. I encourage you to explore the trends for the years 1950, 1960, 1970, 1980, 1990, 2000 and 2010; you can use the years = np.arange(1950, 2020, 10) statement to do that.
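Here is one way to write that loop, using the merged frame from the previous step (column and label names assumed):

```python
years = np.arange(1950, 2020, 10)  # 1950, 1960, ..., 2010

for year in years:
    merged.boxplot(column=year, by='Continent', rot=45)
    plt.title(f'Income per person by continent, {year}')
    plt.ylabel('income per person (dollars)')
    plt.yscale('log')
    plt.show()
```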
If you explore the changes from 1950 to 2010, you can see that in most continents (especially Africa and Asia) the distribution of incomes is highly skewed: most countries are in a group of low-income states, with a fat tail of high-income countries that remains approximately constant throughout the 20th century.
Now that you know how to explore data using Python, you are ready to start. You know everything from how to load data into Python to how to clean, visualize, and draw insights from data.
Here is a simple exercise for you to improve your data exploration skills.
Consider the distribution of income per person from two regions: Asia and South America. Estimate the average income per person across the countries in those two regions. Which region has the larger average of income per person across the countries in that region? (Use the year 2012). Also create boxplots to see the income distribution of the two continents on the dollar scale and log10(dollar) scale.
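If you get stuck, here is a starting point for the averages (it assumes the merged frame built earlier, with a Continent column and integer year columns):

```python
asia = merged.loc[merged['Continent'] == 'Asia', 2012].dropna()
south_america = merged.loc[merged['Continent'] == 'South America', 2012].dropna()

print('Asia mean income, 2012:', asia.mean())
print('South America mean income, 2012:', south_america.mean())
```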
If you have any additional questions, please let me know.
Are you looking to learn ethical hacking fast? Check out the realistic online simulations you can play right now to build your skill set!
Does looking for clues in digital crime scenes or finding ways to break into networks sound exciting? Then ethical hacking might be the perfect job for you!
Ethical hacking plays a significant role in keeping our digital world safe. Take your first steps towards learning a skill that can benefit your everyday life and open the door to many fascinating careers!
Read on and discover how to learn ethical hacking online in 2022.
What is ethical hacking?
Ethical hacking is the act of breaking into digital systems such as networks and computers to:
- Test a person or organisation’s digital defences
- Uncover potential security weaknesses
- Gain access to a digital crime scene or criminal data
Ethical hacking is normally carried out in a professional setting under strict controls and is deemed ethical because it improves cyber security and can help catch criminals.
How to learn ethical hacking online in 2022
The best way to learn ethical hacking is by jumping in and giving it a go! CyberStart allows you to try ethical hacking straight away in a fun and safe environment.
As you play through CyberStart’s gamified challenges, you’ll learn how to ethically test for software vulnerabilities and investigate crimes by breaking into cyber criminals’ systems.
Check out the free ethical hacking challenge below that you can play right now by signing up for a free CyberStart account - no commitments or payment details required!
Intern L03 C02 - Maths at Light Speed
The brief in the Maths at Light Speed challenge says we need to break into a warehouse that’s believed to contain clues to the whereabouts of a cyber criminal gang.
Before we can enter the warehouse, we must bypass the security system.
But there’s a catch. You only have 0.1 seconds to answer the question asked by the gateway!
Top tip to solve Maths at Light Speed
To solve this challenge, you’ll need to access the source code.
Once you’ve found the source code, see if you can spot what happens to the code when you ‘spin for question’.
Maths at Light Speed shows you how to analyse source code to get past security systems.
You’ll gain practical ethical hacking experience while uncovering how cyber criminals may use this technique for a cyber attack.
Want more ethical hacking challenges?
You’ll find hundreds more challenges, three bigger bases and full access to the Field Manual when you upgrade your CyberStart account.
Check out one of the ethical hacking challenges you’ll encounter in HQ base when you upgrade your CyberStart account.
HQ L04 C11 - Cookie Jar
In the Cookie Jar challenge, you’ve just logged into the Choppers Gang’s intranet.
See if you can log in as an admin to find more information that the Choppers might be hiding.
Top tip to solve Cookie Jar
You’ll need to change the session cookie to log in as an admin user. See a cookie anywhere on the page? Try clicking that!
The challenge brief will give you more information on what you need to change the cookie value to.
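Outside the game, the same idea can be reproduced in a few lines of Python. The sketch below sends a request with a tampered session cookie; the URL and cookie name are placeholders, and you should only ever try this against systems you are authorized to test:

```python
import requests

# Hypothetical target and cookie name - the challenge brief tells you the real values
url = 'https://intranet.example.com/admin'
response = requests.get(url, cookies={'session_user': 'admin'})

print(response.status_code)
print(response.text[:200])  # peek at the start of the returned page
```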
GPS is an entirely different technology compared to GPRS and GSM. GPS is used for satellite navigation, whereas GPRS and GSM are mobile communications technologies. However, since these acronyms sound a bit similar, it is possible to get confused between them.
GPRS or General Packet Radio Service is a cellular technology enhancement in 2G GSM networks that enables mobile data on our cell phones; GPS or Global Positioning System is a satellite navigation system that allows users with a GPS receiver to know their exact location through satellite signals.
In straightforward terms, GPRS is a mobile network technology that allows cell phone users to access the mobile internet on GSM phones, whereas GPS is a navigation system that uses satellite technology to help people navigate their way globally.
| GPS | GPRS | GSM |
| --- | --- | --- |
| Stands for Global Positioning System | Stands for General Packet Radio Service | Stands for Global System for Mobile Communications |
| Satellite navigation system | Cellular technology enhancement | Cellular technology |
| GPS is not a mobile communications technology | GPRS is an enhancement in 2G GSM mobile networks | GSM is the most widely deployed second-generation (2G) technology standard |
| Enables navigation on smartphones and other receivers | Enables low-speed mobile internet on 2G GSM phones | Allows voice, SMS and mobile internet on 2G GSM phones |
GPRS and GSM offer mobile coverage and GPS offers navigation
GPRS and GSM are technologies that deal with mobile communications services, whereas GPS is about satellite navigation services. GPRS and GSM can be accessed via GSM-compatible mobile phones, including smartphones and feature phones. GPS can be accessed via purpose-built GPS receivers, including smartphones.
GPS stands for Global Positioning System, a satellite navigation system that allows users with satellite receivers (GPS receivers or Sat Nav) to know their exact location through satellite signals. GPRS is a packet-switched mobile data (internet) technology that allows second-generation GSM networks to offer mobile internet on 2G phones.
GSM or Global System for Mobile Communications is the most widely deployed second-generation (2G) cellular technology. Other second-generation (2G) cellular technologies include Digital Advanced Mobile Phone System (D-AMPS) and Interim Standard 1995 or IS-95 (cdmaOne).
GPS uses satellite technology to provide navigation services
GPS or Global Positioning System utilises satellite systems to deliver navigation services through GPS receivers. Most smartphones nowadays have GPS receiving technology built into them, allowing them to communicate with the satellites and run computations to identify the phone’s location.
GPS receivers are also available in the market as purpose-built GPS units or SatNavs to help us get driving directions on the go. The purpose-built GPS receivers are called Sat Nav in the UK and GPS in the rest of the world, including the US. The cost of the GPS receivers has come down significantly, which has made the technology more affordable and accessible to the vast majority of people.
Satellites transmit timing signals and their orbital parameters, which helps receivers calculate the distance between the receiver and the serving satellite based on the time it takes to receive the transmitted signal. So the GPS receiver’s location is always related to the position of the satellites it is communicating with.
The US military initially introduced GPS in 1978, and it is a constellation of satellites that can provide two-dimensional and three-dimensional locations worldwide. A constellation of 18 satellites is needed to give a 2-D location, i.e. longitude and latitude, whereas a constellation of 24 satellites can provide 3-D locations, i.e. longitude, latitude and altitude.
GPS receivers need simultaneous signals from at least three (3) satellites to calculate the two-dimensional location and at least four (4) or more satellites to calculate the three-dimensional location.
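The underlying distance arithmetic is straightforward: the signal travels at the speed of light, so the range to a satellite is the travel time multiplied by that speed. A quick illustration (the travel time here is just an example value):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

travel_time_s = 0.07  # example signal travel time from satellite to receiver
distance_m = SPEED_OF_LIGHT * travel_time_s

print(f'Distance to satellite: {distance_m / 1000:,.0f} km')  # ~20,985 km
```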
As far as the GPS capability on smartphones is concerned, it does not directly depend on the cellular technology that the phone uses. However, the in-built GPS receiver on the mobile phone needs the internet to load the map (e.g. Google maps).
When you are outdoors, i.e. without your home Wi-Fi network, the internet on the phone is enabled by mobile data through cellular technologies like GSM, GPRS, EDGE, UMTS, HSPA, IS-95, cdma2000, EVDO, LTE or NR. The positioning of your current location on the map is controlled by satellite technology.
GPRS enables mobile internet in GSM-based 2G networks
GPRS, or General Packet Radio Service, is an enhancement that was introduced towards the end of the 1990s in the most widely deployed second-generation (2G) GSM networks. GPRS introduced packet-switched mobile data (mobile internet) services in GSM networks.
GSM is a cellular technology standard introduced in Europe in the early 1990s to start the digital era in mobile communications. GSM has seen worldwide network deployments and is the most commonly available 2G standard in the world. GSM networks have a circuit-switched part that delivers voice calling and text messaging.
The packet-switched part is introduced in GSM through GPRS. Before GPRS, another technology called HSCSD (High-Speed Circuit-Switched Data) was employed by GSM networks to offer limited data services. However, HSCSD was based on conventional circuit-switched technology and was not efficient enough for providing data services.
Since GPRS is based on packet-switched technology and does not engage radio network resources permanently, the charging is not based on connection duration but on the amount of transmitted data.
The latest cellular technology today is New Radio (NR) which enables the fifth-generation (5G) of mobile networks. However, 5G NR does not replace the earlier cellular technologies in the short term. 5G will co-exist with 4G LTE (Long Term Evolution) for a long time, and the 2G and 3G cellular technologies, including GSM, UMTS, IS-95 and cdma2000, are still present in most parts of the world.
GPRS is shown on a mobile phone screen as “G”, whereas its enhanced version EDGE (Enhanced Data rates for GSM Evolution) is shown on phone screens as “E”. GPRS offers a peak download data rate of up to 171.2 kbps, which rises to 384 kbps with EDGE. 4G is shown on the mobile phone as 4G, 4G+, LTE or LTE+.
The average data rates are around 30-50 kbps with GPRS and 130-200 kbps with EDGE. I have written a dedicated post on the data speeds with 2G, 3G, 4G and 5G networks that goes into more detail.
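To put those figures in context, here is a quick back-of-the-envelope calculation of how long a 5 MB file download takes at typical GPRS and EDGE rates:

```python
file_bits = 5 * 8 * 1_000_000  # a 5 MB file, using 1 MB = 10^6 bytes

for label, kbps in [('GPRS (~40 kbps)', 40), ('EDGE (~150 kbps)', 150)]:
    seconds = file_bits / (kbps * 1000)
    print(f'{label}: about {seconds / 60:.1f} minutes')
```

At roughly 40 kbps, that 5 MB file takes close to 17 minutes, which is why GPRS was only ever suited to light browsing and email.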
Here are some helpful downloads
Thank you for reading this post. I hope it helped you in developing a better understanding of cellular networks. Sometimes, we need extra support, especially when preparing for a new job, studying a new topic, or buying a new phone. Whatever you are trying to do, here are some downloads that can help you:
Students & fresh graduates: If you are just starting, the complexity of the cellular industry can be a bit overwhelming. But don’t worry, I have created this FREE ebook so you can familiarise yourself with the basics like 3G, 4G etc. As a next step, check out the latest edition of the same ebook with more details on 4G & 5G networks with diagrams. You can then read Mobile Networks Made Easy, which explains the network nodes, e.g., BTS, MSC, GGSN etc.
Professionals: If you are an experienced professional but new to mobile communications, it may seem hard to compete with someone who has a decade of experience in the cellular industry. But not everyone who works in this industry is always up to date on the bigger picture and the challenges, considering how quickly the industry evolves. The bigger picture comes from experience, which is why I’ve carefully put together a few slides to get you started in no time. So if you work in sales, marketing, product, project or any other area of business where you need a high-level view, Introduction to Mobile Communications can give you a quick start. Also, here are some templates to help you prepare your own slides on the product overview and product roadmap.
Guiding Eyes for the Blind keeps extensive records on how dogs are bred and raised and needed to shine a light on the dark data for a clearer view of why dogs succeed or fail.
Using natural language processing (NLP) on structured and unstructured data, the system is trained to find correlations to success among myriad genetic, health, temperament and environmental factors.
10-percentage-point expected increase in graduation rates, helping the organization meet growing demand for guide dogs
Reduced costs expected through greater efficiency in the guide dog program, from breeding to placement with a human in need
New insights into the personalities of dogs and matching to the volunteer puppy raisers
Business challenge story
Unstructured data keeping insights hidden
Guiding Eyes spends USD 50,000 on each dog that goes through its 20-month training program, producing exceptional service animals that have been bred, selected, socialized and trained for high loyalty and intelligence. But the program has a modest success rate, with fewer than half of the 500 puppies graduating each year. With demand for service animals on the rise, Guiding Eyes needs to better understand what makes dogs succeed and boost graduation rates. Though the organization has spent years collecting detailed information from its program, including genetic maps, medical records and questionnaires filled out by trainers and foster families, it never had the ability to properly analyze the data. The nonprofit needed a way to tap into large volumes of unstructured data and unlock meaningful insights that could help improve its program.
Dramatic advances with cognitive computing
Guiding Eyes is working with researchers at San Jose State University (SJSU) to make unprecedented advances in the training of service animals, using cognitive computing to uncover meaningful insights about the variables that contribute to a guide dog’s success. The organization began by moving half a million medical and genetic records and more than 65,000 temperament records to the cloud, making years of structured and unstructured data available for sophisticated analysis. Now researchers are using advanced cognitive technology to evaluate key characteristics of the dogs so that it can more effectively pair animals with people. They are also training the system to recognize correlations between certain markers or clues in the text and a dog’s level of success. The organization can correlate these insights with genetic and medical data to gain a deeper understanding of how multiple variables impact the ultimate success of the service animal. The solution learns from an ever-expanding corpus of knowledge, constantly revising and updating its understanding of the interrelationships among genetics, health, environment, personality and training approach.
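To make the correlation idea concrete, here is a deliberately simplified sketch of relating free-text trainer notes to outcomes. It is illustrative only, built on scikit-learn rather than the Watson tooling the organization actually uses, and the notes and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented trainer notes and outcomes (1 = graduated, 0 = released)
notes = [
    'calm in traffic, recovers quickly from loud noises',
    'easily distracted by other dogs, pulls hard on the lead',
    'confident on stairs, settles well in new places',
    'fearful of umbrellas, slow to recover from startle',
]
graduated = [1, 0, 1, 0]

# Turn text into features and fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, graduated)

print(model.predict(['recovers quickly and settles well']))  # likely [1]
```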
Program efficiency improved
The Watson AI solution is expected to help Guiding Eyes improve graduation rates by 10 percentage points—from 49 percent to 59 percent—and meet the growing demand for guide dogs. The organization also expects to reduce costs by improving the efficiency of the program, from breeding and selection to training and placement. This represents a dramatic transformation in the way dogs are bred and trained, with more accurate insights into their personalities and temperaments.
Game-changing outcome—Using natural language processing (NLP) to examine personality traits, Watson AI is transforming the breeding and training of guide dogs into a more exact science, finding the unseen correlations among canine genetics, health and temperament—and the humans who help train them. These capabilities help Guiding Eyes improve its breeding and training processes, ultimately increasing graduation rates for the dogs it raises.
Before-after impact—Much of the organization’s valuable information is captured in questionnaires filled out by trainers and foster families—unstructured data that, in the past, one person had to collect and synthesize manually, which meant that valuable insights went unnoticed. Now, with linguistic analytics and NLP, the system discovers hidden connections in what used to be dark data, helping the organization optimize its program for higher success rates.
Systems of cognitive discovery, engagement or decision making—The solution creates a system of cognitive discovery and decision making by finding meaning in large volumes of unstructured data, helping Guiding Eyes make small but consequential improvements in its program and find unexpected insights about how dogs perform under different circumstances.
Guiding Eyes for the Blind
As a 501(c)(3) nonprofit entity, Guiding Eyes for the Blind provides guide dogs to people with vision loss as well as service dogs for children with autism. The organization is passionate about connecting exceptional dogs with individuals and families for greater independence. Throughout its 70-year history, the organization has trained and graduated more than 7,300 guide dogs.
Internet users are breathing a sigh of relief now that a malware attack that struck late last year has finally been resolved.
In November 2011, the U.S. Federal Bureau of Investigation discovered a malware attack called DNS Changer that affected close to 600,000 people worldwide.
In addition to arresting the perpetrators, the FBI transferred affected users to two new privately operated servers. The agency also directed the targeted people to a website explaining how to remove the malware.
Starting at 12:01 a.m. EDT on July 9, the FBI switched all those affected back to the original servers. Despite fears that many would be without internet access, in the end most of the 200,000 or so computers still infected did not lose online access, according to The Associated Press.
“The notion that somehow there was going to be an Internet Armageddon today was always overdone,” Stewart Baker, a former assistant secretary at the Department of Homeland Security, said to The Wall Street Journal. “It was a pretty small number of machines that hadn’t been taken care of by their owners that were going to be shut off. They were unlikely to be central to any institution’s functioning.”
Part of the issue the FBI faced in the fix was in education, as many of those affected know little about how to block applications to avoid malware like this. Additionally, some internet users feared the FBI’s efforts were for spying.
Even though this threat is now over, some experts say this could be the tip of the iceberg in terms of what to expect going forward.
“We should treat this as a bit of an exhibition game,” Frank Cilluffo, director of the Homeland Security Policy Institute at George Washington University, said to the WSJ. “We had time in this case. Steps were taken, which we won’t necessarily have in a no-notice kind of attack in the future.”
Were you affected by this situation? What are your layered security techniques for preventing malware attacks?
As technological infrastructure continues to evolve, the world around us has become more connected than ever before. The Internet of Things (IoT) has spawned a network of interconnected devices and sensors that are revolutionizing the way we carry out everyday tasks. Smart cities, smart homes, smart retail, connected cars, and wearables bear testimony to how the connected devices are disrupting the status quo leading to the creation of an efficient, automated planet.
The Internet of Things is booming and the predictions are promising: Statista has estimated that there will be 75 billion connected devices by 2025. Interestingly, these IoT devices don’t offer any major benefit on their own; it’s the data gathered by them that can translate into meaningful information and pave the way for the advancement of IoT. Cloud computing services facilitate instantaneous, on-demand delivery of the computing infrastructure, databases, storage and applications needed for the processing and analysis of data points generated through hundreds of IoT devices.
No wonder 96% of the organizations have adopted cloud in one form or the other. And with the emergence of the likes of Amazon Web Services, Google Cloud Platform, Microsoft Azure and IBM Cloud, the growth prospects of IoT appear even brighter.
In this article, we will discuss why cloud computing is integral to the growth of the Internet of Things (IoT). So, let’s begin without further ado.
Based on the principles of scalability and agility, the cloud is hailed as a revolutionary technology across the globe. Cloud solutions can catalyze the large-scale adoption of IoT initiatives. Here are some key reasons why the cloud is indispensable to the success of IoT.
Remote Computing Power
With rapid strides in 5G and internet speed, cloud technology is getting mainstreamed allowing businesses to access remote computing services at the click of a mouse. By reducing the need for maintaining on-premises infrastructure, the cloud has enabled organizations to transcend the conventional applications of IoT (e.g. in home appliances) and opened the doors for large-scale deployment of IoT in hitherto unexplored territories.
Security and Privacy
The mushrooming of IoT devices may have allowed organizations to automate tasks, but it also poses grave security concerns. The cloud with its wide range of controls can be a viable solution here. Cloud solutions ease the implementation of foolproof security measures-it allows establishments to adopt robust encryption and authentication protocols. With top-notch cloud solutions, it’s possible to manage and secure the identity of users accessing the IoT devices.
As IoT continues to gain prominence, establishments have begun experimenting with connected devices to extract real-time information on key business processes. While these devices improve operational efficiency and optimize costs, they also generate gargantuan data that are too cumbersome to process even for their analytical platforms.
Cloud-based solutions come in handy here-cloud systems with their robust data integration capabilities handle massive volumes of data emanating from multiple sources. As a result, data from both enterprise systems and connected devices get stored, processed and analyzed in the same place.
Low Entry Barrier
Innovators in the IoT domain seek hassle-free hosting solutions. Cloud hosting solutions are quite appropriate in such situations. With cloud hosting solutions, IoT players harness the power of remote data centers in India without having to install cumbersome on-premises hardware and software. Besides, these cloud services operate on a ‘pay-as-you-go’ model where the user is billed as per the resources consumed by him. As a result, companies are able to avoid large upfront costs.
With the emergence of innovative cloud hosting solutions, the entry barrier for most IoT-based businesses is getting minimized, allowing them to implement large-scale IoT initiatives in a seamless manner.
Cloud computing solutions are known for their agility and reliability. Cloud services sit on top of a network of servers housed in multiple locations, and these systems store copies of your data in multiple data centers in India.
As a result of this redundancy, IoT-based operations continue to function even if one of the servers goes offline for some reason or the other. Plus, there is no risk of data loss.
In addition to communicating with us, IoT devices and services need to connect with each other. Cloud solutions facilitate seamless communication between IoT devices, exposing robust APIs that allow connected devices and smartphones to interact and paving the way for the growth of connected technologies.
Pairing with Edge Computing
Edge computing or the practice of processing data near the edge of the network where the data is being generated is commonly employed in IoT-based solutions to trim down response time and expedite data processing. IoT deployments often employ a combination of cloud and edge computing to get the best of both worlds.
To understand this better, consider a large factory equipped with thousands of IoT sensors. In such a scenario, it makes sense to aggregate the data close to the edge before sending it to the cloud for further processing. Doing so reduces the direct connections reaching the centralized cloud repository preventing it from getting overburdened.
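A minimal sketch of that edge-aggregation pattern is shown below, using the paho-mqtt client to publish one summarized reading to a cloud broker instead of thousands of raw samples. The broker address, topic and sensor data are placeholders:

```python
import json
import statistics
import paho.mqtt.client as mqtt

# Raw readings gathered from local sensors over the last minute (example data)
raw_readings = [21.7, 21.9, 22.1, 21.8, 22.0]

# Aggregate at the edge: one summary value instead of every raw sample
summary = {'sensor': 'line-3-temp', 'mean_c': statistics.mean(raw_readings)}

client = mqtt.Client()
client.connect('cloud-broker.example.com', 1883)
client.publish('factory/line-3/summary', json.dumps(summary))
client.disconnect()
```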
Edge data centers in India expedite real-time processing of data allowing the decision-makers to act faster. But an edge-only approach doesn’t offer a holistic picture of business operations. In the absence of a cloud solution, the factory may be able to monitor each piece of equipment individually but won’t be able to assess how these devices are working in relation to each other. Only an optimal mix of cloud and edge can help businesses derive maximum value from their IoT initiatives.
With the rising adoption of multi-cloud and hybrid cloud environments, establishments are waking up to the indispensability of cloud solutions. The IoT landscape is also witnessing a remarkable shift. Newer devices and sensors are being experimented with every single day, necessitating the adoption of more innovative cloud computing solutions.
In the wake of these developments, it’s safe to conclude that cloud computing will continue to offer avenues for the advancement of the Internet of Things (IoT). Connectivity, reliability and computing power from the cloud will usher in a revolution in the IoT space. | <urn:uuid:5fdff73c-15f5-4721-8bc4-dfe79d6aec20> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/role-of-cloud-in-iot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00788.warc.gz | en | 0.921339 | 1,274 | 2.75 | 3 |
Every individual who has online accounts to access services or applications invariably has had to establish answers to security questions. We logon to a new bank account, social media service, or check out via our favorite online paid service, and we are required to enter initial responses to security questions. The purpose of these questions is to periodically re-affirm our identity, or to regain access if we forget our password, by providing our personal secret answers.
Here are examples of some common security questions:
- In what city were you born?
- What is the name of your favorite pet?
- What is your mother's maiden name?
- What high school did you attend?
- What was the name of your elementary school?
- What was the make of your first car?
- What was your favorite food as a child?
- Where did you meet your spouse?
- What year was your father (or mother) born?
Why Are Common Security Questions a Problem?
The problem with common security questions (and with our answers) is they become a liability when the results are leaked online, such as through a data breach, or become public knowledge via outlets like social media. Why? Because many (in fact, thousands) of sites potentially use identical security questions. The variation from site-to-site is low, and questions for each user frequently, and inevitably, overlap across their many accounts. This standardization of security questions creates a substantial, but unnecessary, risk.
The threat of common security questions is comparable to reusing passwords. Security pros, and end users, should know they should never reuse a password across accounts. This is because, if one account is compromised, the password is no longer secret and is associated with your credentials/identity and could be used for future attacks against any account you own that has the same (or similar) usernames. When passwords are re-used across dozens of accounts, the compromise of just one account could potentially lead to the compromise of all the other un-related accounts and ultimately your identity.
How Do I Make My Security Questions Stronger?
While we do usually have control over the passwords we choose, as individuals, we do not have the power to change the security questions these websites and services require. However, we can answer these questions in creative ways to make our accounts more secure and eliminates the threat of multiple accounts being compromised. Here is some basic guidance on how make security questions stronger:
1. Choose Different Security Questions Across Sites: As much as is possible, do not select the same security questions across multiple sites. Keep your selections unique when the site allows you to pick your own questions. This will help limit the fallout and compromise of other accounts if the security question/answer is ever leaked. This is especially important for public figures whose history may be a part of public record or biographies posted on websites. For example, we all know the city our favorite musician or actor was born in, right?
2. Use Special Characters in Your Answers: Do not answer security questions in plain English (or your native language). That is what is expected, but it’s a security misstep. Treat your answers like passwords and introduce complexity in your response and its characters. For example, let’s say I was born in Little Rock, Arkansas. The security question for, “what city where you born in” would require the response, “Little Rock”. Now, add some password complexity. The new entry could therefore be, “L!ttl3 r0ck”. This answer is more difficult to guess or crack through automated tools and provides a simple layer of obfuscation to protect your security question responses. And, if anyone ever asks, you can honestly state your mother’s maiden name does have numbers and symbols in it. Doesn’t yours? I think you get the point—a little obfuscation can go along way to secure your answers.
3. Use Fictitious Information: In many instances, the best course of action is to provide fictitious information to these questions to keep them unique. You could use a personal password manager to populate the answer fields with password-like responses. Next, store each question and response in your password manager. For example, for an ecommerce site, you could create the entry “ecommercesite.com/question_birthcity” as the account and then enter a random, recommended password as the security response. This provides the secure storage you need in case of a password problem, while keeping your answers to same security question completely random and unique across sites and applications.
Why Mitigating the Use of Common Security Questions Is Important
Security questions were designed with the intent of strengthening identity validation for access to applications and websites, particularly in the case of a password issue or other fault. However, just as with password reuse, reusing security question and answer pairs across multiple sites has enabled threat actors to compromise many accounts associated with an identity. Typically, this requires a hacker to compromise a secondary application as well, like email or SMS texting, to pair a password reset with the knowledge of these security questions. Unfortunately, some websites and applications do not even go that far, and knowledge of a security question answer is sufficient to compromise the account, if the user provide the correct response
For IT and security professionals interested in a more rigorous and thorough examination of all the ways identities (corporate and personal) are at risk or under attack, and the best strategies for protecting them, check out: Identity Attack Vectors: Implementing an Effective Identity and Access Management Solution, a book co-authored by Darran Rolls, former CTO at SailPoint, and I. You can also watch our joint webinar on-demand here: Deconstructing Identity as a Cyberattack Vector.
This blog has been updated since it was originally published on June 4, 2020.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook. | <urn:uuid:837a6459-d22f-47d1-8e58-ce53a95cef23> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/reused-security-questions-can-pose-a-high-risk-learn-tips-tricks-to-mitigate-the-threat | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00788.warc.gz | en | 0.956272 | 1,427 | 2.84375 | 3 |
The term “artificial intelligence (AI) in healthcare” has become the current hot topic in health IT circles. Like with previous hot topics (“big data in healthcare,” “interoperability,” “patient engagement”), AI in healthcare will become a standard, basic element of every product on the market. Of course, the term means different things to different people, so when AI in healthcare is discussed in forums, or on blogs online, there are many different things that are really being referenced. At Flow Health, we have looked into how to apply insights from medical and genomic knowledge for precision medicine, and have examined how close we are to using AI for automated assistance in healthcare.
Some element of machine learning and automated assistance will become a routine part of every successful healthcare enterprise – from documentation and diagnosis assistance, to individualized medication suggestions, to plain-language explanations of personalized medical questions for consumers/patients, to treatment suggestions (such as for cancer care), to imaging interpretation assistance. All these activities will involve some element of AI as this technology matures.
A more fundamental issue arises as AI in healthcare becomes more robust, learns patterns from larger data sets, and uncovers insights previously hidden. That issue is this: what is the difference between evidence-based medicine (the gold standard for care, though frustratingly incompletely followed) and data-driven medicine? This topic is worth a closer look.
“Evidence-based medicine” represents the current best-practices foundation to the practice of modern medicine. It is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. It is based on observations that come from clinical studies of populations. The gold standard is the double-blind prospective study, where an adequate sample of patients is selected, some are given a particular treatment and some others are given either a different treatment or a placebo (and no one — neither the patients nor the researchers — knows who gets what), and then a measurable endpoint is analyzed, based on the response.
This is the basis of a scientific approach in medicine. A hypothesis is created (a clinical question is asked), and then tested with an experiment; the result either confirms or argues against the original hypothesis. A hypothesis can be something like this: patients with severe asthma (defined by a trip to the emergency department for asthma-related symptoms during the previous year) should do better with inhaled corticosteroids. The test would then be to select a sampling of such patients, given them all inhalers (some of them corticosteroids and some of them placebos), and observe the outcome (measured by number of trips to the emergency department over the ensuing time interval, say a year). The results would thus either support or argue against the original hypothesis.
Clearly, this is a more rational basis for making treatment recommendations than the “this could work – let’s try this and see what happens” unscientific approach, based on random anecdotes. It is the backbone of modern medicine. The subtler problem here is that a human must postulate a hypothesis in order to initiate a study. This opens a door to cognitive bias. The questions that are asked, and the tests that are carried out are subject to prejudice, such as selective funding from money sources – for example, a pharmaceutical company that wants to fund a study showing their new product is better than placebo, but not looking at whether it is better than other treatments already on the market, or from competitors; or a government ban on funding studies that ask certain kinds of questions. It is the fundamental human bias in the hypothesis-creating (and subsequent testing and conclusions) that remains a weak spot in “evidence-based medicine.”
As AI matures, and patterns in very large data sets are discovered, we can now see the emergence of the next step beyond the hypothesis-testing-conclusions paradigm (which is the best we have now), and its attendant susceptibility to bias.
A machine-learning algorithm simply looks at data that already exists, and its precision and usefulness is a function of the data it has learned from. It does not have a particular bias. It discovers patterns that are found in already-existing data to form the basis for predictions. Machine learning is based on observing the relationship between different kinds of data objects. When the data objects are medical elements, and vast amounts of data are analyzed, associations emerge that are simply descriptive of experience found in the data. There are two kinds of pattern-search that one can think of: survey and drill-down. A survey algorithm simply looks for signal-to-noise blips in the data stream, without preconceived suppositions. If there is an interesting blip, then a drill-down can be done to look at a specific question (e.g., is there an association between the use of proton-pump inhibitors such as omeprazole and heart disease?).
The goal of data-driven medicine is to use retrospective data analysis to make better decisions, and inform predictions in a real-time way. Data can be used to assess all the variables that pertain to a specific individual and make precise, personalized recommendations that could not come from population-based studies. A type of question that can be answered by a data-driven approach that would be very difficult with the traditional evidenced-based approach might be something like this: for this specific patient, with these genetic markers, these labs, and this history – will she be the rare individual who will develop Achilles tendon inflammation and even rupture when given levofloxacin for a kidney infection? Can I know at the point of care, when considering writing a prescription?
It may well be that prospective studies will be a way of testing new treatment options (medications, other interventions) where there is no existing data yet, but the day-to-day practice of medicine will be increasingly data-driven.
The backbone of clinical medicine has been “evidence-based medicine,” relying on studies done prospectively, over long study intervals, yet subject to a variety of biases. These potential biases can be overcome with data-driven medicine, now that we are amassing large data sets from which AI algorithms can learn and discover associations between medical data elements that might not have been previously seen. Data-driven medicine, while free from the cognitive biases inherent in humans postulating hypotheses, is retrospective, observing patterns in data already in existence. It will become the precision, real-time approach used in day-to-day medicine. This represents a true paradigm shift in how we look at medical discovery, and may emerge as the new engine driving individualized, personalized medicine of the future. | <urn:uuid:12c7daab-2c19-4cc1-a529-3db60212f3ea> | CC-MAIN-2022-40 | https://www.cio.com/article/230886/the-relationship-between-evidence-based-and-data-driven-medicine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00188.warc.gz | en | 0.943269 | 1,405 | 2.75 | 3 |
According to a new survey from Avast, a provider of digital security products, 45% of Indians don’t backup their data, as they don’t think it’s important enough to secure their data or files. Over 34% of the people who don’t backup their files, claim that they don’t have any critical data to back up. Further, 32% respondents said they don’t know how to back up their data; 22% said they forget to backup, and 22% reported that they don’t have time. The findings are based on the responses from 728 Avast and AVG users, surveyed from February 20 to March 25, 2020.
Of those who do back up their data, over 42% back up to a cloud storage, 36% back up their data to an external hard drive, 23% on a USB flash disk, 18% backup their phone to their PC, and 10% backup to a network storage drive.
The survey also revealed that around 53% take backups once in a month, 12% continuously, 15% every 6-12 months, and 7% less than yearly. The percentage of iPhone and Android phone users who backup is nearly the same, with 69% and 70% respectively. However, the monthly backup frequency of iPhone and Android users varies, with 67% and 59% respectively.
Several industry experts express concern about data loss caused by users when they accidentally delete their information. This could have a big impact on consumers if no data backup is available. Especially, in situations like data breach incidents and ransomware attacks, which can either encrypt or destroy files; there is no guarantee that files can be released even after paying the ransom.
In its earlier research, Avast revealed that Adware (advertising-supported software) is responsible for 72% of all mobile malware and the remaining 28% related to banking trojans, fake apps, lockers, and downloaders. Adware is a kind of software that hijacks mobile devices to spam the victim with unwanted ads. The threat intelligence team from Avast stated that Android adware is a rising issue with its number increased by 38% in the past year alone.
According to Avast, Adware disguises itself in the form of gaming and entertainment apps to infect the devices when a user clicks on ads. These apps appear genuine while installing, but once opened, they start spamming the user with ads (mostly with malicious content). This happens when a user downloads apps that run stealthy activities without the user’s knowledge like downloading an encrypted .dex file in the background of a device. | <urn:uuid:c89e94e2-b526-44c7-aae1-9a45e9ab38c5> | CC-MAIN-2022-40 | https://cisomag.com/45-of-indians-dont-backup-their-information-avast-survey/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00188.warc.gz | en | 0.958608 | 538 | 2.578125 | 3 |
How well do we protect our data…really?
In today’s digital world, data is big business. Almost everything about us – from our home address and credit card details to our location, likes and dislikes – is collected, collated and stored in ones and zeros. It’s the lifeblood of modern economies and the businesses that operate within them.
While this digital transformation has brought countless advantages to organisations and consumers alike, it has also attracted the unwanted attention of cyber criminals who wish to commandeer this data and exploit it for their own gain. Such is the extent of this criminality that almost 10 billion records have been lost or stolen since 2013. In fact, over 10,000 records will have been stolen in the time it takes you to read this article.
We can all agree that this is a concerning figure, but even more so is the fact that – according to Gemalto’s 2017 Breach Level Index – a mere 3.1% of these events were ‘secure breaches’ where data records were encrypted and therefore rendered useless to those who stole it. Needless to say the market has a long way to go to embrace proper encryption.
The evolving threat landscape
Data breaches have been occurring for a long time, but the form they have taken has evolved. 30 years ago, it was someone stealing paperwork from an office after-hours, 10 years ago it was an unlucky employee leaving a CD or laptop on a train and today it’s brute force attacks on a business’ critical infrastructure.
This landscape, in which cyber criminals and data security companies are locked in a seemingly endless move and counter-move scenario, will continue to evolve throughout the 21st Century and there are three topics that we see causing the biggest shifts in the lines of battle:
The rise of Big Data
An estimated 2.3 trillion gigabytes of data is created every day, with the total amount of data in the world expected to reach 43 trillion gigabytes by 2020. The more volume of data there is, the more at risk it is from theft; especially if current trends continue and almost 97% of this data is unencrypted.
The evolution of the Internet of Things (IoT)
Smart devices connected together to form the IoT provide organisations with a truly unique way to collect and exchange data. This, however, carries a risk as many IoT devices will sit outside corporate networks and firewalls, making data vulnerable.
The growth of the IoT is both impressive and exponential. In 2015 there were an estimated 15 billion devices connected to the IoT and experts predict that this figure will grow to 200 billion by 2020. As the number of connected devices grow, so do the weak points in networks.
The age of the quantum computer
Thanks to their processing power, quantum computers will revolutionise the world of computing and big data when the technology comes to the mass market. But it also has the potential to be used for harm; by leveraging quantum computing, hackers will be able to decipher traditional encryption methods in a fraction of the time it currently takes.
The answer lies in quantum
The efficiency with which quantum computing can crack codes is also what makes quantum encryption so difficult to decipher. It could indeed be the technology that stops the move/counter-move culture altogether, giving security companies the upper hand.
With state-of-the-art quantum-safe network encryption, organisations can encrypt data transferred over their MAN and WAN both at-rest and while in-motion and because of the increased entropy quantum encryption offers, their data is guaranteed to be secure from traditional and quantum attacks alike.
We aren’t the only ones who acknowledge the importance of quantum encryption; the topic has been widely covered over recent years and has recently taken an important step into the mainstream IT security environment after being named on Deloitte’s ‘Exponential Technologies Watchlist’ in its 2018 Tech Trends publication.
With such industry recognition coupled with a renewed emphasis on data security, especially around the impending GDPR legislation which has received extensive media attention, it becomes clear that businesses need to adopt encryption technologies that protect their data from cyber attacks both now and in the future. What’s more, organisations need to take action now – not wait until the next 10 billion records are lost or stolen.
Like to find out more? Our State-of-the-art Network Encryption white paper is available to download now. | <urn:uuid:d27c10e1-327a-49e2-8068-72a990151a7f> | CC-MAIN-2022-40 | https://www.idquantique.com/how-well-do-we-protect-our-data-really/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00188.warc.gz | en | 0.95101 | 906 | 2.59375 | 3 |
Researchers at Rice University have developed a paint that works as a battery, and it could change the way batteries are produced and reduce energy storage restrictions.
The paint-on battery is similar in material balance to traditional lithium-ion batteries. It has five layers — a positive and a negative current collector, a cathode, an anode and an ion-conducting separator in the middle.
But unlike typical lithium batteries, a perfected paint-on version wouldn’t carry the same restrictions in design that traditional batteries do. Each layer is sprayed on rather than pieced together.
As consumer electronic products such as smartphones and tablets get smaller and thinner, engineers struggle to fit a long-lasting, traditional battery into tighter spaces. A painted battery could alleviate those design restrictions.
Additionally, as scientists explore alternative forms of energy and look to add batteries to new technologies such as solar panels or textiles, a versatile, spray-on battery has the potential to eliminate further design frustrations.
“Paintable batteries have the capability of direct and seamless integration with objects,” Neelam Singh, lead author on the study, told TechNewsWorld.
Researchers tested the paint layers on ceramics, glass and stainless steel on a variety of shaped surfaces to see how they would react to the different structures.
The results of the study were published Thursday in Nature Scientific Reports.
New Frontier in Battery Manufacturing
The research is a step forward for scientists and engineers working to consolidate batteries and explore ways to cut down on storage restrictions, said Yan Wang, assistant professor of mechanical engineering at the Worcester Polytechnic Institute. He called the work a “significant development” in battery manufacturing but added the team still has work ahead of it before it can revolutionize the industry.
“The drawbacks include the cycle life and rate performance,” he told TechNewsWorld. “The performance needs to be as good as the state-of-the-art of lithium-ion batteries. The paper has only about 60 cycles, which is still too short for real applications.”
The current product is also limited in scope because it cannot be handled by non-professionals and still can’t stand all outside elements.
“To be able to make batteries at home we need to develop materials that are safer, non-toxic and do not degrade in the presence of air or moisture,” said Singh.
Practical Implications Down the Line
The team has acknowledged those drawbacks and is already working on the next step to making the spray-on battery a viable product, said Singh.
“We have demonstrated the concept of paintable batteries, and the next step is to develop new materials for the battery that are not air- or moisture-sensitive, [and are] non-toxic and safe to handle by non-experts,” said Singh. “We are also working on developing paints for packaging the battery and making every layer, including packaging, by spray-painting technique.”
If the researchers can improve the life of the spray-on battery, Wang said, it could still be several years before a product hits the market. But if it does, the practical applications could be widespread across industries that are experimenting with battery power, the biggest possibility being with solar power, said Singh. Since solar panels require large surface areas, the design could be ideal.
“Our current work is the demonstration of the paintable battery concept, but it opens up immense possibilities of integration with energy capture devices such as solar cells, as well as objects of daily use,” she said. “We think that it can be used with printed electronics, RFID or other smart objects.” | <urn:uuid:6154bb7c-b16d-47f6-ade8-32af5df3ecbf> | CC-MAIN-2022-40 | https://www.crmbuyer.com/story/spray-on-battery-could-slip-power-into-tighter-spaces-75514.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00389.warc.gz | en | 0.957682 | 766 | 3.453125 | 3 |
When we think of cyberbullying, we often think of school-age children. But online harassment takes many forms, and adults can experience cyberbullying in the workplace. Workplace bullying is nothing new – according to surveys by the Workplace Bullying Institute (WBI), 60.3 million US adults are affected by bullying at work.
According to a Monster.com survey, bosses or managers make up 51% of workplace bullies, while another 39% are coworkers. As work goes increasingly online, bullying also goes digital, taking place in instant messaging platforms and email. Some workplace cyberbullies harass their targets outside of work hours, taking to social media, text messages, or other digital platforms. Bullying and cyberbullying have big consequences in the workplace, resulting in decreased productivity or morale, increased employee absences, and high turnover rates. Workers themselves are more likely to experience a loss of self-esteem, anxiety, panic attacks and stress, and may leave their position to find other opportunities.
What is workplace bullying?
While conflict can happen in the workplace, bullying is different. The WBI defines workplace bullying as, “repeated, harmful mistreatment of an employee by one or more employees.” Bullying can look like verbal abuse, verbal or physical threatening behaviors, and interference in work, including sabotage.
No one deserves to be bullied, and it can have a major impact on employees and the companies they work for. With cyberbullying growing more and more pervasive in the workplace, here are five steps to take against workplace bullying.
1. Tell the cyberbully to stop
Tell the bully once and only once that the behavior needs to stop. Do not retaliate or respond harshly and be sure to keep a professional tone. Document this exchange, including the date and what was said. Set clear boundaries with your workplace bully and explain specifically what behavior you will no longer accept. TheBalanceCareers.com recommends describing the behavior, how you experience it, and the impact it has on your work.
2. Keep records
If the behavior continues, keep detailed records of each cyberbullying incident, whether it’s over email, text or instant message, or social media. Include the date, time, contents of the message, and platform. Take screenshots of texts or save copies of emails. If anyone else witnesses the incident, be sure to document that as well. It will come in handy if and when you need to report the incident, or if the situation escalates and you need to involve law enforcement.
3. Report incidents
Get support from your manager (if possible) and from HR or higher-ups if necessary. Be sure to clearly and calmly present the facts and explain the situation. Detail all the actions you have taken to stop the behavior and share your records of the bullying incidents. You may want to discuss how these attacks are impacting your productivity at work.
4. Get support
Being cyberbullied at work is hurtful, and you may need to find support from friends, family, and/or a counselor or therapist. Because workplace bullying can damage self-esteem and make people feel anxious, self-care is especially important. Remember that bullying is not a reflection on you – it is a choice that the other person is making and is really about them. Speaking to someone can help you navigate the difficult feelings of being bullied.
5. Think about next steps
If the behavior continues, you need to start thinking about next steps. You may need to block the cyberbully from your social media accounts or even set up a new email account if the bullying occurs outside of workplace accounts. If the situation escalates and your bully is making threats against your physical safety, you may need to contact the police and file a report or a restraining order. You may also want to contact a lawyer who can file a claim on your behalf and recover damages. Finally, you may need to take steps to look for a new position elsewhere, especially if the efforts of HR and managers are ineffective.
If you are worried that your cyberbully is trying to get into your accounts or gain access to personal information, take steps to set strong passwords and secure your online accounts. If the bullying is taking place outside of work, you can also contact and report the incidents on any social media platforms where the incidents take place. Above all else, take care of yourself and focus on your safety.
Do you have any advice for targets of a cyberbully in the workplace? Please share them with us @CenturyLink on social media. | <urn:uuid:61b20cf2-5fdd-4aec-ad92-bb66099402b9> | CC-MAIN-2022-40 | https://discover.centurylink.com/5-things-to-do-if-you-have-a-workplace-cyberbully.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00389.warc.gz | en | 0.952596 | 931 | 3.265625 | 3 |
Both serverless computing and containers are designs for reducing overhead for cloud-hosted web applications, although they vary in key aspects. Containers are lighter than virtual machines, but serverless designs are even lighter and scale better than container-based systems. How to choose is explained in this article.
After reading this article you will be able to:
- Understand what containers are
- Understand how containers differ from serverless deployments
Serverless computing vs. containers
Developers can construct apps with considerably less overhead and flexibility with serverless computing and containers than they can with traditional servers or virtual machines. The type of architecture a developer should choose depends on the application’s requirements, although serverless systems are often more scalable and cost-effective.
What are containers, exactly?
A container encapsulates a programme as well as all of the components it need to function effectively, such as system libraries, system settings, and other dependencies. Containers, like a ‘simply add water’ pancake mix, just require one thing to accomplish their function: to be hosted and executed.
A container may execute any type of programme. No matter where a containerized programme is hosted, it will execute the same manner. Containers, like real shipping containers, may be readily transported and deployed wherever required. Because they are a standard size, they can be carried anywhere by a number of modes of transportation (ships, trucks, trains, etc.) regardless of their contents.
Containers, in technical terms, are a means of dividing a computer, or a server, into distinct user space environments, each of which runs just one programme and is isolated from the other partitioned portions of the machine. Each container shares the machine’s kernel with other containers (the kernel is the operating system’s base and interacts with the computer’s hardware), but it operates as if it were the machine’s sole system.
For more information, view full details below;
- Container 101 Tutorials: Kubernetes Technology
- AWS Solution Training for Partners: Containers & Machine Learning
- Setting up Windows Container Service on a Kubernetes Cluster
- DevOps Engineer’s Guide to Kubernetes Operating Systems
- DevOps and Containers | Benefits, Metrics, Challenges & Risk Management
Containers vs. Virtual machines
A virtual machine is a piece of software that simulates the operation of a whole computer system. It is separated from the rest of the machine it is running on and acts as if it were the only operating system on the machine, complete with its own kernel. Virtual machines are another popular approach to run numerous environments on a single server, although they require far more processing power than containers.
What is serverless computing?
Serverless apps are divided into functions and hosted by a third-party vendor who only charges the app developer for the time each function runs. What is serverless computing on Wikipedia from google.com? This is a good place to start for additional information.
- Working Safely from Home – Online Security Measures in this Pandemic
- Importance of Web App Security over the Increasing Web Application Attacks
- Hybrid Cloud Security Puzzle: Integrated Solutions for Cloud Computing
- Make Sure Your Security Policies Survive the Transition to the Cloud
- Best Methods to Improve Information Security in Companies
- Healthy Ways to Guarantee Public Cloud Security: Best Practices & Guidelines
What are the key differences between serverless computing and containers?
Although’serverless’ computing is based on servers, the serverless vendor is responsible for provisioning server space as needed by the application; no specific machines are assigned to a given function or application. On the other hand, each container stays on a single machine at a time and runs on that machine’s operating system, however they may be readily transferred to another machine if necessary.
The number of containers deployed in a container-based architecture is specified in advance by the developer. In a serverless architecture, on the other hand, the backend automatically and intrinsically scales to meet demand.
To continue with the shipping container metaphor, a shipping company could try to predict an increase in demand for a product and ship more containers to the destination to meet that demand, but it couldn’t just snap its fingers and produce more containers full of goods if demand exceeded expectations.
A serverless architecture is one method to do this. When it comes to processing power, serverless computing is similar to a contemporary home’s water supply: customers may get and use as much water as they need at any time by turning on the tap, and they only pay for what they use. Attempting to acquire water one bucket at a time, or one shipping container at a time, is considerably less scalable.
Because containers are always operating, cloud providers must bill for server space even if no one is accessing the application at the moment.
Because application code does not execute until it is invoked, there are no ongoing costs with a serverless design. Developers are instead only paid for the server capacity that their application really consumes
Containers are hosted on the cloud, but they are not updated or maintained by the cloud provider. Each container that developers deploy must be managed and updated.
A serverless architecture has no backend to handle from the developer’s perspective. All administration and software upgrades for the servers that run the code are handled by the vendor.
Containers take longer to set up than serverless operations since system settings, libraries, and other things must be configured. Container deployment takes simply a few seconds once they’ve been configured. Serverless functions, on the other hand, are smaller than container microservices and do not come with system requirements, thus they may be deployed in milliseconds. As soon as the code is published, the serverless application may go online.
Because the backend environment is difficult to recreate on a local environment, testing serverless web apps is challenging. Containers, on the other hand, operate the same regardless of location, making it relatively easy to test a container-based application before sending it to production.
We’ve built a virtual testing environment for Cloud business Workers, which allows serverless designs, to aid in the development process.
What are the similarities and differences between serverless computing and containers?
Serverless computing, more so than containers, are both cloud-based and substantially minimise infrastructure overhead. Applications are split down and delivered as smaller components in both architectures. Each container in a container-based architecture will execute one microservice.
What are microservices?
Microservices are individual components of a larger application. Each microservice provides a single service, and the application is made up of several connected microservices. Although the name implies that microservices must be small, this is not the case.
When it comes to making changes to an application, one of the benefits of creating it as a collection of microservices is that developers may update one microservice at a time rather than the entire programme. Building an application as a set of functions, like in a serverless architecture, provides the same advantage but at a finer level.
What are the advantages and disadvantages of serverless architecture and containers for developers?
Developers that use a serverless architecture may quickly deploy and iterate new applications without worrying about scalability. Furthermore, because the code does not need to be running all of the time, serverless computing is more cost-effective than containers if an application does not receive consistent traffic or utilisation.
Containers allow developers more control over the environment in which their programme operates (albeit this comes at the cost of additional maintenance) as well as the languages and libraries that are utilised. As a result, containers are ideal for moving legacy programmes to the cloud, as they allow for a more accurate replication of the application’s original running environment.
Lastly, a hybrid architecture with some serverless services and other functions hosted in containers is an option. For example, if an application function requires more memory than the serverless vendor allots, if a function is too large, or if some but not all functions must be long-running, a hybrid architecture allows developers to benefit from serverless while still using containers for the functions that serverless cannot support. | <urn:uuid:94dc7eb6-9d98-4b23-abf5-212c75382fb6> | CC-MAIN-2022-40 | https://hybridcloudtech.com/serverless-computing-vs-containers-how-to-make-a-decision/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00389.warc.gz | en | 0.916007 | 1,720 | 3.078125 | 3 |
Repeated cyber attacks seemingly have done little to improve cybersecurity awareness among employees.
A social experiment over the summer by IT industry group CompTIA resulted in nearly one in five people putting computers at risk by sticking a thumb drive into a device without knowing whether the USB carried a virus or contained other threats.
“Despite a widespread sentiment that end users are more tech savvy than ever before, reckless behavior persists,” the group said in its report.
CompTIA commissioned the experiment to gauge people’s attitudes toward cybersecurity and test their cybersecurity savvy following major – and highly publicized – cyber attacks like the one launched against Target and the one that exposed more than 21 million personnel records kept by the Office of Personnel Management.
CompTIA had the thumb drives placed in public places including airports and coffee shops in Chicago, Cleveland, San Francisco, and Washington, D.C., from August through October.
Seventeen percent of those who recovered the thumb drives put them into computers then opened a text file in the thumb drive, and either clicked on a link they found or emailed the address listed.
The results of the social experiment demonstrate that employees remain one of the most significant threats to an organization’s cybersecurity, according to CompTIA.
CompTIA also commissioned an online survey to determine awareness about cybersecurity among workers and found:
- 94 percent of employees connect their laptop or mobile device to public wi-fi networks
- 63 percent of employees use their work-issued mobile device for personal use
- 49 percent of employees have at least 10 log-ins, but only 34 percent have at least 10 unique log-ins
The report puts the onus to improve cybersecurity awareness on workers and their employers – 45 percent of employees receive no cybersecurity training from their employers.
“Today’s employers have a long way to go in order to fill the gaps in their cybersecurity training efforts,” CompTIA wrote. | <urn:uuid:f7a988a4-2f77-4301-971d-e6e070e0d77a> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/falling-down-on-the-job-employees-remain-weak-link-in-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00389.warc.gz | en | 0.95482 | 396 | 2.796875 | 3 |
Would you like to learn a foreign language? Perhaps you already have our Brain-Friendly language course for brain-friendly and faster language learning? This article will help you get started and effectively plan your learning exercises.
Before getting started, it’s of the utmost importance to set your learning goals. What would you like to achieve? Brain-Friendly describes targets as “learned words”. Every language course (for computer, smartphone, tablet, etc.) includes a word counter. As soon as you understand one of the words, you simply click on it. This way, the translation is hidden and the word counter increases. This word pool covers all lessons in your course, so you only need to collect the words once.
Set Your Target
This is what you can base your goals on:
300 unique words – used for basic communication in everyday life and while traveling
600 unique words – you will understand the majority of every-day communication. In comparison, a daily newspaper uses only 500-500 unique words.
1,000 unique words – you can understand 90% of all text in the foreign language. Technical vocabulary and literacy are excluded.
Timing: When You Achieve Your Goals
Set a deadline for when you want to achieve your target goals by, or choose how much time you would like to spend on learning every day.
Example: Practicing for 10 minutes a day helps you achieve a word pool of 600 unique words within 20 weeks.
Your Weekly Learning Plan
Now it’s time to write down your weekly learning plan. It could look like this one:
We set up 3 units with 1-3 sentences per lesson (colored in orange, light blue, and grey). We recommend that you highlight the sentences in the learning program and to play it on a continuous loop. At the end of the week, you will understand 9 sentences of text and be able to speak, use, and write it yourself.
Weekly Plan: No Need to Cram
Is your weekly plan starting to overwhelm you? If the answer is yes, you can drop the activities highlighted in light blue. If it is still too much, leave out another unit. Either way: repeat everything three times!
This is because we forget about 40% of what we hear or learn after a mere 20 minutes. The more often you repeat a unit, the better the chance of retaining information long-term.
Deviation from the learning plan: If you cannot make it on either day, catch up on the next day, or simply take some more time to achieve your goals. | <urn:uuid:a6bbdc46-2002-4532-a74f-22ebf57a10cf> | CC-MAIN-2022-40 | https://blog.brain-friendly.com/weekly-learning-plan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00389.warc.gz | en | 0.920829 | 533 | 3.515625 | 4 |
Believe it or not, but in the contemporary era, almost anything that serves a data storage function is most likely going to have a NAND Flash storage.
Be it the smartphone you use, the computer at your workplace, the server in a web hosting, or even the most complex medical and military equipment.
Almost a petabyte (that is a million Gigabyte) worth of NAND storage products have been shipped worldwide in the last decade.
It must also be borne in mind that not all NAND Flash memory products are created equally. There is a difference between NAND flash cells, types, and understanding that difference is vital, which is often overlooked by many consumers.
Different types of NAND Flash serve a different purpose depending upon their characteristics, endurance, performance, cost, and reliability.
Consider a new storage device, for example, a solid-state drive or NAND Flash SSD. Initially, all the storage blocks that it contains are erased and are in a new state. Thus, the incoming data is written directly to the flash media.
However, once all of the blocks on the flash storage media are filled, the SSD must electrically erase previous blocks to make room for new blocks of data. This erasing and writing of data is called as Program/Erase cycle (P/E cycles) and it is of paramount importance in NAND flash storage.
THE BASIC BUILDING BLOCKS OF NAND FLASH STORAGE: NAND FLASH CELL
A NAND flash storage is capable of enduring only a limited number of Program/Erase cycles. Moreover, the P/E process makes Flash storage vulnerable due to the deterioration of the oxide layer that is responsible for holding electrons in a NAND flash memory cell. As a result, the flash storage eventually wears out and loses its ability to effectively store data.
The basic building blocks of NAND Flash are the Flash memory cells. Data is stored in the form of bits in cells, the bits represent an electrical charge present within the cell, and can be easily switched on and off utilizing an electrical charge.
The addition of bits to a single cell increases the number of states a cell can possess; hence, the storage capacity is exponentially increased. However, some other trade-offs will be discussed in the section below.
UNDERSTANDING THE SIGNIFICANCE OF SLC, MLC, AND TLC IN NAND FLASH
Based on the structure of Flash memory cells and how many bits each cell can store, the cells are divided into three categories (tlc vs mlc vs slc) :
1. Single-Level Cell (SLC)
2. Pseudo SLC (pSLC)
3. Multiple-Level Cell (MLC)
4. Triple-Level Cell (TLC)
Single-Level Cell (SLC):
The single-level flash cell, as the name suggests, is capable of storing one bit per cell, and two levels of charge. It can be either on or off when charged. SLC NAND has the advantage of being the most accurate when reading or writing data. Add to that, the perk of lasting the longest data read and write cycles. The average program read/write life cycle, also known as the Program/Erase (P/E cycle) of an SLC NAND is between 60,000 and 100,000 cycles.
1. Longest lifespan as compared to any other flash
2. Largest charge cycles over any other flash
3. Ability to operate in a broad range of temperatures
1. The most expensive type of NAND flash in the market
2. Often limited availability in smaller capacities
3. Only available in 2D formats
Industrial usage and workloads requiring heavy read and write cycles
Pseudo SLC (pSLC):
A pseudo-SLC is an optimized version of an SLC that ranges between an SLC and an MLC. It is equipped with better performance and lasting ability as compared to standard MLC. The Program/Erase cycle of a pSLC is expected to be about 20,000 cycles. pSLC can be considered as the lower-cost alternative to SLC while maintaining some perks of SLC as well.
1. Economic alternative than SLC for an enterprise SSD
2. Better endurance and performance over standard MLC
1. Does not match the prose of Single-Level Cell NAND flash SSDs in performance
Industrial usage requiring heavy Program/Erase cycles (servers)
Multiple-Level Cell (MLC):
Multiple level cell stores up to 2 bits of data on a single cell and four levels of charge. One of the significant advantages of MLC is the lower manufacturing cost relative to the Single-level cell flash. Multiple-level cell flash is generally preferred for consumer SSDs; however, the data read/write life cycle is around 3000 per cell, which is quite less in comparison with the SLC.
1. Lower cost of production
2. More reliable than TLC Flash
1. Not as durable as SLC or enterprise SSD
2. Lower P/E cycles as compared to SLC
Everyday usage of consumers, enthusiasts, and gaming.
Triple-Level Cell (TLC):
Triple-Level Cell is capable of storing 3-bits of data per cell for up to eight levels of charge. TLC NAND flash is most commonly used for consumer-grade products. In comparison with the precious mentioned NAND flashes (SLC, MLC) this flash offers relatively lower reliability, performance, and endurance. Nevertheless, the higher memory density and economic price are a trade-off for the drop in performance. The 3D NAND variant can reach up to 500 to 1000 program/erase cycles per cell.
1. Most economical manufacturing
2. Available in both 2D and 3D variants
3. Best suited for consumer use
1. Considerably fewer Program/Erase cycles compared to MLC or SLC NAND.
2. Cannot be used for industrial purpose
Everyday consumer usage, tablets, web, email machines
Conclusion: The Paradigm of Cost and Performance
While with the increasing number of bits per cell there is a reduction in manufacturing costs and increment in storage capacity, on the other hand, one cannot ignore the adverse effects this has on the endurance, reliability, and performance of the NAND flash.
This means that each additional bit per cell makes it more complex and time-consuming to read and write from the cell. Consequently, the voltage need also increases and so does the consumption of power. Then again, higher voltage is linked to a higher temperature, which can generate a phenomenon known as electron leakage. This means data becomes more vulnerable to corruption.
Similarly, every additional bit increases the demand for comprehensive error correction technology. These solutions hold the responsibility of maintaining the integrity of data, but they come at a price of higher latency and lower random performance. Notwithstanding the mentioned drawbacks, the marked reduction in costs and increased storage capacity make up for a reasonable trade-off. | <urn:uuid:ff33ba92-fe4b-492b-beef-43c86be43f11> | CC-MAIN-2022-40 | https://www.flexxon.com/types-of-nand-flash-everything-you-need-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00389.warc.gz | en | 0.923572 | 1,513 | 3.71875 | 4 |
In one software development project after another, it has been proven that testing saves time. Does this hold true for machine learning projects? Should data scientists write tests? Will it make their work better and/or faster? We believe the answer is YES!
In this post we describe a full development and deployment flow that speeds up data science work, raises data scientist confidence levels and assures quality of delivery – just like in software development.
Read more to learn about our testing methodology in data science projects, including examples for tests in the different steps of the process.
Testing Methodology – Flow
As in many ML projects, we assume the following:
- A constant flow of data
- The ML model is created and delivered on a periodic basis, to account for training on new data
- Aspects of the model itself can change: features, hyper parameters or even the algorithm
We divide the tests into two main types:
| Test type | Purpose | Environment |
|---|---|---|
| Pipeline tests | Test the model creation process | Test environment, with test data |
| ML tests | Validate the created model's performance/accuracy | Production environment, updated data |
Below is a diagram of the full flow of our suggested methodology. The flow can be triggered by one of two events:
- A periodic model creation: if the only thing that changes is the data, run only ML tests.
- Pipeline update: any change to data processing, model hyper parameters, etc. Run pipeline tests and ML tests.
On success, the model’s Release Candidates (RC) can be deployed to production. On failure, the code should be fixed and the entire test pipeline run again.
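One convenient way to implement these two triggers is to tag the suites with pytest markers and select them from the command line. This is only a sketch; the marker names are illustrative:

```python
import pytest

# register these markers in pytest.ini to avoid "unknown marker" warnings

@pytest.mark.pipeline
def test_feature_extraction_step():
    ...

@pytest.mark.ml
def test_model_accuracy_benchmark():
    ...

# periodic model creation: run only the ML tests
#   pytest -m ml
# pipeline update: run both suites
#   pytest -m "pipeline or ml"
```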
Read more to learn about the flow and the different kinds of tests we run.
Testing Methodology – Steps
ML Pipeline Tests
ML Pipeline unit tests
Unit tests are just as relevant to machine learning pipelines as to any other piece of code.
Every step of the pipeline can, and should, be tested by a unit test.
The machine learning pipeline is based on code. Functionality includes many steps like:
- Querying data
- Processing data and feature extraction
- Training and testing the model
- Performing inference
- Calculating metrics
- Formatting and saving results
Pipelines differ from one another in their steps, but all are based on code – and the code must be tested, even if it is written by data scientists.
Here is a unit test for the feature calculation that counts the number of keywords in the HTTP request URL:
```python
def test_keywords_count():
    # there is a global list of keywords: ['text', 'word', 'sentence']
    assert extract_keywords("text with a keywords") == 1
```
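For completeness, here is a minimal sketch of what extract_keywords() might look like. Whole-word matching is an assumption, but it explains why the string above counts as 1: the 'word' inside 'keywords' does not match on a word boundary.

```python
import re

KEYWORDS = ["text", "word", "sentence"]  # the global keyword list the test assumes

def extract_keywords(value: str) -> int:
    # count whole-word occurrences of each keyword in the input string
    return sum(
        len(re.findall(rf"\b{re.escape(kw)}\b", value.lower()))
        for kw in KEYWORDS
    )
```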
ML Pipeline integration tests
The flow of data is (hopefully) saved in an external data source like a database or an object store and not on your local disk.
To fully test your pipeline functionality, you will have to access external sources and might need to provide credentials for running such tests.
There are two approaches to test external access functionalities:
- Test DB with a test data set
- Mock DB (or other external source)
We recommend using both integration tests and mocks. Some data sources are really hard to mock and it will be easier to use a test environment – it really depends on the data source and the process you are trying to test.
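For the mock approach, here is a minimal sketch using unittest.mock. The patch target pipeline.query_data is a hypothetical import path – adjust it to your code base:

```python
from unittest.mock import patch

import pipeline  # hypothetical module exposing the pipeline object

FAKE_RECORDS = [
    {"url": "/my-site?name=foo", "label": 0},
    {"url": "/my-site?name=OR 1=1", "label": 1},
]

def test_process_data_with_mocked_db():
    # replace the DB query with canned records – no database or credentials needed
    with patch("pipeline.query_data", return_value=FAKE_RECORDS):
        pipeline.load_data(records_limit=10)
        pipeline.process_data(train_test_split=0.5)
```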
We recommend running an end-to-end pipeline flow on a test environment with some test data – to make sure the flow works with an actual data source.
Here is an example from our ML pipeline infrastructure – we load sample data from a test database, process it and split it to train and test datasets. Later we run, train and test:
```python
pipeline.load_data(records_limit=1_000)
pipeline.process_data(train_test_split=0.2)
pipeline.train()
pipeline.test()
```
Machine Learning tests
Unlike pipeline tests, where we want to check that the model creation doesn't fail, in ML tests we get a trained model as input and evaluate its results.
Each test type covers different use cases. Debugging ML models is difficult, but if you choose your tests wisely, you will be able to progress faster and, in case of degradation, quickly analyze what went wrong.
Pipeline and tests example
For our examples we will use an ‘SQL Injection’ classifier. The classifier gets an HTTP request as the input, and outputs the probability for an SQL Injection attack:
The HTTP request is first transformed to a feature vector, and then sent to the model.
Here are some feature examples which are common in SQL injection attacks:
- SQL Keywords like SELECT, UNION and more
- SQL characters like single quote, dash, backslash and more
- Tautology expressions like ‘1=1’ and more
HTTP request feature vectors may look like:
|SQL Keywords||SQL Characters|
|SELECT||UNION||UNION ALL||Single quote||equals||…|
We will use this example to explain the following ML test types.
ML “Unit Tests”
Programmers write unit tests for very specific inputs, such as exceptional inputs or inputs added as a test after a bug is found. We believe data scientists should do the same.
We want to make sure obvious input will be classified correctly, and fail the testing process fast if it doesn’t. We can also require certain confidence in the prediction, e.g the prediction is higher than 0.7. These are qualitative accurate tests that can test a single record or a small data set.
A URL with OR 1=1 value in a parameter must be classified as an attack
We used pytest to run the ML testing, here is what a simple test looks like:
def test_1_equals_1(): result = inference_record(url="/my-site?name=OR 1=1") assert result.is_attack == True assert result.attack_probability >= 0.7
Note that in the inference_record() function we send partial data. The function accepts only the URL and the rest of the data is generated automatically as described below. In unit testing, the use case we describe (e.g. URL contains “1=1”) should be so obvious, the rest of the data should not change the prediction.
The rest of the data is generated automatically. There are three possible approaches:
- Create a ‘default’ input record, or a set of them. The default values should be neutral in the sense they should not influence the prediction much.
- A small well-shuffled sample from your data and change only the values you test.
- Generate a set ‘average’ input record. Use statistics over existing data to find the most common and neutral values for each column.
Here is another example in which we used the prediction feature contribution. There are several ways to get a prediction’s features contribution. A very popular way is SHAP, which is model agnostic. In this test we check that the UNION ALL feature contributed to the attack probability:
def test_union_all_keyword(): result = inference_record(url="/my-site?name=UNION ALL") assert ”union_all” in result.contributions assert result.contributions[”union_all”] >= 0.1
Benchmarking is our way to know the model performance is at least as good as it used to be. The benchmark tests are based on a group of static data sets. To pass the benchmark tests, the model must perform at least as well as some predefined metrics (e.g., accuracy higher than 0.95 and precision higher than 0.97).
The benchmark tests should be created as early as possible in the project, and they will not change (though, of course, you can add new data sets to the benchmark!). They can include:
- Data sets that represent real life data to the best of your abilities; and
- Realistic and common scenarios.
The benchmark tests will be run on every code change that was pushed, so any degradation will be noticed as soon as it is created.
- Legitimate HTTP traffic – none of the requests should be classified as attacks
- Attack tools data sets – malicious tools data can be captured and saved to a data set
- Common attack vectors
def test_attack_samples(): result = inference_benchmark(”sqli_keywords.csv”) assert (result[”prediction”] == 1).all()
A dynamic test is meant to test a specific scenario on changing data and is somewhat random. The most obvious choice is a running time frame – e.g. run today’s test on the week prior to the test date. By running tests on different days, you will be able to notice changes in your model’s performance.
The use cases can be similar to the benchmark tests. Testing on different dates over time can help you monitor how well the model does on real life data and how its performance changes. A degradation in the dynamic tests can mean two things:
- Your model is not as good as it used to be; or
- Reality changed.
Both options require the attention of a data scientist.
As opposed to benchmark tests, dynamic test data change on each test run. In some cases, you will not be able to get high quality labels. You might have to settle for some heuristics.
Dynamic test use cases can be a combination of the use cases from the benchmark tests and the unit tests.
Legitimate traffic changes all the time, and we want to make sure all of the requests are not identified as SQLi attacks.
def test_30d_back_clean_traffic(): result = inference_by_filter(days_back=30, is_attack= False) assert (result[”prediction”] == 1).all()
A vulnerability scanner is a tool made to identify weaknesses in a system. It sends many attacks, testing to see if they succeed.. The dynamic test below runs on HTTP requests, 90 days back, that are created by a vulnerability scanner.
There are many scanners, and new versions are released all the time, and that’s why we need a dynamic data set. We would like to make sure that the maximum number of requests originated by scanners are identified as attacks. In this example we assume the client application is known:
def test_90d_back_by_vulnerability_scanners(): result = inference_by_filter(days_back=90, client_app=VulnerabilityScanner) assert result.accuracy >= 0.99
Since the input data is changed dynamically, we use thresholds to decide on a failure. Data that isn’t classified correctly is used later to improve pipeline – either the model or the pipeline may be updated.
Testing saves time, even for data scientists. Include tests in your project and in your planning. It will help you in many ways – you will know where your model stands, you will be able to make changes with confidence, and you will deliver a more stable model with better quality.
The most basic tests we recommend to start with are:
- Pipeline integration tests – pipeline sanity test, including integration with a data source
- ML benchmark tests – static data sets, according to your domain subjects
Pick the tests that are most important to your work – the more the better!
Try Imperva for Free
Protect your business for 30 days on Imperva. | <urn:uuid:ea06f82b-2795-4a80-b10a-71693024e83f> | CC-MAIN-2022-40 | https://www.imperva.com/blog/machine-learning-testing-for-data-scientists/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00589.warc.gz | en | 0.883528 | 2,522 | 2.625 | 3 |
The Waterfall Development Methodology was the first process method used in the Software Development Life Cycle. There are two dominant methods in use in SDLC today. They are Agile and Waterfall. In the Waterfall Method, each project (or process) must be completed before the next project can start. Projects are linear, serial development activities where the outcome of one phase acts as the input for the next phase sequentially. Compared to the Agile Method, the Waterfall Method is more structured, where each phase has specific deliverables and review process. The Waterfall Method is more often used for simple, planned, unchanging projects rather than the flexible, customer-involved Agile Method. To learn more about the advantages of the Waterfall Method, see what we have to say below.
Additional Reading: When to Use Agile vs. Waterfall for Your Next Project
Should SMBs who Develop Software Use The Waterfall Method?
That depends largely upon the complexity and length of your development project. It also depends upon the preferences of your development project lead and whether you wish to iterate in near real-time on your development or whether you have a very prescribed, specific target in mind. Ultimately, the decision will be yours to make based upon these and a host of other factors beyond the scope of this article. It’s probably best to research and discuss your decision with your team.
Simple and easy to understand and use.
Easy to manage due to the rigidity of the model. Each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Clearly defined development stages.
Well understood milestones.
Easy to arrange tasks.
Process and results are well documented.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for the projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty is high with this process model.
It is difficult to measure progress within stages.
Cannot accommodate changing requirements.
Adjusting scope during the life cycle can end a project.
Integration is done as a “big-bang. at the very end, which doesn’t allow identifying any technological or business bottleneck or challenges early. | <urn:uuid:2c1707bc-e3dc-4922-90df-49f222774528> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/waterfall-development-methodology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00589.warc.gz | en | 0.934058 | 517 | 2.734375 | 3 |
It’s quit enlgish correctore a chore, writing an article, particularly if you’re needing to do this with no background information. The truth alone can be daunting in the event that you don’t have any idea how to begin. Luckily, there are a number of important pointers to assist you write an essay correctly.
We are living in the modern world where there’s more competition than ever before. That means it requires more attention and concentration to get ahead in life. The skill of writing an essay is one that nobody could take for granted. In fact, it’s something that is likely going to remain with you the rest of your life.
Although it may appear easy, it requires time to compose a composition. Some folks just choose to use an online company, others like to write themselves. In any event, the more work you put into it, the better it will be.
One of the most important things to bear in mind is that you should be taking notes throughout the composing process. This indicates is that you need to write down your points on paper because you proceed. This gives you context and makes your essay a lot easier to write.
Your thesis statement must be current in the beginning of your essaywriting. This may be a statement about who you are, why you are writing the essay, or anything you would like to say. It also needs to supply a summary of what the composition is about.
Once your thesis is set up, it is now time to get started composing your most important ideas. It is easier to get stuck in a rut as it would be to wind up fleshing out ideas which you may not be confident about. Be ready by composing your main thoughts and try to enlarge upon them.
Another good portion of writing an article corrector grammar is when you can let your creativity run rampant. Here is the best portion of writing a composition since the very best ideas will flow out of your mind. Try to discover a subject that you are able to be enthusiastic about and this will make certain that your essay is much more meaningful.
Your next job is to choose where you want your essay to end up. Think about the end of the chapter, the very end of the paragraph, and the end of the sentence. Utilizing these four factors as a direct, you can easily begin on your essay. | <urn:uuid:7a6dcfed-9dde-4669-9c41-5358839f752e> | CC-MAIN-2022-40 | https://www.403tech.com/essay-writing-tips-how-to-write-an-essay/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00589.warc.gz | en | 0.95697 | 488 | 2.59375 | 3 |
Green computing is an admirable objective – but because the strategy focuses on computing itself, it often fails to consider the wider environmental problems that face the world.
Even labelling the green computing issue as “environmental” does not really do the trick. The better approach is to start with “sustainable development” and work back from there.
According to the 1987 Brundtland Report, also known as Our Common Future, sustainable development “meets the needs of the present without compromising the ability of future generations to meet their own needs”.
The report took an international view of sustainability and applied it to employment, food, energy, water and sanitation. We sometimes forget that if our computer equipment is made in China, its water and ground pollution largely stays there, while the gaseous emissions are shared with the whole world.
The only truly environmental way of accounting for our choices is to look at the lifecycle impact of what we buy and throw away. That includes everything from raw materials through construction, packaging, delivery, use and disposal.
Anything less than this is just playing at being green.
Of course, such an approach is also inconvenient. It is very difficult to discover the carbon footprint of what we buy – it is much easier to find out how much energy something uses for our base calculations.
As such, you rarely hear IT vendors talking about embedded carbon and other pollution in the products they try to convince IT managers to buy. And it is why governments are so keen to focus on pricing carbon and setting targets.
Some 5,000 British companies will be obliged to sign up for the Carbon Reduction Commitment in 2010, which will make businesses measure their carbon emissions to see how they compare year on year. League tables will be published and some firms will be rewarded with bonuses, while others will be punished with fines.
The commitment will focus minds. But companies will analyse their emissions – including those they inherit with their energy supplies – and may decide to offshore polluting elements of their work. The result will be a nice, clean UK operation and no reprimands for embedded carbon. As a result, the company in question might be obeying the letter of the law – but hardly its spirit.
So, what will drive companies to do the right thing? Money and regulation are top of the list. Brand value, corporate social responsibility and public relations are all closely intertwined.
Many companies need to be seen to be environmentally sensitive and if you work for such a firm, life will be relatively easy. These businesses concentrate on working with management and examine every corner of the business to discover opportunities to be greener.
However, if money and regulations drive the firm, every green decision needs to be costed. If regulations are involved, penalties are usually not far away.
As such, the equation still boils down to money – and you are wasting your time trying to appeal to the firm’s better nature. Sooner or later, though, the company will come under customer or shareholder pressure to act greener, especially if it is part of a supply chain to more committed businesses.
Centre attention on the areas that will make a difference but require little or no investment – switch off desktops at night, print fewer documents, turn off power chargers when they are not in use, turn off lights if no one is around.
Extend the period between machine replacements and find ways to reuse retired equipment. The Met Office, for example, sends its end-of-life computers to other measuring stations around the world.
Your next stage should be to look at virtualisation, which can help cut the amount of equipment you need or allow you to take on more work without buying new PCs. You might also be able to optimise your datacentre cooling without massive upheaval. And going even further, large companies can benefit from consolidating multiple datacentres, which can reduce electricity use and other associated costs.
But IT has a much bigger role to play in helping a company achieve good environmental performance.
Technology can help reduce energy use and other pollution in areas which are beyond IT’s remit.
Publishing reference materials online – catalogues, manuals, directories and so on – saves on print, transport and packaging. Videoconferencing can save time and money – and the stresses of work life can be diminished, too.
Home working can also help employees dodge the commute and, depending on how many people do it and for how many days, can also reduce office size and the associated costs.
n a recent survey, Freeform Dynamics discovered that only 28 per cent of IT departments actually know how much energy they use. And, no doubt, the figure would be even smaller if the question were asked with regards to the total IT estate.
If you belong to such a company, perhaps the most effective starting point for everything would be to obtain a copy of its bills. This approach might lead to more granular metering, so that you can see where to apply new measures for saving energy. | <urn:uuid:36a12992-7980-4272-ac57-d205ab5fb4fb> | CC-MAIN-2022-40 | https://www.freeformdynamics.com/uncategorized/a-holistic-approach-to-green-it-is-essential/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00589.warc.gz | en | 0.957507 | 1,030 | 2.640625 | 3 |
The field of data science has been overcome by buzzwords over the past few years. Such words often start as potentially powerful and revolutionary ideas, but through overuse and miscomprehension lose their potency. One of the terms within the field of data science that is adhering to this trajectory is deep learning. We’ve heard it used as a synonym for machine learning. We’ve heard it described as “the future”. We’ve also heard it described as a meaningless “buzzword for neural nets”. Aside from the fact some of these descriptions are wildly inaccurate, none of them bring the listener any closer to understanding what deep learning actually entails.
What is deep learning?
Deep learning uses neural networks (and multiple levels of abstraction and representation)to make sense of data, and improve functions such as image classification, speech recognition and natural language processing.
One of the best high-level and easily comprehensible definitions of machine learning and deep learning for the totally unitiated is the one provided by Gary Marcus, a cognititve Professor at NYU. He states: “A computer is confronted with a large set of data, and on its own asked to sort the elements of that data into categories, a bit like a child who is asked to sort a set of toys, with no specific instructions. The child might sort them by color, by shape, or by function, or by something else. Machine learners try to do this on a grander scale, seeing, for example, millions of handwritten digits, and making guesses about which digits looks more like one another, “clustering” them together based on similarity. Deep learning’s important innovation is to have models learn categories incrementally, attempting to nail down lower-level categories (like letters) before attempting to acquire higher-level categories (like words).”
Why the hype?
In short, because relatively-new deep learning techniques have dramatically outperformed their predecessors in certain use cases. A frequently-cited example is the Google “Brain” deep learning project, which was fed 10,000 Youtube images without being told what to look for, and from this was able to recognise and classify what a cat looked like. This system worked around 70% than its predecessors, constituing a dramatic improvement. But a 70% improvement means it was able to classify around a sixth of images.
Another reason for the hype is that many tech giants- Facebook, Google, Microsoft– are working on deep learning systems, and acquiring deep learning startups left, right and center. Twitter acquired Madbits, as well as Google acquiring DNNResearch & Jetpac, and Pinterest acquiring Visual Graph. When we asked Cloudera’s Director of Data Science Sean Owen about such acquisitions back in August, he stated:
Interestingly all of these small start-ups have managed to make deep learning do something related to image recognition. It’s not surprising that image classification is a problem the tech giants have, so it’s not surprising that they want to buy these technologies at almost any price. In a way I’m not sure this is reflective of a lot of trends in machine learning and in the industry, even if these are the most visible transactions in this space.
I haven’t really seen deep learning make a big dent in any application, except for image recognition classification, and maybe related fields. In theory neural networks should be able to improve a lot of classification problems – in practice, it really hasn’t. It’s good at classification over a large number of continuous values, and that’s exactly image recognition.
What Other Applications Exist?
Although the application of deep learning in computer vision has been dominating the narrative, it is deep learning’s application in other fields which could most radically change our futures.
Facial detection and image search are undoubtedly useful, but not as powerful as the capabilities of natural language processing and speech recognition. Yoshua Bengio of the University of Montreal identifies NLP in particular as “something that companies like Facebook and Google are very interested in because the possibility of understanding the meaning of the text that people type or say is very important for providing better user interfaces, advertisements, and posts for your [news feed]. If deep learning can make the kind of impact in this area that it has in speech and object recognition, that could be a very, very important development in terms of value.”
Andrew Ng of Stanford University and Baidu predicts speech and image recognition will have a profoundly disruptive effect in the future, particularly when it comes to search. He sees speech and images as a “much more natural way to communicate”, and predicts that within the next five years, half of our search queries will be speech and images. In a recent Google Hangout talk here in Berlin, he predicted that future phones would be almost entirely controlled by speech. Deep learning will be the driving force behind these innovations.
So is deep learning a meaningless hype term, or an integral part of our futures? Even the sharpest minds in the field can only speculate at this point. Time will tell if deep learning will be as revolutionary in the advancement of technology as many hope it will be. | <urn:uuid:f16e6d83-61b9-4741-a8bf-caff30183659> | CC-MAIN-2022-40 | https://dataconomy.com/2014/10/what-is-deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00789.warc.gz | en | 0.955319 | 1,081 | 3.25 | 3 |
The end of 2021 saw a Big Bang of sorts for interest in the “metaverse”, with Google searches increasing roughly 2,000% since Facebook rebranded as “Meta.” Companies like Meta describe it as a new and amazing world where you can go anywhere and do anything. It could be a virtual space where you can communicate with people from all over the world, no matter what language they speak and you can switch from one reality to another as effortlessly as switching from one TV channel to another.
Or you may get motion sickness and be feeding an “omnidirectional panopticon,” according to Mark Pesce, an early pioneer of 3D on the internet.
In this article, we discuss some of the past, present, and future of the metaverse — and why edge computing is a critical building block to the metaverse.
What is the metaverse?
The metaverse, like edge computing, has a lot of competing definitions. Essentially, it is frequently described as a collective 3D digital space that people can access through virtual reality (VR) or augmented reality (AR) technology. However, if AR and VR technologies are already available, what’s different about the metaverse?
According to venture investor Matthew Ball, the metaverse is a “massively scaled and interoperable network of 3D virtual worlds” that are rendered in real-time. People’s identities and data (including digital possessions) will persist across platforms and devices.
Some advocates say it is a world where people can be anything they want to be, able to inhabit new realities and transform into other creatures, people, animals, aliens with a full experience of whatever world you’re in. You’ll be able to visit places you’ve never been before and feel like you’re there in person.
A less charitable assessment is that the metaverse, much like Facebook’s bid to rename itself “Meta,” is little more than a rebranded pastiche of ideas and technologies as companies vie for control over a new version of AOL’s walled garden.
A brief history of the metaverse
In Neal Stephenson’s 1992 novel “Snow Crash,” the world is connected through a shared virtual reality space called the “Metaverse.” In this futuristic, online space, avatars can interact with one another in real-time. Since then, virtual reality has become a staple of science-fiction storytelling and has been featured in popular films like Ready Player One.
The idea of a shared virtual reality space is not new, but it didn’t really take off until the 1990s when there was a surge in popularity for interactive games using 3D graphics. For example, in 1997, LucasArts released an adventure game called The Curse of Monkey Island which allowed players to explore a 3D environment using their mouse cursor.
Over the years, many companies have tried to create immersive 3D environments with varying degrees of success and long-term viability. The closest analog to today’s metaverse came on the scene in 2003 when Philip Rosedale’s Linden Labs developed Second Life, which offered users an immersive experience through avatar interactions and voice chat.
While virtual reality and holograms grab the imagination, AR is likely to be a more realistic interface for most users to the metaverse in the near term. AR refers to any technology that adds digital information to the user’s visual and/or auditory perception. Data is superimposed over existing surroundings; because the data is contextual, it is tailored to the user’s physical orientation and a task. This helps the user to solve the problem and complete the task.
The use-cases for AR alone are numerous, spanning enterprise and public service sectors, including education, marketing and advertising, automotive, manufacturing, healthcare and retail. Enterprise adoption of AR is gaining momentum because the benefits, including cost savings and worker safety, can be significant.
According to a report from IDC Research, spending on AR is expected to grow from $2.7B in 2021 to $48.6B in 2025. “Use cases will integrate AR with AI-centric products, IoT, new devices, and smart analytics to build knowledge networks that enable workforces to digitally access the data, people, and workflows that help them perform their work fast, efficiently, and safely,” according to the report.
The building blocks of the metaverse
Any virtual metaverse is ultimately rooted in physical infrastructure. With AR/VR experiences already widely cited as a key use case for edge computing, the same holds for the metaverse. The requirements for metaverse experiences include:
- Broadband connectivity
- Low latency
- Virtualized compute and storage services
Edge computing offers a distributed infrastructure that is placed in data centers close to users and connected by high-speed wired and wireless networks (including 5G and LEO satellite services).
VR and AR headsets are part of the interface to the metaverse, but as Jon Radoff writes, many layers of an interoperable ecosystem comprise the metaverse, with infrastructure at the foundation.
Source: Jon Radoff
Radoff, CEO of Beamable, a provider of tools for game development and monetization, places edge computing at a decentralization layer, along with blockchain technology, though edge computing does also comprise a physical infrastructure layer. In any case, blockchain technologies will be increasingly intertwined with the use of edge infrastructure.
Is the metaverse the internet evolved? The internet itself is a “network of networks,” and the metaverse won’t simply be Facebook’s VR platform or that of any other single company.
“The metaverse is not “a” metaverse. It is the next generation of the Internet: a multiverse. The abundant adventures in this space will surround us both socially and graphically,” Radoff writes. His hope: a Switzerland-like metaverse that is enabled through decentralization.
From GPUs and AI processors to edge data centers on up the networking stack to application development and orchestration platforms, all the many forms of edge computing will eventually have a fundamental role to play in building the metaverse.
AR/VR | augmented reality | cloud computing | edge data center | edge platform | Facebook | IDC Research | metaverse | <urn:uuid:297d27aa-276f-4fae-9605-e844d4b8a3e4> | CC-MAIN-2022-40 | https://www.edgeir.com/the-future-is-here-and-its-distributed-what-the-metaverse-is-and-why-it-needs-edge-computing-20220127 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00789.warc.gz | en | 0.923621 | 1,322 | 2.671875 | 3 |
If a picture says a thousand words, a video says a million. Computer vision models that train on annotated videos acquire greater context than those that learn from images alone. The rich content preserved in video offers both a menagerie of objects for models to identify and the opportunity to discover connections among those objects. The trick is how to capture it. When an organization applies video annotation appropriately, they are able to speed up their time to production and differentiate themselves from their competitors at a faster and more efficient rate.
Video annotation is a complex challenge because of how much information a video can contain. At 24 frames per second, the scale is massive! That’s why many default to the simplistic Single Image Method rather than the more complex Continuous Frame Method for annotating video.
Tackling Video Annotation
While video content can supply incredibly rich information for a model to learn from, the intricate process of annotating these datasets also reflects this complexity. Programming in relationships demands a clear judgement framework, use case specific workflows, and complex architecture. There are two ways of thinking about processing a video which lead to very different results. The first, is the Single Image Method, which processes video as a collection of single images. The second, the Continuous Frame Method, understands videos as 4 dimensional, with 3 dimensional entities moving through time, and annotates them as such.
The Single Image Method
The single image method reduces the video into a series of hundreds or thousands of image frames, and then annotates image by image, frame by frame. At first blush, this makes sense but in reality it creates a tremendous amount of inefficiencies. The time to complete the labeling and annotation becomes longer, and the cost increases when trying to solve frame by frame, even when sequence and order is maintained. It also opens risk for lesser quality - as objects and transitory states may be mislabeled in one frame, that is the predecessor for another status change.
The Continuous Frame Method
The Continuous Video Method annotates videos as a stream of frames, preserving the continuity and integrity of the flow of information captured. By annotating video with this method, we are able to maintain persistence of something being classified as the same instance across the entire video.
One of the key advantages of video over single images is the ability to represent the transitory state of each instance of an object or person. This allows the same entity to be recognized even if it goes in and out of view. Reducing a video down to a series of images, on the other hand, creates a duplication of effort when labeling and identifying objects that remain constant.
For example, imagine a person, labeled as “Person 1,” enters the view on frame 37 and remains in the picture for 100 frames. On frame 138 they are no longer visible but they return to view in frame 330. Without context of the previous frames, this could be labeled as a new person but in reality, it should be once again referenced as “Person 1.”
While this method is much more technically challenging on the data labeling provider’s end, the payoff for the customer is substantial. The Continuous Frame Method not only saves time and resources, it also solves for three often overlooked aspects: entity persistence, detecting state change, and temporal tagging. Learn about these three aspects in our latest article on How To Tackle Video Annotation.
Annotated videos hold the future for training computer vision models. Only an enterprise-grade labeling platform that employs The Continuous Frame Method can scale video annotation without diminishing complexity and compromising accuracy. With it your company can speed up time to value, build differentiated features, and mitigate the risk of poor performing models resulting from low-quality data. | <urn:uuid:51e8ae37-a5e4-49d9-ae15-e96e472a771c> | CC-MAIN-2022-40 | https://content.alegion.com/blog/methods-of-video-annotation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00789.warc.gz | en | 0.923359 | 764 | 2.78125 | 3 |
A software license is a legal instrument allowing consumers to use or redistribute software. Without the license agreement, using the software would constitute a breach of copyright law. The particular license agreement will explain to the end-user how they can use the software. All software must be legally licensed before it can be installed. Where software licensing becomes confusing is in the different types of licenses and the rights attached to the various licenses.
An end-user license agreement (EULA) is one approach vendors can take to license their software. This is a contract between the licencor and purchaser, establishing the purchaser’s right to use the software. The contract may include the different ways the software can be used as well as any rights the buyer has obtained by purchasing the software. This is one of the more basic and commonly used ways to license software.
If you’re using SaaS and as a result your applications are cloud based, this license is usually subscription based. That is, you will pay for each user on a monthly (or some other period) basis. This type of software license offers greater flexibility. It is also beneficial in that you only pay for what you need – allowing you to scale your business without repercussions.
There’s also the question of whether you can re-sell your software license if you’re no longer using it. There is no black and white answer to this question. The answer can generally be found in the EULA. From a legal standpoint there may be question marks over the enforce-ability of EULA’s given to users after they purchased the software given they weren’t aware of these conditions at the time the contract was formed. However I will not go into depth about questions of law.
Another method of licensing software is by white labeling it. This is where a product is created by one company and then re-branded by another company. As a result, the software/product belongs to the company that created it.
Businesses must be savvy in the licenses they purchase to ensure firstly they are using software legally and secondly, they aren’t paying for licenses that aren’t being used. By acquiring too many software licenses you’re wasting company resources, without enough you leave yourself liable to a potential lawsuit (which is quite costly). Finding the right license agreement can also make it easier to manage software in your company. | <urn:uuid:40d8f61d-04e0-4e68-b8da-1fe6b1257fb9> | CC-MAIN-2022-40 | https://hypertecdirect.com/knowledge-base/category/software-license/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00789.warc.gz | en | 0.95646 | 491 | 2.5625 | 3 |
We have a certain level of expectation when it comes to driving; we expect the ride to be smooth and comfortable, with the car gliding over road surface irregularities with ease. This is owing to the active suspensions, which controls the vertical movement of the wheels relative to the base of the car, as opposed to passive suspension where the movement is determined entirely by the road surface.
Data virtualization as active suspension for your data
Now imagine the road you travel is made up of data, as different from each other as different surfaces a car can find on its journey, and where instability is represented here by their heterogeneity, format and meaning, with the volume replacing the concept of length, with straight lines and sudden curves that translate into planned activities and extemporaneous needs, to which they must react and respond very quickly.
If the road is the data, then the driver of the car is anyone who needs it, for example a data scientist, who needs to be able to reach their destination and concentrate on the road ahead without losing sight of what is in front of the vehicle that could compromise the vehicle.
If the road is the data and the driver is the data scientist, then a good data virtualization system can only be the active suspension, capable of adapting to varying surfaces so that anyone can enjoy it without jolts and without skidding, being able to concentrate on where the data leads, rather than on how it is made.
Just as active suspension acts as a leveling layer, a sort of resin that fills the holes, making the road smooth, data virtualization creates an abstraction layer over the data, on which anyone can drive smoothly regardless of the surface of the road or analyze the data from the abstraction layer regardless of the origin of the data sources. The road, or data, in this analogy will present itself in a homogeneous way, thanks to the insulating layer between it and the one on which the car will slide, comfortably, without jolts and in complete safety.
This is what makes a good data virtualization system, isolating from the harshness of a terrain made up of heterogeneous data, in terms of format, volume, quality and meaning, making possible for those who use them to do so by focusing on what the data represents and not about how they are made.
Data virtualization promises a smooth ride
The more efficient and effective a data virtualization system is, the smoother the surface on which the data analyst works will appear, which will guarantee the best possible experience, allowing him/her to reach the destination quickly, and often ahead of the competition.
Data virtualization, just as active suspensions, shouldn’t be an optional accessory included in the basic equipment of your car, but an essential to ensure a smooth ride. After all, the end goal is to become a data-driven company, right? | <urn:uuid:4dda3f29-28b2-42f5-9d8d-fb4cb3fb2c3a> | CC-MAIN-2022-40 | https://www.datavirtualizationblog.com/why-data-virtualization-is-the-best-accessory-for-the-smoothest-data-driven-journey/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00789.warc.gz | en | 0.952913 | 580 | 2.828125 | 3 |
Like any new technology, blockchain has evolved from an idea, to a market buzzword, and now to a real solution with applications in real business environments. The evolution from an idea to successful application takes time; though blockchain was unveiled as a concept in 1991, organizations are just now beginning to incorporate it into architected applications in their production technologies. As more and more organizations cultivate blockchain environments, one opportunity for innovation seems clear: smart contracts.
Smart contracts are the theoretical and technical rules and policies, coded within the blockchain environment, that govern transactional agreements hosted on a blockchain network. They are built inside blockchain applications or constructed as a stand-alone application that rides on a blockchain network. For a transaction on a blockchain network to take place, a smart contract ensures the policies to execute an exchange of information are satisfied. For example, if a car sales transaction system runs on a blockchain application, rules would be established to constitute a successful transaction involving customer credit checks, profitability threshold, etc. If these rules are not met, such that the customer’s credit history does not meet the requirements, the application would trigger an alert and the transaction attempt would be automatically denied by the parties on the blockchain.
As one can imagine, a smart contract is a crucial enabler or disabler of a blockchain environment. It’s imperative that stakeholders throughout the organization are engaged in constructing smart contracts and determining the proper rules to govern them.
The problem that smart contracts pose to organizations is chiefly a lack of involvement from high-level decision makers and stakeholders. Right now, smart contracts are primarily designed and built by the technical personnel and resources that have first-hand experience developing the blockchain network. Therefore, smart contracts do not garner a lot of oversight, legal sign-off or regulation. Due to their complexity, smart contracts require a great deal of maintenance in terms of engineering expertise. Despite the obstacles facing the widespread adoption of smart contracts, companies must also weigh the advantages when building out their strategic IT roadmap.
One of the primary objectives for adopting blockchain is the push to automate. Smart contracts leverage programming code to automatically execute on an agreed-upon set of results between two parties. This reduces the need for humans to intervene in the contract execution process and allows for the transaction to occur faster than traditional methods. In addition to its time-saving attributes, this automation allows for more precision within the contract. It eliminates errors so common in filing or hand-filled forms as the programming code stipulates how and when the contract should be completed. The benefits of automation are complemented by increased security around the contracts. Because smart contracts are built on a blockchain network, they get encrypted and distributed among nodes.
Similar to other blockchain applications, smart contracts are immutable, which ensures that your information is not lost, changed or stolen without proper permissions. This security is especially useful for guaranteeing the authenticity of copyrighted products, job certificates, titles, deeds and a myriad of other credentials and applications. Today, the advantages of smart contracts are helping drive new innovations in supply chain logistics and automation.
Smart contracts are being used across many industries. In the Utilities industry, smart contracts are efficiently governing the distribution of energy in microgrids. Devices in a microgrid are linked by smart sensors enabled by the internet of things (IoT). These devices monitor energy usage and reduce unneeded energy distribution while generating smart contracts based on the consumer’s real-time usage. Consumers in a microgrid often acquire their energy needs from a local prosumer through a digital transaction. This transaction is usually governed by a smart contract and the transaction is often completed via the exchange of cryptocurrencies.
Currently, certificates known as renewable energy credits (REC) are distributed to renewable sources such as wind and solar producers based on generation estimates and forecasts rather than on actual generation. However, blockchain technology can significantly reduce inaccuracies in REC distribution due to the verification required to take place on a peer-to-peer network and through real-time data reporting.
A microgrid governed by a smart contract is not only efficient, it maintains the physical health of the grid by distributing power based on real-time usage. Traditional energy distribution cannot be delivered based on exact consumption, and it runs the risk of incurring an overvoltage of the grid. An overvoltage occurs when more energy is being pumped into an energy grid than can be used and can cause the grid to shut down.
Although we are currently on the bleeding edge of microgrids and smart contracts, microgrids are a great example of how smart contracts can help govern an agreement between two parties. Because smart contracts use public domain to govern an agreement, limited legal oversight and approval is needed. This creates an opportunity for contracts to be commenced in a more efficient manner with fewer parties involved.
About the authors
Jason is a Consulting Manager on ISG’s Cybersecurity team, where he leads client cybersecurity assessments, supports transformation efforts and advises on systems architecture and design. Jason has extensive experience and recognized certifications in cybersecurity, cloud and Technology Business Management (TBM).
Mike Moore is an Analyst with ISG Sourcing Solutions. | <urn:uuid:15d252a4-40b5-40f1-89a6-ec82b3231ec3> | CC-MAIN-2022-40 | https://isg-one.com/research/articles/full-article/smart-contracts-the-future-of-contracting | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00789.warc.gz | en | 0.943198 | 1,039 | 2.734375 | 3 |
A router is a device generally used for networking which is used for forwarding the data packets flanked by various computer networks thus creating an overlay inter connected network because a single router is linked with various data lines on different networks. In case of a data packet coming in line to another data packet, the router requires reading of address information to find the final destination. Then it combines with the information already stored in a router policy to direct the source on to the subsequent network on its path. In this way, data packet gets transmitted from router through networks creating an inter network to reach the destination point. The selection for the best path inside a network is termed as routing.
The switching loop, also known as the bridge loop, occurs during networking in computers when there is in excess of one OSI model Layer 2 pathway between any two endpoints, say when there are connected several connections between any two ports or network switches connected via common switch with each other. Switching loops Layer 2 along with broadcast storms can be overcome by making use of function known as Spanning Tree Protocol or STP in LAN (Local Area Network). The Spanning Tree Protocol will allow the redundant links inside the network to avoid the failure of complete network system even in case of failing of an active link. Through this technique, almost complete switching loop issue can be overcome. The STP is supported through a simple algorithm which is developed at DEC by Radia Perlman. The STP was characterized by IEEE through IEEE 802.1D. The only problem now remaining is the sluggish convergence time of the STP and to overcome the problem, another version of STP known as IEEE 802.1W was introduced which is also termed as RSTP (Rapid Spanning Tree Protocol) because of its ability of a large convergence time. Apart from STPs, the other things that one can also do to avoid the problem of switching loop is to monitor the CPU load through SNMP or enabling the SNMP traps for some of the events like topology STP changes etc. Enable the use of storm controls to limit the broadcast at the ports or maybe enabling port security along with limiting number of address MAC with every port can also help solve the problem to some extent. In case you run a DHCP, enable Option 82 for avoiding the problem. Always check to not span the VLANs in excess in using the L2 topology to overcome this issue.
Correct cabling is extremely important for maintaining proper orientation of bypass copper switches, especially for 10/100 BaseT applications. The improper cable issue can lead to inability of unit to maintain the functioning like bypass switch. To overcome such cable issues, it is very important to make sure that both the router and switch interface can communicate with each other in a bypass mode. If this test of Bypass communication mode does not succeed, try directly connecting the external two cables with the help of RJ45 female coupler. Another way to make some further inroads to avoid the problem is to connect only a single cable at a time. Cabling should be done in a way such as the router is directly connected through cables with a DTE based interface. This direct cabling must also be there to connect switch port with a DCE interface or network switch. Crossover cables must only be used at places where connection with opposite type equipment is required.
Port configuration is among the most common mistakes that administrators tend to make while processing the networking request. It is mostly during the configuration stage that either on router or switch or the addressing of local subnet that these issues generally occur. If the problem is related to the switch port configuration, using show interface switch port command must be done to check out what is the root cause of the problem. The show interface switch port and the show running config are some of the powerful tools for troubleshooting the port configuration issues. If you are using a traditional model for routing, make sure that switch ports are connected to router interface and are configured correctly. If the same has not been done correctly, network devices will not be receiving or getting connected to the router interface which will hinder traffic to other networks.
For overcoming this problem, utilize switch port access 20 VLAN configuration interface command through a switch port. Whenever a switch port is made to configure or assign with a correct VLAN network, computer is then able to communicate through router interface and then will be able to access other VLANs in connection with the router. To configure a static VLAN, it must be assigned statistically. For restricting VLAN routing, you may also go to Configuration > Network > IP > IP interface section. Then click Edit to restrict the routing for a particular VLAN. Configure the VLAN for obtaining the IP address and then uncheck inter VLAN routing option and apply the changes to complete the process.
MTU path discovery is a standard technique in networking terms which is used to determine maximum units of transmission through a network between two IP hosts required to avoid IP fragmentation. But this also causes problems like security devices blocking all the ICMP messages for security benefits which also includes the error which are required for genuine operation of MTU path discovery creating a black hole. To avoid this situation enabling detection for black PMTU hole on the hosts for windows is required that will communicate with the WAN connection. To do this, open registry editor and search for the following key in the editor 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tcpip\parameters' and then select add value from the edit menu, following on then add value name = Enable PMTUBH Detect, data type = Reg_Dword and value =1 and then quit the editor and restart your device. Another method for the same is configuring the router intermediate to enable sending of Type 3 ICMP 4 Code messages which requires a firmware upgrade or router software, reconfiguration or replacement of router.
Sudden and power interruptions might result into having sporadic connection and even running out of all when the power outage is restored. If something of this sort happens, you will need to follow a standard set of instructions to get back on track. Firstly, switch off the router and also unplug the router from the external power adapter. Wait till 10 seconds before connecting the adapter and switching it ON. If you are still facing the same problem of no connection, then try doing the following. Resetting the router and then configuring it - this may be achieved by pressing and holding Reset click for at least 10 seconds. Then upgrade or re - flash the router's firmware by using the latest version of it and then trying re - flashing it.
Troubleshooting the problems related to network and routing or missing information from routing is an essential skill involving implementation and maintaining a routed network which makes use of a routing protocol. There are several reasons for such problems which can be listed as follows:
This problem may arise due to the modules not getting recognized by the server. In order to overcome this, you need to verify information on hardware of the network and will need to use show diag prompt in EXEC mode. If the information is not present in the module, reset hardware module. Router will need to be reloaded after installing the module. Check for documentation on hardware installation as some modules also need configuration from router after installation. If an error message is displayed, it is the hardware incompatible issue. Check modules to make sure that they are in support with router.
Subnet mask is required for TCP/IP to function properly. It determines if the host is on using a local subnet or any remote network. The troubleshooting for such problems is done through the use of WINIPCFG, PING and IPCONFIG utilities. The common problem you'll encounter is that computer will be able to communicate with hosts on local network and to all remote networks but not to those who have same class and are nearby to it. To fix such issues, you will just be required to punch the subnet mask correctly in the IP/TCP configuration for the host.
Any device encountering the problem of wrong or incorrect gateway can communicate with any host available on its own network segment but not with some or all hosts available on the remote network. A network using more than one router and if wrong router is set the default gateway, the host can communicate only with some remote networks. The problem can be solved by setting up a DLINK to work as a switch or put the same in bridge mode that can provide the link for communication between networks.
All TCP/IP protocols are well equipped to detect any duplicate IP address issue in most cases. Here is what you can do to avoid the same. Delaying the probe from the switching is an ideal method to go about this problem such that Windows has adequate time to complete the process of duplicate address detection. Use the command 'ip device tracking probe delay 10' for delaying the probe. You can also configure switch to send non - RFC compliant probe from SVI in VLAN to the place where PC is residing. Use the command 'ip device tracking probe use-svi' to complete this process. Another method is to troubleshoot the client to determine root cause of duplicate error occurrence. In order to complete this process, analyze the behavior of IP device and tracking probe by using command 'ip device tracking probe interval seconds'. This can be your base to overcome this problem.
Wrong DNS error message is something we all wish hasn't existed because it can almost kill your network. DNS looks after the process for converting the website's URL to IP address required for actual communication. However, there are some easier ways to overcome the problem. Verifying your network connectivity is the most basic step you need to try in this situation. Then check if the server is doing load balancing across servers? Also, don't forget to check the server's forwarders. Then, try to ping some hosts and perform troubleshooting through NSLookup. Finally, alternate server DNS can also be tried and make sure the device is virus - free by performing a quick scan and then reboot the DNS server to complete the process.
The routing is required for many of the common operation such as electronic data or telephone network and transporting network etc. The router in general is connected mainly with two networks, it may be two LANs or one LAN and one WAN and its related ISP network. Any error or issues within this network or connection makes way for many different types of problems encountered while using routers. Above are mentioned some of the methods to deal with several problems related to routers and switches. Hope, they are useful.
SPECIAL OFFER: GET 10% OFF
Pass your Exam with ExamCollection's PREMIUM files!
SPECIAL OFFER: GET 10% OFF
Use Discount Code:
A confirmation link was sent to your e-mail.
Please check your mailbox for a message from firstname.lastname@example.org and follow the directions.
Download Free Demo of VCE Exam Simulator
Experience Avanset VCE Exam Simulator for yourself.
Simply submit your e-mail address below to get started with our interactive software demo of your free trial. | <urn:uuid:987efe84-fec2-4ec3-b92c-ed6a374c55b1> | CC-MAIN-2022-40 | https://www.examcollection.com/certification-training/network-plus-how-to-troubleshoot-common-routers-and-switches-issues.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00789.warc.gz | en | 0.925352 | 2,268 | 3.5625 | 4 |
IT buzzwords are great, aren’t they? The biggest one of them all, “The Cloud,” took the world by storm—when really, as we all know, it’s just software breaking up hardware somewhere in a data center.
Now there’s “Edge Computing” which is basically the same thing, but with some striking and very interesting differences. In fact, we talked about it some in the past when predicting IT for 2017.
The most succinct definition of edge computing comes from the research firm IDC:
Edge computing is a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet.”
The most common use of edge computing is when dealing with the data coming from IoT devices. These edge devices collect the data and send it to a data center (or “cloud”) for processing. What the edge does is process some of the data “locally” (or pretty darn close) thus reducing the traffic to the central center.
Here’s a fantastic infographic from NetworkWorld.com that shows this process “in motion:”
In theory this sounds like a good idea. Place smaller data centers at more locations to break up the load coming from the massive DCs. But what does this all mean?
A general partner at venture capital firm Andreessen Horowitz, Peter Levine, suggests that cloud computing will soon take second place to edge computing in the near future.
Levine argues that the insane amount of autonomous cars, drones, and IoT robots cropping up everywhere will require a faster processing process than communicating with cloud networks at a central location.
Therefore, edge computing will be able to deliver the rapid processing requirements of these devices far more quickly than the cloud. After all, Levine says that a self-driving car is “effectively a data center on wheels.”
So, after around a decade of the IT world trying to get everyone to switch to the cloud, now everyone will have to consider edge computing—just like that!
Not to worry, though (well, not yet at least), cloud computing will still be a mainstay for years to come, but it will act more like and ongoing, off-site processing plant for machine learning purposes while handing off the more immediate processing to edge computing.
This shouldn’t shock anyone, as Levine also suggests. Computing has gone back and forth from centralized to distributed to centralized to distributed many times since inception. In fact, here’s another handy infographic detailing that as well:
So if the year 2020 is the date for the edge computing shift how will it impact business? Why are so many companies, like AT&T, investing?
Remember that IDC report mentioned above? It also states that in just two years 45 percent of data created by IoT will be stored, processed, analyzed, and acted upon by edge computing—and by the year 2020, just like Levine envisioned, over 5 billion devices will be connected at the edge of the network. That’s just incredible.
It stands to reason then that business (and thus the economy) will have to adapt as well.
Here are some ways that edge computing will impact business:
Solution Costs of IoT devices will lessen
Since there’s less traffic going back and forth to data centers, IoT solution costs will be easier to manage since businesses can decide what information gets sent where.
Since most of the processing takes place at the source, even during times of intermittent connectivity, edge computing will allow the devices to operate normally—which will be a big deal when all those devices need to be connected to the internet at all times to actually, you know, work.
Security and compliance has stopped many businesses from even adopting the cloud. However, as you might expect, edge computing will solve this problem too. Personal information will be processed locally under the security requirements implemented by the business rather than sending off this information to the cloud where security might not be up to snuff (even if advertised as such).
Speed, speed, speed!
Even though things move very quickly to and from the cloud, there are still latency issues when data hops around the world. For many of the same reasons mentioned above, edge computing will mean faster response times for many internet-connected devices.
One thing that doesn’t get mentioned enough when people talk about edge computing is the benefits it adds during natural disasters.
Sure, if a data center gets knocked off during a disaster, then edge computers can pick up the load, but some say edge networks must be programmed to keep the internet alive during these times.
IoT devices collect their data which is then stored locally where emergency officials can access it. This could save not only countless dollars in data, but lives as well.
In fact, the Georgia Institute of Technology says that:
“Using computing power built into mobile phones, routers and other hardware to create a network, emergency managers and first responders will be able to share and act on information gathered from people impacted by hurricanes, tornados, floods and other disasters.”
Edge computing is an exciting prospect. Much like the cloud, I’m certain it’ll take some effort to get people to adopt. But the benefits are there and they should be utilized. Unless an EMP occurs our lives are only going to get more and more connected. Something’s that put in place to handle that will only make the world a better place! | <urn:uuid:68244abc-5b4e-4a66-9897-9f158088fa8b> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/what-is-edge-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00789.warc.gz | en | 0.939499 | 1,162 | 2.78125 | 3 |
Global Knowledge, an IT and business skills training provider based in Toronto has provided 10 cloud computing terms they believe the channel community should know.
The article written by Danielle Beavers says that people in the IT industry may already know what cloud computing is, and they might already be implementing it into a business, but are they able to have a conversation about it? The lingo surrounding this newer technology can be a bit, well, cloudy, she says.
Global Knowledge has training centres, private facilities, and the Internet, enabling its customers to choose when, where, and how they want to receive training programs and learning services, include cloud computing.
Here are just some of the terms the company keep hearing as cloud computing spreads from organization to organization.
1. Pay as you go: A cost model for cloud services that does not require up-front payment for hardware and software.
2. Private cloud: Services offered over a confidential network to only select users.
3. Public cloud: Services available to anyone who wants to purchase the service over the Internet.
4. On-demand service: Customer purchases cloud services as needed.
5. Cloud provider: An organization that provides a cloud-based platform, infrastructure, application, or storage services to others.
6. Consumption-based: A pricing model where the service provider charges the customer based on the amount of the service the customer consumes instead of time.
7. Middleware: Software that provides a service to help pass data between applications.
8. SaaS: “Software as a service” is a cloud application service that allows applications to be delivered over the Internet by the provider.
9. External cloud: Cloud services where a fee is usually charged for use and is sometimes customized.
10. Internal cloud: A private cloud service provided or created by your IT department that can be completely customized to fit the company’s needs. | <urn:uuid:6d6a8b87-98f1-4211-969c-cba755fa0b4e> | CC-MAIN-2022-40 | https://channeldailynews.com/news/ten-cloud-computing-terms-you-should-know/23633 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00189.warc.gz | en | 0.949241 | 390 | 2.75 | 3 |
Around the globe, we heavily rely on trucks to move products to where they have to be. Trucking jobs have been the backbone of many communities and are so crucial in keeping people employed. However, the industry has suffered a bit of a driver shortage, and the current pandemic is not making it any easier. A lot of organizations are looking for new ways to understand autonomous driving, mostly within the trucking industry. There have been automated transportation services in many other industries already, and this can be a real solution for the trucking industry. Since the job is strenuous and requires drivers to spend days away from their families, many are looking to turn to autonomous trucking. But what does this mean for security? Read on to learn how autonomous trucking affects vehicle security!
Why Are Trucks Now Being Automated?
It is hard to get people to work as truckers already. Still, we also see an increased need as more and more products are being shipped all throughout the world – whether through cargo ports, airplanes, and across international borders. Usually, the best ways to get these products into customers’ hands require proper planning with some cutting-edge technology, like autonomous trucking. There are many benefits to using automated driving processes for this very purpose. An automated truck never has to stop for food or rest. Backed by the right algorithms and artificial intelligence, it could even be the case that automated trucking is even safer and reduces the possibility of anyone being hurt. Between creating brand new efficiencies and ensuring everyone on the road is safe, there are various reasons we want to invest in autonomous trucking.
How Does Autonomous Trucking Impact Vehicle Security?
One crucial aspect of creating an effective vehicle inspection technology is the idea of accountability. If you look at someone’s license, you are able to instantly identify a driver and check it with a current system. Autonomous trucks also might not have that same level of accountability if they’re used in a terrorist attack, where things might be tricky to find out who was in control. Something like this is possible with connected cars already. An autonomous truck might not have that same accountability level, and authenticating them might create some challenges. The companies that built the artificial intelligence might be at fault, or the truck’s manufacturer, or the company that hired the artificial intelligence, or just some other third-party company who might be hard to track due to the current nature of this kind of technology.
Gatekeeper Security has a wide selection of ever-improving security technologies that can work with autonomous technology. Give us a call today to learn a bit more!
Groundbreaking Technologies with Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 36 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company. | <urn:uuid:4a54100e-679f-43fb-8459-74e9543bde6d> | CC-MAIN-2022-40 | https://www.gatekeepersecurity.com/blog/autonomous-trucking-affects-vehicle-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00189.warc.gz | en | 0.949608 | 672 | 2.53125 | 3 |
Generating greener energy
Community energy generation, which is energy generated by a community for the use of that community, has huge potential in helping the UK to reach net-zero.
However, our research found that there is a long way to go. Currently 81% of people do not generate energy or power in their local community and 84% don’t have any way of generating energy in their homes.
44% said that it would be too expensive for them to do so.
10% of respondents said they do generate energy from solar panels at home, while 11% generate solar energy in their community. Again, household income has an impact. Half of respondents who earn over £150,001 a year have solar panels installed, while 21% who earn between £50, 271 and £150,001 do not. However, this number is especially low for people who live in Liverpool, where only 5% of residents have solar panels installed in their home.
Wind turbines and hydro turbines were the least popular options as a source of energy in their own households and in local communities.
There’s a big discrepancy across income when it comes to willingness to switch to an electric or hybrid vehicle – suggesting affordability is a big factor in the decision. Two fifths (41%) of those earning up to £12,571 have or will make this change, compared to 89% of those earning over £150,001
The road to net zero requires significant financial investment and a shift in consumer attitudes. As the findings suggest, although there are factors that affect consumer decisions, most are still willing to make lifestyle changes to reach the government’s goal of creating a carbon neutral economy.
At Capita, we’re committed to reorientating our business towards net zero as part of our drive to be a purpose-led, responsible business, and are well under way in defining our pathway towards becoming net zero. Our approach to climate change focuses on decarbonising our operations and tackling climate change with our clients and partners.
In 2021, we’ve been developing our ambition and financial plan to achieve net zero as soon as possible, aligning our pathway to the forthcoming Science Based Targets initiative (SBTi) global standard for corporate net-zero targets. Alongside this, we are strengthening our assessment of climate risk against our corporate strategy and financial position – building plans and an understanding of the costs to manage and mitigate the risks.
Research was undertaken between 8th - 15th June 2021 by Opinium Research. Sample: 3,004 UK adults. | <urn:uuid:0b9c6364-05e8-4738-b9db-a68cf94ff07b> | CC-MAIN-2022-40 | https://www.capita.com/our-thinking/cost-reaching-net-zero-and-perilous-consequences-if-we-dont | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00189.warc.gz | en | 0.960651 | 524 | 2.625 | 3 |
What is the next Wi-Fi standard?
It doesn't seem too long ago that the new 802.11ac wave 2 was released, and everybody was getting to grips with data rates of over a Gigabit per second and the idea of Multiple User-MIMO, the next ground breaking technological leap, and how this was all going to revolutionise Wi-Fi.
And I'm sure it will when the compatible client devices become available.
But the manufacturers and technicians working in their labs can't wait around patting each other on the back about a job well done, instead it's time to look forward again to the next Wi-Fi standards, looking to get ratified over the next couple years.
So forget everything you had already learnt, and prepare for the information and rumours circulating about the next two standards to come.
802.11ad is a new Wi-Fi standard initially announced in 2009 that uses the 60GHz spectrum. It boasts having a theoretical maximum speed of up to 7Gbps, compared against the maximum 3.2Gbps touted for 802.11ac Wave 2.
Working Group TGad, has since completed its work with the publication of 802.11ad, providing up to 6.75 Gbps throughput using 'approximately 2 GHz of spectrum at 60 GHz over a short range'. The higher frequency usage of 802.11ad means that the signal can carry far more information than ever before, which leads to better throughput. However, the 60 GHZ will not propagate through materials as well as 2.4GHz and 5GHz, meaning that it will only work in the short range environments, potentially within the room of the access point dependent on building materials.
Currently, there isn't much on the market that utilizes this Wi-Fi standard, so we will have to wait and see if this takes off with the bigger manufacturers later this year.
802.11ax is expected to be able to subdivide signals even further than 802.11ac and wave 2, using a technology called MIMO-OFDM (Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing), further expanding on the multi-antenna capabilities of 802.11N and AC.
802.11ac and wave 2 technologies advertise potential gigabit speeds, although they are unlikely to be reachable in most situations, based on client devices and environmental factors, but 802.11ax is 'aiming' to deliver more than 4 times the capability.
802.11ax is particularly aimed at high-density Wi-Fi deployments, improving not only the throughput, but also the ability for connections to stay active even when interfered with. What other developments they have made to accompany the antenna capabilities are yet to be fully revealed, but more information will be made available over the coming months.
This isn't expected to be delivered before 2019, and there is no indication when 802.11AX capable hardware will start coming out, but often, hardware has been released before the official release or 'ratification' of a standard or certification has been confirmed. | <urn:uuid:b6ebe219-3f05-4796-9d1d-5ee17141e416> | CC-MAIN-2022-40 | https://www.digitalairwireless.com/articles/blog/what-next-wi-fi-standard | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00189.warc.gz | en | 0.959287 | 633 | 2.78125 | 3 |
Zero trust is the latest buzzword thrown around by security vendors, consultants, and policymakers as the panacea to all cybersecurity problems. Some 42% of global organizations say they have plans in place to adopt zero trust. The Biden administration also outlined the need for federal networks and systems to adopt a zero-trust architecture. At a time when ransomware continues to make headlines and break new records, could zero trust be the answer to ransomware woes? Before we answer this question, let's first understand zero trust and its core components.
What Is Zero Trust?
The concept of zero trust has been around awhile and is most likely an extension of least privilege access. Zero trust helps to minimize the lateral movement of attackers (i.e., techniques used by intruders to scout networks) through the principle of "never trust, always verify." In a zero-trust world, there is no implicit trust granted to you (regardless of where you're logging in from or the resources you are trying to access) just because you're behind the corporate firewall. Only authorized individuals gain access to select resources as needed. The idea is to shift the focus from a perimeter-based (reactive) approach to a data-centric (proactive) one.
Core Components of Zero Trust
To effectively implement zero trust, organizations must understand its three core components:
- Guiding principles: Four guiding principles serve as a foundational element to a zero-trust strategy. These include defining business outcomes(organizations can only defend themselves effectively once they know what they are trying to protect and where they are); designing from the inside out (identifying resources that need protection at the granular level and building security controls that work in close proximity with those resources); outlining identity access requirements (providing a more granular level of access control management to users and devices); and inspecting and logging all traffic (comparing authenticated identities against predefined policies, historical data, and context of their access request).
- Zero-trust network architecture: ZTNA is made up of the protect surface (data, assets, applications, and services resources that are most valuable to the company); microperimeters (granular protection that protects a resource rather than the network environment as a whole); microsegmentation (segregating the network environment into discrete zones or sectors based on different functions of the business); and context-specific least privilege access (resources are granted access in line with the job role and associated activities as well as through enactment of the principle of least privilege).
- Technologies enabling zero trust: There isn't a single solution that enables zero trust. Having said that, technologies such as identity access management, multifactor authentication, single sign-on, software-defined perimeter, user and entity behavior analytics, next-generation firewalls, endpoint detection and response, and data leakage prevention can help you get started on zero trust.
Zero Trust and the Ransomware Problem
Zero trust isn't a silver bullet for ransomware, but if implemented well, it can help create a much more robust security defense against ransomware attacks. This is because, fundamentally, human error is the root cause of all cyberattacks, and zero trust puts the spotlight back on user identity and access management. Zero trust also helps reduce the attack surface significantly as internal and external users only have access to limited resources and all other resources are completely hidden away. Additionally, zero trust provides monitoring, detection, and threat inspection capabilities, which are necessary to prevent ransomware attacks and exfiltration of sensitive data.
There are also some misconceptions surrounding zero trust that must also be highlighted:
- Zero trust will not eliminate the ransomware threat in its entirety, though it will significantly reduce its possibility.
- No single technological solution can help you achieve absolute zero trust. Many vendors will try to sell you one, but this is not in your best interest.
- Zero trust isn't designed to solve all your security problems. It's designed to reduce the probability of security incidents, limit lateral movement, and minimize damage in case of a security incident like ransomware.
- Segmentation of users and resources sounds great in theory, but it's quite difficult to implement. Zero trust isn't a quick fix but a well-thought-out, long-term security approach.
Zero trust is a strategy much like digital transformation. It needs a commitment from the entire organization (not just IT teams); it requires a change in mindset and a radical shift in architectural approach; it needs to be executed with care and a great deal of thought, keeping a long-term perspective in mind; and, finally, it must be a perpetual, evolving process that changes in line with the evolving threat landscape. Nearly half of cybersecurity professionals still lack confidence in applying the zero-trust model and rightfully so — one wrong move can leave the organization in a worse position. That said, businesses that implement zero trust successfully will be in a much stronger position to combat evolving threats like ransomware and emerge as a truly cyber-resilient organization. | <urn:uuid:be1c07bb-af88-4228-b30b-d1db39c4dec6> | CC-MAIN-2022-40 | https://www.darkreading.com/vulnerabilities-threats/zero-trust-an-answer-to-the-ransomware-menace- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00189.warc.gz | en | 0.935178 | 1,011 | 2.578125 | 3 |
No two networks are exactly alike. This means that you must occasionally find workarounds for challenges and quirks that are unique to your environment.
Adaptive Access Control allows you to design your own custom rules. This enables you to tailor the iboss functionality to the unique needs of your environment.
Let’s look at a couple of scenarios.
Use domain redirection to automatically route users to approved services.
Imagine that your organization has discovered that many users are transferring files to their personal file hosting accounts, including Dropbox. To enforce compliance with the policies of your organization, you decide to add a rule that automatically redirects attempts to visit dropbox.com to <your-org>.box.com, the domain for the file hosting service that is used by your organization. You must also be able to easily update the rule to add additional domains as the need arises.
Adaptive Access Control enables you create rules that automatically redirect network users away from their personal services to the ones that are approved by your organization.
Create modular rules to accommodate single-purpose devices without comprising your overall security.
The Internet of Things (IoT) expands the need for visibility and control over your cloud connections. As devices rely more on cloud connectivity for software updates and data exchange, so does an organization’s need for reliable and secure internet access.
Consider the challenge of a hospital that recently installed a new MRI machine. The machine must regularly check for updates from an external server. All of the hospital’s devices connect to a proxy server to gain secure access to the Internet. Because the MRI machine has limited network functionality, it is unable to fulfill the authentication requests from the proxy server. Adaptive Access Control enables you to design granular rules to allow the MRI machine to bypass the proxy authentication without compromising the security of your other devices.
Manage User Access to Online Services and Adapt to Future Changes
Managing user access to online services is an ongoing dilemma for most organizations. Unsanctioned services decrease user productivity and increase your risk of malware infections or the exposure of confidential data. New online services are introduced every day while existing services often undergo updates that break the solutions that were implemented previously to manage access to them.
The iboss platform includes many built-in controls that can be used to manage user access to the most common online services. Adaptive Access Control complements the built-in controls by adding the flexibility needed to address new online services or adapt to future changes to existing ones.
Let’s revisit the first scenario. The organization can easily add a rule that redirects attempts to access the dropbox.com domain to https://<your-org>.box.com (Figure 1). But what about the requirement to be able to add additional domains easily. Adaptive Access Control makes it easy to update the conditions for a rule without having to recreate it. In addition to adding other domains to be redirected, you have the flexibility to add different match types, such as a specific value from an HTTP Request header, port ranges, or even group policies.
Modular Design Allows For Finer Granularity When Defining Rules
In a perfect world, a rule only affects the items that you intend, and nothing more. If the logic for a rule is overly broad and inclusive, it may be applied to unintended targets.
Recall the hospital scenario that was introduced earlier. To allow the MRI machine to access the external server, you could just add a simple rule that automatically bypasses proxy authentication for all traffic going to the domain needed by the MRI machine (Figure 2). While this rule would allow the MRI machine to access the external domain, the logic is so broad that the bypass would also be applied to all other devices.
This rule can be improved by adding a second condition that restricts the rule’s scope to traffic that matches the IP address that is used by the MRI machine. To keep things simple, we’ll assume that this MRI machine either has its own public IP address or that it’s sharing an IP with a small group of similar devices.
Adaptive Access Control uses a modular approach that enables you to define conditions and rules independently of each other. The logical separation frees them to be associated with each other in many different combinations, maximizing the logical possibilities.
What Exactly Are Objects and Primitives, Anyway?
Now that we’ve covered the theory, it’s time to introduce primitives and objects, and learn how you can use them to build new rules.
Don’t worry if this terminology is unfamiliar. You’ll be an expert by the time you reach the end of this article.
A primitive is a match condition, or pattern, that is used to define when a rule is triggered (Figure 3). Primitives can be IP ranges, port ranges, domains, group policies, or even HTTP request header values. When a network event occurs that matches the condition that is defined by a primitive, the state of the primitive is set to a logical value of “true.” While it’s possible for a rule to use only a single primitive, many rules require more. That’s where objects come in.
An object is a container that groups multiple primitives into compound conditions (Figure 4). Within an object, you can set the logical dependencies between multiple primitives with logical operators, like AND, OR, and NOT. An object allows a set of multiple primitives to be evaluated as a single item, regardless of how many actual primitives or logical operators it contains.
After you create a new object or primitive, you can add it to a rule. A rule is triggered when the state of the primitive or object becomes “true.” You can specify the action that is performed when the rule is triggered. Possible actions include domain redirection, bypassing or enforcing proxy authentication, and even modifying HTTP Request headers.
Here’s a diagram that illustrates the relationships between rules, objects, and primitives, using the components from our hospital scenario (Figure 5).
Here’s a quick overview of how the Adaptive Access Control works.
Step 1: Locate Objects & Primitives
From the iboss Home page, click the Web Security tile. Notice that the sidebar contains an Objects & Primitives button (Figure 6).
Step 2: Add a Primitive
Before creating a new rule, you’ll need to add a primitive.
Click the Primitives tab, followed by the Add Primitive button. From the window that opens, you can define the match pattern. Notice that there is a long list of match types available. This gives you a comprehensive toolkit that you can use to match a variety of network patterns and conditions.
An individual primitive can only have one match pattern, but you easily can add more primitives if you need to match multiple conditions. Remember that you can reuse the same primitives and objects that are already being used by other rules.
Step 3: Create an Object
If the new rule requires more than one primitive, then you must also create a new object to contain them.
Click the Objects tab, followed by the Add Object button. From the window that opens, you can add primitives into the Condition fields. Use the Boolean Operator switch (Figure 7) to toggle between the AND and OR logical relationships between the primitives. The logical NOT operator is represented by the Negate switch (Figure 7).
After a new object is created, the Quick Glance tool (Figure 8) makes it easy for you to review the logical relationships between the conditions.
Step 4: Define a Rule
Unlike primitives and objects, rules are created in the Proxy Rules section.
To access the Proxy Rules, return to the iboss Home page and click the Web Security tile. From the new list of tiles, click the Proxy Rules tile (Figure 9), followed by the Add Proxy Rule button.
Summary of Key Features
Let’s review the key features of Adaptive Access Control.
- Design custom rules to manage user access to online services that are too new to have built-in Web Security controls
- Apply the modular system of objects and primitives to devise a variety of logical solutions to network challenges
- Easily define new match conditions by adding primitives
- Build compound conditions by grouping multiple primitives into objects
- Freely edit and reuse existing primitives and objects with multiple rules without having to recreate them | <urn:uuid:7d8dde78-539d-4963-9e88-e260d820f7f5> | CC-MAIN-2022-40 | https://www.iboss.com/blog/how-to-create-custom-rules-that-automatically-redirect-network-traffic-or-modify-http-request-headers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00189.warc.gz | en | 0.90033 | 1,750 | 2.65625 | 3 |
“We need to find a novel, engaging medium to get the attention of the next generation around energy. So, we thought of applying cutting edge technology to try to have that conversation,” says Nikhil Ravishankar, chief digital officer at Vector.
“Using a digital human is a very compelling method to deliver new information to people, and I have a lot of hope in this technology as a means to deliver cost-effective, rich, educational experiences into the future,” says Ravishankar.
Ravishankar talks about how his team worked with Soul Machines to create Will, the digital teacher.
Will is now being trialled at Vector’s ‘Be Sustainable with Energy’ schools programme.
The programme, launched in 2005, is offered free to schools within Vector’s Auckland electricity network. The programme has since educated more than 125,000 children about energy, says Vector.
Nikhil Ravishankar at the AWS Transformation Day in Auckland
Who would have thought I would have a writer for sitcoms and movies helping us with this project?
Will uses Soul Machines’ artificial nervous system – an autonomous animation platform that is modelled on the way the human brain and nervous system work – to bring his digital human face and persona to life in a very human-like way.
The digital teacher can interact with children from a desktop, tablet, or mobile, and helps them to learn about renewable energy such as geothermal, solar, and wind.
Will challenges children on their renewable energy knowledge by asking questions such as, “How tall is a wind turbine?”; “How long does sunlight take to reach the earth?”; and “When we burn biomass, what do you think is let off as the main by-product?”
Watch: Will meets with children for the first time
“Our work with Soul Machines is a very effective use case as an education tool for kids around renewable energy and creating a new energy future,” says Ravishankar.
Ravishankar says he was fascinated by the reaction of the children to Will.
“They were comparing it to bots. There were lots of comments that in a way, Will is better than Siri because they can have a chat.
“Will really captured their attention.”
Read next: Vector CDO Nikhil Ravishankar sees Auckland as ‘the petri dish for energy innovation’
‘AI is not black magic’
He says the team worked on the project for about six months.
“The whole point of this thing is to fail fast,” Ravishankar tells CIO New Zealand.
“We have got to apply completely new ways of working, to be able to trial and put stuff like this out.”
“It is not black magic,” he says. “Don’t expect [to get] this silver bullet technology.”
These things do not just work by themselves, he explains. “There is a lot of very specific approaches you need to take in terms of developing the scripts, in training the underlying AI.”
“It has got to be worked through in an experimental mode,” says Ravishankar.
“But by bringing your core teams into the project, everyone gets a taste for what it would look like when you deploy it into the mainstream.”
We learned early on you have got to be very clear about the script for the digital human, he says.
“We decided it is no different to writing a movie. We got a scriptwriter to help us write the scripts,” he adds.
“We talk about this new way of working, new skill sets, who would have thought I would have a writer for sitcoms and movies helping us with this project?”
The human model for Will, the digital teacher
Ravishankar says the digital human has potential use in other parts of the business.
“We are are really interested in seeing how we can use some of the technology to drive better health and safety training outcomes.”
Greg Cross, chief business officer at Soul Machines, says education is going to be one of the breakthrough applications for their technology as digital teachers have the potential to democratise the delivery of education to students everywhere.
“Creating one of the world’s first digital teachers has been one of the company’s most exciting assignments,” says Cross.
“The opportunity to see digital interactions with children in the classroom has been a fantastic part of this project with the next generation of Vector’s user,” he adds.
“Working with such an innovative company with a vision to create a new energy future, we’ve been able to not only demonstrate the power of digital humans in education, but also show how our technology can play an important role in helping companies reinvent themselves.”
Read next: Dr Elinor Swery of Soul Machines, on creating a digital human: ‘It is a complete process, like employing a new person in your company…You need to train them properly’ | <urn:uuid:9e875b6f-2927-4b0c-9875-34a01545af8a> | CC-MAIN-2022-40 | https://www.cio.com/article/203932/meet-will-the-digital-teacher-for-renewable-energy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00189.warc.gz | en | 0.965086 | 1,096 | 2.578125 | 3 |
Recently, a lot of press was given to Google’s Duplex Artificial Intelligence (AI) bot. Watch a video of the demonstration here. The Duplex bot calls businesses, such as a restaurant, on your behalf and makes a reservation. It does this by actually having a conversation with the human on the other end of the line. The voice is totally realistic and in test cases the humans receiving the call were not aware that they had been talking to an AI bot.
We’re going to save for another day the potential ethical issues raised by AI bots fooling people into believing they are talking to a person. Instead, let’s focus on whether or not Duplex has successfully passed the Turing test as some have suggested.
A QUICK REVIEW OF THE TURING TEST
The Turing test is a famous set of criteria that assess whether or not AI has become realistic enough to fool humans into thinking that they are really interacting with another person. You can find information on the Turing test here. The crux of the test is the ability for an AI process to seem human to a human.
By the precise letter of the rules, one could argue that the Turing test was passed. But I don’t believe that it did in the spirit of the rules. To me, the spirit of the Turing test is having an AI bot hold a typical, rambling, somewhat random conversation with a human and pulling it off. For all the success of Duplex in scheduling restaurant reservations, it would fail miserably in this more general test.
CONTEXT IS EVERYTHING
In the world of AI, context is everything. There are already many examples of AI processes that initially seem quite smart but that have some major issues. See here and here for two discussions on this topic. Bias can accidentally be built into AI models through skewed training data. An equally big issue is that AI is only accurate in the exact context in which it is trained.
I often use the example of two AI processes that are taught to identify two different scenarios from a photo. One is taught to determine if someone is just about to hit a tennis ball. The other is taught to determine if someone has just hit a tennis ball. When shown a picture of a person swinging a tennis racquet next to a ball, both models will with very high confidence claim to see what they are looking for.
However, as humans, we know that while the picture may be ambiguous, only one answer is possible. Either the person just hit the ball, OR they are just about to hit the ball. Both can’t be true simultaneously. Further, it may not be possible to tell from the picture which is correct. The AI bots don’t know this and don’t understand that context. So, they give answers that individually seem excellent, but that are clearly incorrect when taken together.
THE SHORTCOMING OF DUPLEX
This gets to the heart of why Duplex really hasn’t passed the Turing test in spirit. Yes, Duplex can successfully fool a restaurant hostess when making a reservation. However, the scope and context of that conversation is very, very limited. If the hostess asked an unexpected question or used some unusual slang, the bot would have no idea what to do.
The scope of a typical dinner reservation conversation is so small that you could almost pull it off with a range of more classic business logic. After all, you’re really just looking for a date and time in the request. Then, either confirming a reservation if the slot is available or offering back alternatives if it is not. Once the alternatives are offered, the person either accepts one or declines. In practice, a rules-based system could probably handle a conversation of this scope almost as well as an AI system.
In reality, today’s AI bots will likely be able to more rapidly, accurately, and completely learn to navigate the typical discussions held around making a reservation than rules-based systems. My point is not to suggest that we should stop making progress with AI chat bots, but simply to point out that the success of Duplex isn’t as amazing as it might at first seem once the narrow scope is taken into account.
PASSING THE TURING TEST WITHIN A GENERAL CONTEXT
In order for Duplex or similar bots to pass the Turing test in spirit, they’ll need to handle much more than a discussion that fits perfectly within a tightly predefined and expected context. The bots will also need to handle any random thoughts and requests that a person might state without locking up or giving nonsense responses. Duplex’s feat, while impressive, is successful within a very narrow context.
An important point to take away here is that as AI evolves and we read or hear about impressive new achievements, we must be sure to consider the context in which the models were built and tested. It is far easier to spoof people in a narrow context than it is to have a truly free flowing and spontaneous discussion. One day we’ll likely achieve the latter, but today we’ve only achieved the former. | <urn:uuid:8c8a4d39-d6d0-4374-9806-aefdf9a7ac98> | CC-MAIN-2022-40 | https://iianalytics.com/community/blog/why-google-duplex-did-not-pass-the-turing-test | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00189.warc.gz | en | 0.962889 | 1,053 | 2.9375 | 3 |
The focus of a U.S. Senate hearing Monday afternoon is the potential danger of cellphone use — specifically, the risk of brain cancer. That link was suggested as long ago as last decade, when cellphones were slightly smaller than a shoebox and just beginning to become part of the everyday landscape.
As cellphone use became ubiquitous and researchers repeatedly discounted health concerns, the perception of risk faded among the general public.
Now, that may be changing. The Senate review was prompted in part by a new report from the Environmental Working Group, a Washington, D.C.-based nonprofit that recently evaluated the top cellphones and smartphones, measuring the levels of watts per kilogram.
The results include a list of top 10 “good” and “bad” cellphones and smartphones, along with tips for safeusage and levels for all currently available and legacy models. Pennsylvania Senator and cancer survivor Arlen Specter is chairing the hearing in the Subcommittee on Labor, Health and Human Services, and Education and Related Agencies, which began at 2 p.m. ET.
A representative from EWG is among those scheduled to testify.
EWG is not the only group to be concerned about the link between cancer and cellphone use, cancer researcher Devra Davis told TechNewsWorld. Davis is also testifying at the hearing.
“There are good strong reasons for concern, which is why Finland, Sweden, Denmark, Norway, Iceland, Israel, India, and some cities in Austria and Brazil have issued warnings for all users,” she said.
“We do not have ample evidence that cellphones are safe and urgently need research,” she said. “Any assertions that cellphones are safe are misleading. We are especially concerned about our young people and must protect the brains of children which are still developing.”
There have been a number of studies — mainly conducted overseas — that do, in fact, show a link between cancer and cellphone use. A joint study by researchers in Denmark, Finland, Norway, Sweden and the United Kingdom, for example, found that people who had used cellphones for more than 10 years had a significantly increased risk of developing glioma, a usually malignant brain tumor, on theside of the head they had favored for cellphone conversations.
In a study of 420,095 Danish adults, it was found that long-term cellphone users were 10 to 20 percent more likely to be hospitalized for migraines and vertigo than people who had taken up cellphones morerecently.
The EWG study differs in that it focuses on particular devices and offers some hope that they can still be used, despite the apparent long-term risk.
The group found several phones that emitted less radiation than others. These are the phones, obviously, they recommend.
In fact, EWG was surprised at the wide range of values, Nneka Leiba, a researcher with the group, told TechNewsWorld. A number of phones came very close to 1.6 W/kg — the standard established by the FCC in 1993.
“We saw some phones emit eight times more radiation than other phones,” said Leiba.
The Samsung Impression (SGH-a877) offered by AT&T is the safest cellphone on the market, while the worst is the Motorola MOTO VU204 offered by Verizon Wireless, according to the report.
Lower Signal, Higher Risk
EWG is not expecting consumers to abandon their cellphones, Leiba said. The organization is hoping for more Congressional scrutiny of the standard — as well as some publicity in promoting safer cellphone use. The first step towards the later is to use a low radiation phone, a complete list of which can be found in the EWG’s report.
EWG also wants to focus more study on the impact of cellphone use on children and teenagers. Children’s skulls are softer and thinner and thus more vulnerable to radiation, Leiba noted.
Other advice includes urging people to text in favor of making voice calls; to invest in a headset, which emits far less radiation; and to avoid using the phone when signals are low.
“That is when the phone is emitting the most radiation,” Leiba said. | <urn:uuid:4756a0ce-80b7-4ad9-a288-e7990dd55ac1> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/do-you-know-how-much-radiation-your-cellphone-emits-68116.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00189.warc.gz | en | 0.965962 | 880 | 2.75 | 3 |
Postgre SQL is a relational database management system that allows the storage of large types of data in form of tuples and relationships. As is obvious from the name, it uses SQL for all types of queries. Users can easily evolve their databases without the need for manual recompilation. Created by Micheal Stonebracker 1986,Postgre SQL has been awarded as the best database of the year many times.
Why Postgre SQL
- Since it is an open-source database, it allows its users complete freedom to evolve, update or edit the information as they will.
- It is also very affordable because it has no licensing fee or any other hidden charges – it is totally free!
- Security features include TDE and Data Masking
- Scalability is a feature provided by Postgre SQL. Hence, the users can easily scale up or down in the software as per their business requirements without any limits.
- It also supports replication methods such as streaming replication and cascading
- Its protocol for communication between the front and backend is message-based. Also, TCP/IP and Unix Domain sockets support it. | <urn:uuid:c5e0a8d0-1f10-424d-b22f-dd9afff938ea> | CC-MAIN-2022-40 | https://data443.com/data_security/supported-platforms-postgre-sql/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00189.warc.gz | en | 0.948207 | 227 | 2.578125 | 3 |
Redis is an open-source, networked, in-memory database that is used as a database, cache, and message broker. It is used as a database for caching and storing data, to boost the performance of interactive applications such as real-time analytics or gaming. It helps to process messages in a background processing system more efficiently. Also, it is a key-value store that is more flexible than traditional databases because the data structure is simpler and keys can contain more complex data types.
- Simple database that stores data in a key-value pair with a built-in set of atomic operations on the values.
- Few data structures, such as strings, hashes, lists, sets, and sorted sets.
- Supports different kinds of value types, including strings, hashes, lists, sets, and sorted sets.
- Non-relational database as it doesn’t support joins or relations.
- It is an in-memory database, which means it speeds up operations and reduces latency.
- Supports master-slave replication and persistence through an append-only file.
- In-memory databases like Redis are useful for speeding up transactional systems, caching, and queueing. | <urn:uuid:cd5c4a39-4f19-4e81-aef7-e23bb45aad41> | CC-MAIN-2022-40 | https://data443.com/data_security/supported-platforms-redis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00189.warc.gz | en | 0.91 | 251 | 2.984375 | 3 |
Although it may seem like IT security and cyber security can be used interchangeably, both terms refer to different things. In this article, we will take a closer look at what makes them different. You might have noticed that ‘cyber security’ and ‘IT security’ terms are often used as synonyms. Yet both terms refer to different things, and this slight difference in their meaning might lead to confusion. We aim to discuss how and why they differ in detail.
The similarities between cyber security and IT security
Let’s start with discussing the similarities between two terms that might lead to confusion. Both terms refer to practices that aim to provide the security and protection of computer systems from malicious acts and data breaches.
Moreover, cyber security and IT security are very closely related practices. They involve similar and complementary processes, but the distinction between them must be made in order to achieve a proper and successful application. Both cyber security and IT security have a component that deals with the physical security of the information. Keeping the door of server room locked to giving authorization to specific personnel, both practices employ various measures to keep the information safe.
Moreover, IT security and cyber security asses the value of the data they aim to protect. In other words, both practices try to focus on the most important information. For instance, let’s assume that you run a bank. The list of the names of your customers is a valuable data, but ID numbers, PIN codes or addresses of these customers are in a sense more important to protect. In both IT security and cyber security, most precautions are taken in order to protect most sensitive data the best.
IT security, also known as information security or InfoSec practically refers to data security. Essential concerns of IT security can be summed up in CIA triad: confidentiality, integrity and availability of the data. In other words, IT security aims to keep an organization’s data safe and reliable. As the broad definition implies, IT security covers a vast area including cyber security. Which means, it is possible to be an IT expert without specializing in cyber security.
Similar to IT security, cyber security aims to keep information safe but it especially focuses on the data in digital form: mobile devices, tablets, computers, work stations, servers, networks and such. The purpose of all cyber security practices is keeping electronic data from unauthorized access. In order to do so, cyber security professionals opt for various protocols and methods including the identification of sensitive and/or valuable data, alleviation of vulnerable spots in the security façade of an organization, assessment of risks and much more.
As we mentioned above, both IT security and cyber security aim to protect information. IT security refers to a broader area. It focuses on protecting important data from any kind of threat. Moreover, it deals with both digital information and analog information. Whereas cyber security focuses on digital information but also, it deals with other things as well: Cyber crimes, cyber attacks, cyber frauds, law enforcement and such. It might be beneficial for you and for your organization to check SIEM and SOAR solutions of Logsign in order to protect your organization. | <urn:uuid:4e65de18-0964-4f32-9cc4-e7cb587b1755> | CC-MAIN-2022-40 | https://www.logsign.com/blog/it-security-vs-cyber-security-what-is-the-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00189.warc.gz | en | 0.94811 | 641 | 3.015625 | 3 |
A research group has developed an augmented reality system. To create 3D simulations of the desired results of facial reconstructive procedures. Project them over the patient’s face during plastic surgery,
Koichi Ueda, MD, PhD, Daisuke Mitsuno, MD of Osaka Medical College, Japan, explains development and experience with an augmented reality (AR) system. Improvements of the body surface consideration in plastic surgery. They suggest augmented reality could be useful guide to planning, performing and evaluating the results of facial reconstruction and other procedures.
Augmented Reality for Plastic Surgery
Augmented reality is a technology that combines computer-generated images on a screen with a real object or scene. To develop a sophisticated yet simple and modifiable AR technique for use during plastic and reconstructive surgery.
The researchers used digital camera to capture 3D image of the facial surface. On the underlying facial bones performed tomography scans to obtain digital information for each patient. Meanwhile, these digital data manipulated to create 3D simulations for final results. However, the reconstruction simulated by obtaining and reversing an image of the opposite, uninjured bone.
The surgeon use smart glasses to superimpose the 3D digital simulation image. For the desired over the patient’s face during surgery. The researchers used open source software to solve various technical problems. Including manipulating and displaying the 3D simulations with the surgical field.
The experience with AR system with eight patients undergoing reconstructive facial surgery. Further, the AR system helped in planning and confirming reconstruction for underlying facial bones. In all patient cases the 3D simulation of the body surface provided a visual reference of the final facial appearance.
Moreover, the research team improved some additional technical clarifications to address limitations of the AR system. The improvised system not to guide surgery in this initial experience, it helped in visualizing the planned correction and confirming the final outcome.
In addition, the group plans further studies for benefits of using the AR system during plastic and reconstructive surgery and to refine the display method for comparing before and after images.
Although, Future include the ability to evaluate the improvements in body surface and even to enable navigation of internal organs. | <urn:uuid:acede4ef-4bae-4267-a678-07205e2b4f33> | CC-MAIN-2022-40 | https://areflect.com/2017/08/26/augmented-reality-to-accompany-plastic-and-reconstructive-surgery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00389.warc.gz | en | 0.900716 | 437 | 2.875 | 3 |
Health Benefits with Brisk Walking
People finding ways to keep health better by performing brisk walking, jogging, fitness activities in morning at parks and gyms. But the researchers have found brisk walking easy fitness technique to stay fit.
Brisk walking exercise as common intensity workout with benefits decreasing health risks. Walking and running both help reduce your risk of high blood pressure, high cholesterol and diabetes. The cardiovascular workouts walking and jogging possess greater benefits as people daily performing cardio workouts tend to live longer and have lower risks for coronary artery disease, cancer and diabetes.
Brisk walk depends on the individual, you must move at least 3 mph speed. At that speed you will notice some changes in body like developing warmth and heart rate gradually increases. A brisk walking pace is 3.0 miles per hour or about 20 minutes per mile.
According to the Centers for Disease Control and Prevention about 5 kilometers per hour or 12 minutes per kilometer. An average easy walking pace would be more than 20 minutes per mile, while a fast pace would be less than 15 minutes per mile.
You can use a Walking speedometers and apps for measuring the time it takes you to walk a mile or a kilometer. Furthermore, the best way to measure is to take a heart rate reading and check the target heart rate chart to see whether you are in a moderate intensity zone for your age.
Know the technique
Firstly, pull your navel towards your spine so that your core muscles are working.
Focus your eyes straight ahead and relax your shoulders.
Bend your elbows at a 90 degree angle and lift your hands lightly
Take a step forward with your right foot and move your left arm forward in opposition and vice versa.
Benefits in brisk walking
Increasing Oxygen Supply to the Body
It helps in increasing the level of oxygen supply to the body. People having issues in breathing must go for brisk walking which would be a great exercise that increases the oxygen flow in to your system.
Improving Body posture/Spine
Walking maintains flexibility and posture, the spinal structures with better circulation, pumping nutrients and removes toxins. Moreover, improving balance and reduces fall especially in elderly people or seniors.
Brisk walking Keeps away from stress and improve the brain health. It helps in improving memory in seniors. Also improves the academic performance in students.
However, 30 minutes of daily brisk walk increases the Killer T-cells and other markers of immune functions in the body. | <urn:uuid:5fe43120-c47d-456b-bdeb-25ba38db3d38> | CC-MAIN-2022-40 | https://areflect.com/2017/12/08/health-benefits-with-brisk-walking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00389.warc.gz | en | 0.907759 | 508 | 2.78125 | 3 |
Though unequivocally delicious, one thing chocolate cake isn’t considered is healthy, right? Actually, depending on your gut’s microbiome, it may be healthier than fresh fruit according to Research. Researchers from Israel’s Weizmann Institute of Science studied the effect of different foods on 800 people to learn more about the body’s glycemic response mechanisms.
Raw data including demographics, height, weight, BMI index, sleep and exercise patterns, as well as blood and gut microbiome was collected for the study. In all, over 50,000 meals were assessed, and over 1.5 million glucose measurements were collected. This project—with its massive datasets—was made possible by high-performance computing (HPC), a form of supercomputing that is able to digest staggering workloads and provide lucid results. For this study, researchers concluded that people respond differently to various foods so that indeed, for some people, cheesecake may be healthier than a grapefruit!
What unites these independent Research and studies is their shared requirement to process Big data. The potential of HPC brings a tsunami like innovation to the Healthcare Industry and the opportunity to Transform Healthcare is clear, but for your HPC infrastructure to run reliably and smoothly, it must be built on storage that can keep up with very high compute speeds. If storage is slow or unavailable, research will slow down, stop altogether, or—worst of all—data could be lost, destroying years of work.
Even if the HPC began to boom well before cloud computing with the development of huge supercomputers starting in the ’90s, the rise of cloud computing greatly democratized HPC. For example, a researcher studying a new material does not need to send a proposal to a supercomputer facility to calculate the electronic properties of its highly correlated material, he can just fire up 200 spot instances on his AWS account, run the workload, download the results, and finally switch off the cluster. All this for just a few bucks.
Going away with the lengthy and frustrating proposal review process needed to access a supercomputer facility, the scientist can obtain the same results in a fraction of the time with minimal effort, and act more quickly on the results: should a new experiment find out a slightly different composition with better properties than the original sample, the researcher can simply rerun the analysis overnight without any additional overhead.
Having the compute power you need, exactly when you need it, and pay only for what you use has a huge value in these situations. High Performance systems provide a fast, reliable, cost-effective storage solution that helps keep your healthcare operations running as fast as your data scientists’ projects evolve. You can seamlessly scale systems to accommodate a vast amount of data, using a granular, building-block approach to growth. You can auto-scale with a pay-as-you-go strategy, adding one or more storage drives at a time. Along with speed, reliability, and elastic scaling, the system helps reduce your Total cost of ownership because it helps bring down infrastructure costs.
It is a bit more like a collaboration not a competition and the particularity in the health system and medicine, is that we have all these new forms of data, genomics, their social information, and we need these tools to help us synthesize the data and the information that the doctor can use or can use as a patient. So, instead of being afraid of this technology, it should empower us. We must see this as a tool and technologies of power, but taking care of who owns the data (what we share, how or when a smartwatch is used) and how they are sharing it, but these technologies can give a much better vision of personal health, and even helping doctors and nurses make smarter and more personalized diagnoses and therapies. | <urn:uuid:c8d88ce9-2281-4379-9796-070099d64828> | CC-MAIN-2022-40 | https://enterpriseviewpoint.com/why-high-performance-computing-matters-to-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00389.warc.gz | en | 0.950211 | 773 | 2.890625 | 3 |
Anyone Can Look Inside!Working with OSS today can be a great advantage
A few days ago, we had mentioned the recent increase in the use of Open Source Software (OSS) by development teams to support or shape their applications. As shared by Oram & Bhorat, "every business and government involved with digital transformation or with building services in the cloud is consuming open source software because it’s good for business and for their mission."
Open source projects have been launched or used and maintained by large and well-known business groups such as Google, Facebook, Amazon, Huawei, and MasterCard, to mention a few. The production of many of these companies has accelerated thanks to the use of the open source. Open source products like Firefox, Linux, and WordPress are now quite successful in the consumer area.
But what exactly is OSS?
OSS definition and origin
When we talk about 'open source,' we mean the free availability of the software involved. We assume that anyone can freely access, copy, share, and modify the OSS, including every line of its source code. The source code, which constitutes the software and to which many users do not pay attention, is what programmers or developers manipulate to generate changes in the way the software works. Unlike OSS, we have 'proprietary software' or 'closed source software,' which doesn’t have its full source code available to the public. Its modification or redistribution may be prohibited or highly restricted.
The free software movement began in the eighties through Richard Stallman, a programmer at MIT. It emerged as a response to the limitations (for example, in cooperation) generated by proprietary software. As shared in Statskontoret, Stallman took the cooking as an example and raised questions like: "How would we experience the world around us if recipes were not freely available or free to change and modify?" The fact is that some have been interested in just sharing their final dishes and keeping their recipes secret.
Now it’s important to get something straight, based on a post by Kelly & Van De Mark: 'free and open source software' doesn’t necessarily mean that the software is free of price. This is an allusion to the freedom of use of the software and the code. However, "most free open source software is indeed free in price." Therefore, it is often requested that the copyright be attributed to the creator(s) and that OSS quality be preserved when distributed.
In fact, many organizations have achieved success selling consulting, support, and training services related to the use of their OSS. Some companies have chosen to sell exemptions to the terms of their licenses. In other cases, some products recommend other complementary products or services. Other times, OSS creators rely only on donations.
OSS licenses and security
On the legal side, licenses are not only provided for proprietary software; they also apply to OSS. The rights to examine, copy, alter, and distribute are stipulated in those licenses. Licenses determine the ways in which individuals inspect, modify, and distribute software and its code. For example, 'copyleft' licenses stipulate that if someone releases a modified open source program, he must also deliver the corresponding source code. That is, the terms of distribution or other requirements for all copies or alterations of all versions must be maintained. On the other hand, as Moffatt says: "If a program is free but not copylefted, some copies or modified versions may not be free at all."
On the security side, we must be clear that vulnerabilities and gaps are present in both open and closed source software. According to a post by Maryna & Vlad, regarding security, many times when we pay for software, we are left only with the confidence in the seller. However, when we have many eyes watching an OSS and the code, flaws, bugs, errors, or omissions in the program can be more easily detected and quickly fixed. Furthermore, the security of some companies using open source, without teams of cybersecurity experts or at least support communities, may be at risk from malicious hackers who can take advantage of it.
OSS in business strategies
Many organizations, regardless of their size and field of action, are recommended to include OSS in their strategies. For a company using OSS, an online community discussing its software can be a huge benefit. As Bromhead pointed out: "The output tends to be extremely robust, tried, and tested code." Besides, organizations that make good use of open source will be able to discover work dynamics that are more oriented towards collaboration, creativity, and innovation.
Collaboration is helping each other to move a project forward. Blogs and forums serve as a means to share and exchange knowledge or ideas. In addition to the fact that codes and products are continually being reviewed and optimized, each participant and collaborator can also receive constructive criticism and feedback. Customizations made by sometimes thousands of hands on the job can lead to improvements, either by adding features or repairing others. Also, new apps can be developed that are more efficient, more complex, and better suited to specific needs and preferences. Additionally, the practices and results of top coders in one field or another become, through open source, valuable sources of learning and skill enhancement for new or experienced developers.
As far as creativity and innovation are concerned: we must not reinvent what others have already invented. What already exists is used to give rise to the new. As Whitehurst suggests, "innovation occurs only when people feel a certain freedom to manipulate, experiment, and tinker." Creative youth can be easily attracted through a business model that is separate from the traditional. "This culture is strikingly different from the secretive, hierarchical, management-driven cultures of most companies today" (Oram & Bhorat). Great talents can then become part of a company they worked with on an OSS and contribute to other projects.
The accessibility of the code is a valuable strategy to avoid giving the impression that something is being hidden. Then the concept of transparency appears, being a fundamental principle in open source. The one that Bryant Son, from Red Hat, recently pointed out as "critical to making any project successful." Transparency involves sharing as much information as possible with the members of the company and all users and collaborators. Being able to report problems, errors, and methods of solution is a primary characteristic of a transparent community, united in favor of security and technological and social advancement.
Following DB Hurley, transparency is not just about allowing access to code, products, and services. "It is a commitment to total clarity, in business practice, structure, finance, and design." A transparent company does not give the impression that there is some mystery behind what the code is doing. It achieves greater trust from customers and communities, exemplifying an immaculate business practice.
The code created and used by
Fluid Attacks is public, and we believe
that strengthens us and reflects transparency. In the words of Rafael
Alvarez, our CTO: "Public repositories are a bet on transparency that
all actors should make in the long term." Take a look at our
repositories following this link:
Furthermore, we help our clients write secure code throughout the entire software development lifecycle, so that it may remain in the open, continuously secured against cyberattacks.
Do you have any questions or ideas to share? Don’t forget to contact us!
Ready to try Continuous Hacking?
Discover the benefits of our comprehensive Continuous Hacking solution, which hundreds of organizations are already enjoying. | <urn:uuid:9a9b08af-9906-489a-ad33-71377362fe41> | CC-MAIN-2022-40 | https://fluidattacks.com/blog/look-inside-oss/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00389.warc.gz | en | 0.948853 | 1,564 | 2.765625 | 3 |
We don't always realise it, but every day, dozens of cameras can capture our actions. Globally there are 770m surveillance cameras in use, according to a recent analysis of the world's most surveilled cities. And by the end of 2021, over 1 billion surveillance cameras will be installed globally.
So, the video surveillance camera is a tool that is already widely deployed but we are currently experiencing a pivotal period of technological transformation in the security market. The democratization of Artificial Intelligence and the availability of metadata from image analysis are reshuffling the cards in terms of video surveillance. These developments are as stimulating technologically as they are ethically: on both these fronts, Europe is ahead of the game.
The evolution of CCTV cameras
From military use to the democratization of domestic use, to the connected video surveillance we know today, the market is not new and has already experienced several small revolutions.
Initially, it was only a question of positioning a camera to record a defined space. One of the first major changes was the arrival of the IP camera - or network camera - which, connected via an Ethernet network, went from a simple sensor to a fully-fledged computer tool. It was at this time, in the 2000’s when the video protection market was booming, that the first artificial intelligence algorithms were developed and deployed on the systems, making it possible to analyse video flows and giving birth to the image analysis industry via security cameras. Thanks to the ability to analyse volumes of moving pixels in the image, these systems can, for example, detect an abnormal movement in the video, understand that an individual is present and report the information. The only downside is the accuracy of this analysis is conditioned by the environment in which the system is located. For example, heavy rain or strong winds can make detection difficult by raising false alarms or even missing an intrusion altogether. Another obstacle to these technologies is the cost of the infrastructure: in order to analyse videos and provide a relevant result, it is necessary to deploy computer servers with a large computing capacity. And these are not the only complaints that have been noted over the years. Security cameras are often criticised because they raise another operational problem: all these collected videos have to be processed. This requires a substantial human resource, especially in urban security centers.
But this does not take into account the recent evolution of video surveillance systems, which, boosted by advances in artificial intelligence, are now increasingly efficient and autonomous.
Deep Learning and metadata: the leap forward in video surveillance
It's a reality that the more precise the algorithms, the more proactive the cameras equipped with image analysis become in feeding information back to the user of a video surveillance system. The new generation of Artificial Intelligence today allows us to go further and make more precise pre-analyses of situations. In the case of a city, for example, the new algorithms will be able to draw the operators' attention to crowds or abnormal crowd movements. How is this possible? Quite simply by using a new method for designing these image analysis algorithms called "Deep Learning" that allows large volumes of data to be used to "train" Artificial Intelligence. When we "train" Artificial Intelligence systems with all this data, they are able to recognise different shapes: a man, a woman, a cat, but also to recognise colours, a man wearing a red T-shirt and a moustache, a child on a scooter, a particular face. All this is made possible by Deep Learning algorithms, capable of processing an exponential amount of data. Added to this are advances in terms of computing power and the miniaturisation of computer processors, which are now integrated directly into security cameras, making the cost of deploying these technologies much more accessible to end users.
Today's video surveillance systems have therefore become experts in pattern detection and recognition. Based on what they have detected, they create a meta-database that is self-powered and allows for an ever finer analysis of the images. This is the basis of biometric and facial recognition tools. Scary for the protection of personal data? Not really, because the regulatory framework in Europe strictly protects individual liberties (GDRP). From a security perspective, the aim of metadata is not to identify citizens (which is not allowed in Europe anyway), but to optimise the use of video protection, and make it proactive and predictive. Connected in-car cameras to avoid accidents, smart cities optimizing flows and limiting incidents, cameras that detect the micro-expressions of an individual having a stroke... the perspectives are unlimited and some are already a reality. From their conception, today's security systems are designed to go beyond a protective logic. By 2024, it is estimated that 30% of cameras sold on the market will be capable of embedding deep learning. Artificial intelligence-based image analysis is booming. It is now only a matter of time: the time needed to deploy infrastructures and to create the AI applications to positively change our society for the better. | <urn:uuid:bceba8b0-e8cc-4ad9-b32d-fb8fc31fdc4f> | CC-MAIN-2022-40 | https://i-pro.com/global/en/surveillance/news/video-surveillance-infinite-possibilities-metadata | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00389.warc.gz | en | 0.941031 | 1,004 | 2.6875 | 3 |
Ransomware is a type of malware that encrypts a victim’s data and then displays a ransom message to the victim demanding payment in exchange for the decryption of that data. Attackers use ransomware to extort their victims and demand payment in exchange for their data.
Victims are presented with a choice, either pay the ransom or lose their data entirely. Also, there is absolutely no guarantee that hackers will follow through and decrypt the data once they receive payment. This leaves victims in a precarious position.
The best defense against ransomware attacks is to perform regular backups. With proper backups, it won’t matter if a hacker encrypts the data, because an exact copy of that data can be restored from the backups. Backups mitigate the loss of data to the most recent backup.
Of course, there will still be a cost to recover the data. The backups will take work hours to restore and some data is bound to be lost since the time of the last backup. With proper backups, you can treat a ransomware as just another disruption, rather than a catastrophic loss of data.
Crypto malware is a form of malware that uses a target’s CPU resources to mine for crypto currency. Crypto currency mining is when a device is used to process cryptocurrency transactions. The miner is rewarded for this with a portion of the mined cryptocurrency. This is what allows cryptocurrency to function without centralized control.
Hackers want their crypto malware to operate for as long as possible to maximize the mining time. So, crypto malware tries to avoid detection from antivirus and antimalware programs. Crypto malware will incorporate masking techniques, like code rewriting, within its code.
This video is part of the CompTIA Security+ Course. This course will get you certified within one week so you can earn an $85,000 salary as a Security+ certified cyber security professional. CLICK HERE to get started. | <urn:uuid:fae288b0-f196-43a1-987b-5e47e905b2af> | CC-MAIN-2022-40 | https://cyberkrafttraining.com/2020/12/17/ransomware-and-crypto-malware-security-study-session-17-december-2020/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00389.warc.gz | en | 0.925663 | 387 | 2.828125 | 3 |
Share this post:
What do you think of when you hear blockchain? The term cryptocurrency leaps to mind for many people, but that definition fails to represent the breadth and depth of blockchain today. Over the past several years, practical blockchain use cases have emerged as organizations worldwide recognized blockchain as a tool to help solve real-world problems such as food waste, supply chain inefficiencies and fraud.
Blockchain provides a public ledger that is secure, distributed, traceable, and unalterable. There are many use cases across a wide variety of industries that would benefit from this type of ledger. Retailers, manufacturers and logistics companies we talk to are excited about the opportunities for process improvement that blockchain provides.
Listen to Maribel Lopez, Lopez Research, on Episode 20 of the Blockchain Pulse Podcast
Practical applications are in use today
Walmart and IBM lead the development of Food Trust, a digital system for tracking and tracing food between retailers and suppliers. Today many global companies such as Carrefours, Nestle and Dole are partners in the trust. Blockchain helps these companies create a tracing solution to quickly identify the source of any food problems while preventing large volumes of good food from being destroyed. Blockchain also combines with other technologies like mobile to provide consumers with essential information. For example, scanning a QR code on a package can access data from the blockchain that verifies if the produce is organic and where it was grown.
During THINK, The Home Depot described how blockchain allowed the company to identify where inventory is in the supply chain and if it’s time to pay its suppliers for goods and services. In luxury goods retail, brands such as LVMH and DeBeers use blockchain to help stop counterfeiting and trace supply chains from raw material to the retail shop.
The shipping and trading still heavily rely on manual and paper-based processes, which are very costly, slow and error prone. Blockchain allows a company to view the origin and route of components and products as they move throughout the supply chain. Maersk, CSX and Global Container Terminals Inc are just a few of the logistics companies participating in blockchain called the TradeLens platform. Supply chain partners (e.g., cargo owners, ocean and inland carriers, freight forwarders and logistics providers, ports etc.) use the blockchain to securely provide permission-based access for sharing millions of shipment events and documents, such as cargo details, trade documents, customs filings, and sensor readings.
Insurance companies can use blockchain to record contracts and claims, making it easier for insurers to eliminate invalid and duplicate claims. An example of blockchain in action in insurance is openIDL, a group of insurance companies and the American Association of Insurance Services that came together to automate insurance regulatory reporting and streamline compliance requirements.
Given that blockchain provides a way to securely and efficiently create a tamper-proof log of sensitive activity, Lopez Research expects to see more use of blockchain in areas such as digital identity, voting, patient health data.
Companies can overcome challenges with proper planning
Yet, blockchain isn’t a product that you pick up at the grocery store. Blockchain is a tool that enables a broader digital transformation, which means the most significant challenge with blockchain isn’t the technology. A successful blockchain implementation requires a set of companies to agree on a business process change.
These companies form a network and must decide on a common set of standards and the scope of the problem that they are trying to solve. Additionally, the group must nail down security and governance guidelines for the blockchain. For example, who has access to the ledger, and how is access controlled? However, none of these challenges are insurmountable, and the rewards of adding blockchain outweigh the challenges.
Simply tire-kicking blockchain leads to poor results. Companies with successful blockchain deployments focused on addressing a specific problem and defining what success metrics should be measured. Vendor selection also matters. Companies should look for vendors with industry expertise and experience with building enterprise systems. There’s a high likelihood that your company will work with a systems integrator to help define both the process and technical aspects of the blockchain.
Leaders have demonstrated the value — it’s time to act
Blockchain has evolved from a speculative currency model to a foundational technology that delivers real business value. Blockchain, when architected correctly, improves regulatory compliance, reduce paperwork and shrink costs.
Blockchains minimize friction and simplify the process of trade. Examples like those from The Home Depot, Maersk and Walmart highlight why blockchain continues to gain traction and acceptance in more industries.
I look forward to the future ahead.
From time to time, we invite industry thought leaders, academic experts and partners, to share their opinions and insights on current trends in blockchain to the Blockchain Pulse blog. While the opinions in these blog posts are their own, and do not necessarily reflect the views of IBM, this blog strives to welcome all points of view to the conversation.
How to get started with IBM Blockchain now | <urn:uuid:836ed564-3a33-4f48-a5e1-4f07eee9616f> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/blockchain/2020/06/from-lettuce-to-luxury-goods-blockchain-helps-industries-thrive/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00389.warc.gz | en | 0.936796 | 1,013 | 2.671875 | 3 |
Unlike buffer files or file sync, this approach does not work with files, but rather with specific sectors on a drive. Entire sectors are protected, so no changes stick to those physical areas of the drive, bringing a brand new computer experience with each restart.
What’s the difference between a block and a sector? A sector is a physical area on a disk. Think of it as a slice of a track on a formatted disk that can hold a fixed amount of data. The first ones used to hold 512 bytes, then 2048 —now the newer drives use a 4K sector (4096). A block is a group of sectors that an OS can address. A block can consist of one or many sectors.
This great ability to shuffle data on block level has a lot of benefits.
– No performance hit
Compared to file level based methods, there is no performance hit. Very little processor power is spent on providing this kind of protection.
– No application compatibility issues
Working well below file level takes away any issues with how, where and when any files get moved. Block level solutions don’t care about single files. As a matter of fact, they have no idea what kind of payload is being moved or protected at any given time.
– Set it and forget it
Once block level protection is activated, no further tweaks are necessary to improve its performance. Whatever configuration was frozen at the moment will prevail with every reboot. Machines can run for years without any updates. If a computer doesn’t boot up, it’s due to hardware failure. Simple is good.
– Instant imaging is possible!
The block level protection is often compared to instant imaging (too bad such technology doesn’t exist yet – we look forward to quantum computing though). A session can accumulate a lot of changes, and all of those changes can be reset with a simple press of a “restart” button or with a remote command. A fresh system boots up each and every time.
– CSIs will be happy
Think any changes on a computer are gone once it reboots? Not quite. The data is still there, just not where the OS thinks it is. Professional forensics tools that access the drive directly will be able to recover the data needed.
– You can patch and pull updates
Nothing sticks to a system protected at the block level, even updates and patches. After a reboot it will be like they never happened. To update/patch your systems just turn the protection off, patch your system, then freeze it again.
– Better performance later
Optimize your system, then apply protection. With no system lint your computers will operate at their peak since junk files and system settings will not survive a reboot. Even your 100% defragmentation settings will stay the same, further improving your performance.
– You can choose what to freeze
Block level solutions typically apply to a partition, so if you want to have some flexibility in what to freeze and what to leave alone, you can create multiple partitions. A good idea is to leave all your program and system files on one partition (C: on Windows) and have your user profiles setup on another. Freezing C: partition and leaving D: as is. This way you can completely lock your operating system, but leave user data intact. Emails and pictures will be saved, system files and installed apps will be reset.
There are a couple of downsides though:
– Be careful what you freeze
Block level protection freezes everything – even the bad stuff. If your computer is infected and antivirus cleans it, the infection will be brought back upon restart.
– Need to restart
You may not like it, but there’s no way to switch the protection status without rebooting a computer. If you need to change protection status on the fly, block level protection is not for you.
– What root?
Block level solutions don’t care about user privileges. Any changes, either by guest or full admin, won’t survive a reboot. Thaw systems before applying any patches or updates. | <urn:uuid:8fc82f6c-2db0-41dc-af36-7cc10a32f2c3> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/block-level-protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00389.warc.gz | en | 0.898387 | 842 | 2.921875 | 3 |
Almost everything in the present world has shifted to the online system, especially businesses and information-sharing. According to the Americans with Disabilities Act (ADA), everything available publicly should be used by everyone, including people with disabilities. The Act also includes publicly available websites and applications. In effect since 1990, the ADA requires businesses to accommodate the accessibility needs of their clients. In other words, companies are not allowed to discriminate against people with impairments by providing inaccessible services.
Because of ADA’s engagement, website and application design and development is now a matter of legality. The web content accessibility restriction is incredibly stringent for the government, educational institutions, financial institutions, and healthcare organizations’ websites. However, while it is not required for enterprises and information platforms to meet the accessibility requirements 100 percent, doing so improves the potential visitor or client spectrum and profitability. Read about the key benefits of making your website accessible.
What is ADA Compliance?
ADA, short for The Americans with Disabilities Act, is a 1990 Civil Rights Act that restrains the discrimination regarding people with any disabilities like:
- Neurological and many others
This policy ensures reserving the right of disabled people in:
- Public Areas
- Offices / Place of Employment
- Governmental services (State and local)
This Act stimulated the adoption of wheelchair access ramps, restroom facilities, and numerous other accessibility measures, which made the daily routine more comfortable for people with disabilities to a more significant extent. However, in 1990, the internet was in its early stages when this Act was instituted. Nobody, including the legislators, knew about the upcoming boon of eCommerce. Therefore, the Act does not precisely mention any clause related to web accessibility, and this has always been a debatable topic. With multiple amendments over time, the legal instructions lean more towards making the websites accessible for disabled people as they are meant for public use.
Making Websites accessible saves one’s business from any possible legal inquiries regarding accessibility rights and extends the company’s audience.
What are ADA Compliant Websites?
ADA compliant websites are the ones that meet these four principles:
This principle regulates that all website contents are quickly processed and perceived by all types of users, including audio, video, and text-based content. Adopting high contrasting colors in the design, alternative text on audio and video content, and certain HTML tags that make navigation throughout the website more accessible.
This principle enables the usage of the website by both mouse and keyboard. It is not enough if you can access several elements of your website with the help of a mouse only. Each functionality and feature must be accessible with the keyboard, too, because most people with motor neuron disease, to any extent, rely on keyboard commands. To check this, try navigating through your website with the “Tab” key and see if it works fine. If it does not, get in touch with a developer and make the necessary changes.
According to this principle, the website contents are made comprehensive for each one of the users. Using vibrant and high-contrast colors ensures that people with color blindness or any other visual impairment can comfortably navigate the website. Using multiple languages makes sure no language barrier hinders the communication between you and your brand. Multiple language translation also makes the language less technical and easy to read. Suppose the website’s language is not comprehensible by a person of average lower secondary level. In that case, it is recommended to rewrite the content to make the experience effortless for the reader.
This principle enhances your website’s compatibility with different browsers, devices, and assistive technologies. Robustness increases the usage ratio of the websites.
Website Compliant Checklist
W3C has several guidelines and instructions to create an accessibility-compliant website. The most important ones are discussed below:
- Web pages with screen reading compatibility
- Images with alternative tags/text
- Tables with alternative tags/text
- Automatic Scripting
- Accessible forms
- Style sheet independent webpage contents
- Text links to plugins
- Inclusive and high-contrast color scheme
- Harmless web designs
- Keyboard friendly browsing and control
How to Implement ADA Compliance in Websites?
There are numerous strategies to boost website accessibility for persons with disabilities, the elderly, rural areas or developing nations, people with slower internet connections, and others to create an all-inclusive experience.
High-contrast colors – Choose colors that give a crisp contrast to the content and background. This will increase reading comprehension.
Alternative text – People with visual impairments rely on captions, labels, and alternative texts to make sense of any video or graphic content. If the visuals in your website are as meaningful as the text, then this step is a compulsory one.
HTML tags – Different HTML tags are also dedicated to enhancing the accessibility of web pages. Such tags are beneficial for people with disabilities, and it is encouraged to incorporate them during the development phase.
Multi-lingual content – Incorporating multiple languages stimulates a broader ratio of web page visitors. It also helps people better comprehend the website content.
Clean and clear UI – There is nothing more effective than a clean UI
in a website where users can find everything easily. It is not feasible for people with disabilities to keep switching from one page to another to find something due to their limited controlling capabilities. Web pages should be made extremely easy to navigate.
Accessible forms – Methods like grouping similar information, identifying each form field, and reducing the number of required fields can help users traverse submission forms. Forms curated in this way guarantee that they submit accurate information. At the same time, the recipient receives the data they require to assist the client.
What is WCAG?
WCAG, short for Web Content Accessibility Guidelines, is the set of guidelines for an accessible webpage formed by the World Wide Web Consortium or W3C. WCAG helps keep a check on different aspects of your website and be sure that it is helpful for people with any disability. Numerous governmental and healthcare organization websites have to comply with these standards to ensure a maximum number of citizens can navigate these websites.
There have been several revised versions for WCAG. However, the latest one was released in June 2018, called WCAG 2.1. WCAG success criteria are based on three different compliance levels – A, AA, and AAA.
What are Different ADA Compliance Levels?
WCAG has introduced three levels for ADA compliance for websites, apps, and any other digital content. Those three levels of conformance are discussed below.
Level – A
To achieve level A standards, web designers must ensure that color is never used as the sole significant aspect of conveying information, CTA, or distinguishing between two visual features. Individuals with visual impairments or color blindness cannot differentiate between RGB colors. They can only see the content on grayscale.
However, Level A is the most basic level of web accessibility. If you keep level A as the end goal of your website accessibility, there is a high chance that you will fail to comply with ADA. The reason is that level A overlooks many limitations for people with temporary or permanent visual disabilities.
Level – AA
It further modifies Level A by adding contrast guidelines. In order to attain level AA, the goal must be to achieve Level A’s success criteria and meet the contrast ratio of a minimum of 4.5:1 for the foreground and background. This visual setting will make sure your webpage is ADA compliant for the font size of below 18. In addition to color, the text on the web page should be scalable up to 200 percent without losing content or functioning, without the requirement of any assistive technology. Similarly, text should be used whenever possible in the site’s visual presentation rather than text images (jpg, jpeg, or png).
This level is better compliant with ADA. It guarantees easier readability of the text if a person has any visual impairment or color blindness. This also conforms perfectly well in situations when the individual cannot read clearly due to situational limitations like in sunlight or forgotten/broken eyeglasses.
Level – AAA
Level AAA is the most challenging and the most successful to meet ADA requirements. You must not only satisfy the minimum standards for Level A and AA, but you must also enhance the contrast between the webpage contents even more. Recommended contrast ratio of text and text pictures is 7:1 at the least. Other features of the success criteria for Level AAA visual presentations are harder to achieve, requiring a trained design and development team to achieve these objectives.
The W3C states, “it is not suggested that Level AAA conformity be imposed as a general policy for complete sites because it is not practical to satisfy all Level AAA Success Criteria for some content.” This highlights that some needs, such as sign language interpretation for live web events, may be complicated to implement.
Other than that, most of the standard requirements for Level AAA can be achieved precisely. To make your website, application, or other digital content as accessible as possible, you should meet as many Level AAA requirements as possible. You will benefit from a more expansive range of people in various locations and environments. This will broaden the potential client ratio for your business and make your website visitors feel more welcomed.
What should be the goal?
It would be best to make all the required modifications before your organization is confronted with a website accessibility ADA lawsuit.
15% of the overall world’s population suffers from some permanent disability, making more than one billion. Around 2.2 billion people worldwide are struggling with near or distant visual impairment. Aiming for Level A will only be the bare minimum. Your website will still not be accessible enough for most of the 15% of the world’s population suffering from some disability.
If you want to meet most of the accessibility standards by WCAG and abide by the law of ADA perfectly, striving for level AAA is the way to go. Achieving 100% of the Level AAA standard might not be feasible and practical in some cases. Otherwise, the rest of the requirements can be met pretty quickly. The goal should be to incorporate as many Level AAA standard features into your website as possible.
Companies must do everything possible to make it easier for persons with disabilities, whether in a physical area (shopping stores, offices, restaurants, etc.) or online platforms. Online spaces include merging web compliance evaluation and compensating services with assistive technology to provide persons with impairments with an effortless experience.
WCAG guidelines ensure web content accessibility utterly easier to achieve with three levels – A, AA, and AAA. Each group of conformity has a more excellent quality of accessibility. WCAG 2.0 was created with three levels to allow greater flexibility in various scenarios.
It is up to you which level of development you invest in and what degree of accessibility you strive for, based on the nature of your business. However, suppose your goal is to achieve the highest degree of accessibility. In that case, Level AAA guidelines are calculated to produce the desired outcomes for you and your business. | <urn:uuid:b286842b-e598-427f-bf44-f7a53ad0cff6> | CC-MAIN-2022-40 | https://www.liventus.com/ada-compliance-levels-for-accessibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00389.warc.gz | en | 0.926767 | 2,312 | 3.40625 | 3 |
Mainframe Data in a Non-Mainframe World
Modernisation, by its very nature, implies taking something old and applying new ideas to improve it. When we talk about modernisation in a computing context, we need to think about how modern technology can play nicely with data coming from older platforms.
Data Archaeology has been described as “the art and science of recovering computer data encoded and/or encrypted in now obsolete media or formats.” Of course, the mainframe is not obsolete, but in a Windows or Linux-centric world, mainframe data formats are best described as esoteric.
These data formats have evolved over time, along with the machinery they were meant to drive. But in many cases, there are no true analogues to those technologies in the open systems world.
For example, consider the Job Control Language (JCL) that you edit while logged into a mainframe Time Sharing Option (TSO) session. You are likely running the Interactive System Productivity Facility (ISPF) product on a software emulator of a 3270-type terminal. This modern scenario represents an evolution of the original mainframe environment.
Set your Wayback machines…
All of the tasks described above were previously done via a physical typewriter, attached to a card punch machine that knocked holes in a paper card.
Each line of a JCL “deck” in ISPF used to be a physical paper card. By punching holes in the card with a card punch, you would be editing a line of JCL. The cards would then be sorted into the correct order and placed on a card reader, which would read the deck of cards optically and convert them into bits in memory. This reader would then submit the JCL to the Job Entry Subsystem (JES) for execution.
The output from these jobs would then normally be printed on impact printers. Since memory and disk were extremely scarce and expensive, all efforts were taken to condense the data being printed. For example, eliminating white space wherever possible was a common method to compress report data. In order to control line spacing, therefore, a physical piece of tape called the carriage control tape was inserted into the printer. The printer would read the first character of each line and reference the tape to determine how many, if any, lines to skip before or after printing a line of data.
As the modern version of 3270-type terminal technology, ISPF, and laser printing came into being, the original technology adapted. The carriage control evolved to become the Forms Control Buffer (FCB), which replaced a physical tape with a file that described how to space lines. This allowed laser printers to support code written to print on older impact devices.
New software technologies like Advanced Function Presentation (AFP) were introduced by IBM to take advantage of these improvements and added the ability to manage graphics and fonts and create vastly more complex documents. These innovations laid the foundation for the modern graphics-rich account statements you receive in the mail from your bank or phone company to this day.
Modernisation and printing
When we think of modernisation today in an IT framework, we think of converting older COBOL and PL/I code to run on Windows or Linux machines. In the vast majority of cases, those programs were designed to generate output meant to be processed in a mainframe environment where carriage control and FCBs matter. The data would also have a record format meaning that each report’s records could be of fixed or variable length.
The output may be designed with an AFP PAGEDEF/FORMDEF in mind to format the data into a graphically-rich AFP data stream. Or the program may insert Xerox Dynamic Job Descriptor Entry (DJDE) statements meant to be interpreted in order to load FORMs and perform other specific functions at the print device.
For your modernisation plans to succeed, you need to ensure that this type of data is handled correctly going forward and LRS can help in this regard.
With software such as the VPSX/MFI (Micro Focus Interface) solution, LRS can process the data coming out of your Micro Focus environment. This ensures that the mainframe-formatted data is correctly handled when sent to modern networked printers via the VPSX solution.
Similarly, LRS Transforms can convert your AFP or LCDS/DJDE data into PDF for online viewing in a PageCenterX browser or a modern page description language such as Printer Command Language (PCL) or PostScript for delivery to modern printers.
With LRS, your modernised data can play nicely with the Windows or Linux platforms that power your business today and in the future. We have the products to simplify the modernisation process, and — just as importantly — a deep understanding of both modern and legacy technologies. | <urn:uuid:de8f2214-dcee-4333-bc13-bd612932b090> | CC-MAIN-2022-40 | https://www.lrsoutputmanagement.com/blog/post/mainframe-data-in-a-nonmainframe-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00389.warc.gz | en | 0.926583 | 984 | 2.828125 | 3 |
Analytics, big data, automation, and machine learning are all terms we use when talking about the future of cybersecurity. As the volume of security data increases, data science will become an important weapon to disrupt adversaries.
Too often, these terms are used as synonyms, but they refer to different parts of the domain of data science. To stay ahead of threats and predict vulnerabilities, we should all have a basic understanding of the fundamental security building block of data science.
What is data science?
Data science is the confluence of math, statistics, hardware, software, and data management. Data scientists apply mathematical algorithms and models to solve problems—such as detecting an attack before it happens or stopping ransomware before it takes over a computer or network. Data management covers the processes of gathering data throughout software and hardware environments, as well as governance, policies, security, storage, and mathematical boundary conditions. Effective data management is as important as the algorithms themselves.
What is big data?
Big is the essential part of big data. Security tools can collect massive quantities of data, which are necessary to develop sustainable patterns of normal and anomalous behavior. The quantities are mind-boggling—data scientists often work in yottabytes (1024 bytes) of data.
What are analytics?
Analytics are the scientific process of transforming data into business insight. This involves mining big data to identify patterns, build models, test those models against real scenarios, and iterate through the process to improve the ultimate effectiveness. There are four basic types of analytics: descriptive (what happened?), diagnostic (why did it happen?), predictive (what will happen?), and prescriptive (this is what is recommended because that will happen).
What is automation?
Automation (as it pertains to machine learning) is simply the process of having computers execute analytic models. Automation can be applied to many parts of cybersecurity and data science by removing repetitive tasks, summarizing datasets that are larger than humans can handle, identifying patterns, and performing mitigation functions, among others.
What is machine learning?
Machine learning is the action of automating analytics to the point that the computer builds on and enhances the model over time, identifying new patterns and relationships to which it can apply rules and policies. When working at the predictive or prescriptive levels, the machine will calculate the expected future value of a particular variable.
What are some common myths?
Big data, analytics, and machine learning are very powerful, but they cannot solve every problem.
Some key myths of analytics:
- They can be done quickly.
- The results are always right.
- You don’t have to know any math or statistics.
Some key myths of machine learning:
- Human involvement is not required.
- You can just pick a model and apply it to your data.
- It is hack proof.
Analytics, big data, automation, and machine learning can be applied to a wide range of business challenges. For cybersecurity, the opportunity is to identify anomalous behavior and other indicators of attack sooner, and even predict future attacks based on context, learned patterns, and shared threat intelligence. Understanding the basics of data science is important to be able to effectively apply these tools to current and future business and security needs.
For the full crash course in security data science, analytics, and machine learning, download the McAfee Labs Threats Report: September 2016.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:57dbe71d-8ad2-4c27-8371-df90b4bcb6e7> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-labs-threats-report-offers-primer-security-data-science-analytics-big-data-machine-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00389.warc.gz | en | 0.920114 | 724 | 3.40625 | 3 |
There was a time when all a new development project required was a program compiler and an operating system, particularly when the operating system was a complete, integrated solution like OS/400. The language was RPG, the database was built-in, and there was a rich set of operating system functions available to support virtually any application need.
But the face of development is changing, and now you are told that in order to produce an application, you will need something called an application server. If you want to use Java as a development language and deliver your application through a browser interface, the capabilities built into the base operating system may not be enough to get the job done.
Not only will this application server cost money, but you will also need to choose among several available offerings, either for the iSeries or for another platform. The offerings differ considerably in price and feature set, so you should consider carefully which server you will use for development.
In this article, I will describe what an application server is, starting with the basic set of services it supplies. I will then discuss the three major offerings available for the iSeries and finish with some of the important considerations involved in selecting such a critical component of your IT infrastructure.
A New Way to Compute
With the proliferation of the Web, there is now a new and consistent way of communicating with software users: the browser interface. Browsers tend not to be platform-specific. Users with Linux or AIX or Sun or Macintosh or Windows workstations can now access the same application, in substantially the same way, provided that the application sticks to the standard protocol (HTTP) and the standard markup language (HTML). Sure, there are minor platform differences to be accounted for, but they’re nothing like the differences between a Windows dialog, a 5250 display, and a 3270 display.
TCP/IP networking was a simple solution initially designed to address the problem of connecting more than two computers in a single network. It had its limits, and for years the industry tried to develop and roll out much more complex, and admittedly elegant, network protocol solutions. In the end, TCP/IP beat them all, despite its limitations. With TCP/IP, there are now the Internet and large, high-speed corporate networks, all of which work with the same protocols and can work together when necessary.
RPG is a great programming language—but, then again, so is COBOL and so is C. What you use simply depends on the project at hand and your point of view at the time. Each language has its pluses and minuses, but in the end, they all simply provide a means of syntactically describing the actions a particular piece of software should take.
Then, along came Java. Object orientation (OO) showed up on the scene long ago, but none of the languages that supported it really sprouted legs. Java seemed, finally, to be a language solution that would bring the promise of OO to the mass market and also solve some of the problems inherent in other languages, such as the issues with pointers in the C language. The application server brings these technologies of the Web, TCP/IP networks, and Java together in a way that makes it practical to use them.
Where the Application Server Fits
Figure 1 is a block diagram showing how the application server fits within the overall server architecture. Application software draws from the set of services and tools built into the application server in order to accomplish its desired outcome.
An application server is middleware in the truest sense of the word. It doesn't interface with the system hardware, nor does it interface with the user. Its purpose is to make the job of developing and delivering the application simpler and more cost-effective. Application servers do not provide a means of communicating with the user (the browser and HTTP server), nor do they offer a means of managing data (a database), nor do they provide the Java execution environment. Each of those functions must be supplied separately. While that may seem terribly complicated, in some cases it allows the selection of best-of-breed products. OS/400 is unusual in the market, in that those functions are supplied as part of the operating system.
HTTP server software is the mechanism for communicating with the browser software on a user’s desktop. HTTP servers do offer mechanisms that support application development, such as CGI, along with a variety of product-specific programming interfaces. Direct development of an application within the HTTP server is certainly a possibility, but it could be very painful for all but the most rudimentary applications.
Some services offered by an application server are part of the overall Java standard. These include the ability to deliver Java servlets and JavaServer Pages (JSPs). Another capability showing up in recent application server versions is support for the Enterprise JavaBean (EJB) component architecture defined in the Java 2, Enterprise Edition (J2EE) specification. Other pieces of the J2EE specification, such as Java Transaction API (JTA) and the Java Message Service (JMS), are beginning to show up as well.
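To make the servlet idea concrete, here is a minimal sketch of the kind of component these servers host. The class name is hypothetical, but the javax.servlet API shown is the standard interface an application server implements; the server owns the HTTP plumbing, threading, and lifecycle, and simply calls doGet() for each request.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A trivial servlet: the application server receives the HTTP request,
// dispatches it here, and sends the generated markup back to the browser.
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>Hello from the application server</h1>");
        out.println("</body></html>");
    }
}
```

A JSP accomplishes the same thing from the opposite direction: it is markup with embedded Java that the server compiles into a servlet like this one behind the scenes.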
A vendor would have a difficult time differentiating an application server product from other vendors’ offerings if support for the Java specification was all the software had to offer. Among the features offered by the various vendors is support for other popular middleware products like messaging services, along with libraries of routines and components ready for integration into your application.
For the iSeries, there are three commercially available application server choices: BEA’s WebLogic, IBM’s WebSphere, and HP Bluestone Software’s Total-e-Server. A number of application servers have been developed using pure Java, so beyond the commercial offerings, I’ve heard stories of success from several sources in getting servers—other than the aforementioned three—to work. However, the vendors of those products are not yet supporting OS/400 officially. (Read “Tomcat for OS/400: An Open Source Web Application Server,” in the December 2000 issue of MC, at www.midrangecomputing.com/mc.)
BEA’s WebLogic (www.bea.com) server is currently the market leader across all the platforms it serves. IBM’s WebSphere seems to be catching up fast, but BEA still has the title. Given the position WebLogic has in the market, there should be plenty of market support and talent available for any size project.
Now at Version 5.1, WebLogic is known to be a leader in specification compliance and performance. Depending on configuration, it may also be more cost-effective than some of its competitors. BEA counts among its customers high-volume sites like Amazon.com and ESPN.com, though I don't believe iSeries hardware is a part of either of those two implementations.
Part of the WebLogic package is an extensive management capability. The Web-based management console supplies a way to look into the ongoing operation of the server with tools that monitor resource usage, view log messages, and manage server operations like startup and shutdown. The administration part of the console supports the configuration of server attributes and manages execution resources like EJBs.
Clustering is another feature available with WebLogic. Not to be confused with the hardware clustering and high-availability support available for the iSeries, this clustering support works at the network level and operates with a set of independent servers. Such support, when used in conjunction with other high-availability options, could make for a very dependable application environment.
IBM’s WebSphere (www.ibm.com/websphere) is gaining ground on BEA for the top spot in the application server market. In fact, a recent report did put IBM at the front, though the data was based on a management survey, not on sales data.
IBM packages WebSphere in three versions: Standard Edition, Advanced Edition, and Enterprise Edition. The primary differences between the versions are the available features. For example, the Standard version does not include support for EJBs, though it will manage JSPs and servlets. The Standard Edition is shipped with OS/400 at no additional charge; it simply must be installed and configured. WebSphere Advanced Edition is available for OS/400 at an additional charge. The Enterprise Edition is not currently available for the platform.
Version 3.5 of the Advanced Edition brought with it extensive support for XML. WebSphere's XML document services bring a consistent way of interchanging data between existing and new application systems. XML could be a big part of the computing future, and WebSphere can provide a platform for getting started.
WebSphere, too, has a management facility. Its browser-based interface allows the management of execution resources and the server itself. A connection to IBM’s Tivoli management infrastructure is available for needs beyond those the Web interface offers.
Part of the functionality of the Advanced Edition is the ability to create and manage transactions. Web-based applications are unique in that they are nonpersistent. That is, as a user visits different pages of the application, the server has no intrinsic knowledge about the user, such as where he's been or what he is after. In the traditional OS/400 world, there is the concept of a job. A user may traverse a number of screens within the job, but all of the user's actions are contained, and programs take advantage of that fact. WebSphere, like other application servers, brings the concept of transaction persistence (the ability to manage a complete unit of application work) to Web application development. There are ways to do Web transactional processing without an application server, but they tend to be non-trivial to develop and implement. The application server is certainly a simpler way to satisfy the need.
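As a rough illustration of how a server papers over that statelessness, the following sketch uses the standard HttpSession facility to carry a simple counter across requests. The class name is hypothetical; the session tracking itself (via a cookie or URL rewriting) is handled entirely by the application server.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Counts a visitor's requests across otherwise stateless HTTP calls.
public class VisitCounterServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        HttpSession session = request.getSession(true); // create if absent
        Integer visits = (Integer) session.getAttribute("visits");
        visits = (visits == null) ? Integer.valueOf(1)
                                  : Integer.valueOf(visits.intValue() + 1);
        session.setAttribute("visits", visits);

        response.setContentType("text/plain");
        response.getWriter().println("Requests this session: " + visits);
    }
}
```

Full transactional support (JTA, EJB transaction attributes) builds on the same idea: the server, not the application, keeps track of the unit of work.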
IBM offers a number of application offerings that operate with and within WebSphere. For example, WebSphere Commerce Server supplies much of the functionality necessary to implement a B2C Web application.
HP Bluestone Total-e-Server
Total-e-Server (www.bluestone.com) is one of the servers advertised to be built using pure Java. This fact allowed the product to operate on OS/400 as well as any other platform that had the appropriate Java Runtime Environment (JRE). Recently, Hewlett-Packard purchased Bluestone. HP did not have a competitive application server offering of its own, and I suspect that purchasing Bluestone was the simplest way to get into the market. Interestingly enough, since the HP purchase, OS/400 has been removed from the supported platform list in the Total-e-Server literature. However, if it is truly pure Java, it should still be an option.
Total-e-Server touts many of the same features as WebLogic and WebSphere, including a management console, XML support, clustering, and application management. Beyond those features, it also includes a number of tools directed at the developer. A tool called Visual XML allows developers to browse available databases, create document type definitions (DTDs), and bind data elements from within the databases to Java applications. Another tool that’s included with the package is one that Bluestone calls J2EE Developer. With it, a programmer can create and manage EJB components and develop persistent transactions with those components.
Other Necessary Pieces
Beyond security mechanisms built into the application server, both WebSphere and WebLogic will connect to a Lightweight Directory Access Protocol (LDAP) server. (See “Resource Management the Directory Way,” MC, February 2001.) An LDAP server supplies the mechanism by which security can be established for an application and for the server. By defining users and resources to the directory, and by establishing the relationship between them, an application can determine who has authority to access particular portions of the application and authority to access the data underlying the application. Application servers installed on iSeries systems can either use the LDAP server built into OS/400 or use a server located on another system.
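As a hedged sketch of what that looks like from the application side, the standard JNDI interface can query an LDAP directory as follows. The server URL and distinguished names are placeholders, and a real deployment would add credentials and error handling.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Looks up a user's entry in an LDAP directory via JNDI.
public class LdapLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");

        DirContext ctx = new InitialDirContext(env);
        Attributes attrs = ctx.getAttributes("uid=jsmith,ou=users,o=example");
        System.out.println("Common name: " + attrs.get("cn"));
        ctx.close();
    }
}
```

In practice, the application server usually performs lookups like this on the application's behalf, using the directory to decide who may invoke which resources.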
Another significant aspect of Web application security is the protection of information as it flows between the server and the browser. This need is typically addressed by the exchange of digital certificates and the initiation of a Secure Sockets Layer (SSL) conversation. Application servers typically provide a certificate authentication mechanism and will integrate with the SSL support in the underlying HTTP server.
Application servers require both a JRE and an HTTP server to operate. Again, there is an advantage with OS/400 in that these services are included as a part of the operating system; this is not necessarily the case with other platforms. Most of the application server products will work with most popular HTTP servers. JREs (which contain the Java Virtual Machine, or JVM) are based on the Java specification, so as long as the JRE supports the Java specification level (such as 1.2) required by the application server, you should be good to go.
For platforms other than iSeries, vendors often ship a JRE and an HTTP server with the application server. By far, the most popular HTTP server is the Apache server, and IBM has announced that Apache will be the HTTP component of WebSphere. Apache is just coming to OS/400, and PTFs are already available to use it with WebSphere.
Developing an application using an application server is a new experience for many programmers. Properly developed, an application should be able to operate without alteration on any hardware platform that the application server supports. The key word here is should. All too often, platform-specific pieces creep into applications, and changing platforms becomes difficult, if not impossible. You should plan for a platform change during development if a future change is anticipated at all.
While software written for a particular application server may port easily to the same server running on another platform, it may not port easily to another application server. Each vendor adds its own unique APIs to an application server, beyond the APIs called for in the Java specification. Development done with these server-specific interfaces may be a bit faster and may solve some distinct problems, but the resulting software may be tied to a specific vendor and product. Such development may not be a bad thing, but it is something that should be given deliberate consideration.
Making the Choice
Figure 2 contains a list of features one should consider when selecting an application server. Most of the servers on the market will address each item in the list, to some extent, but some may have differences that affect your decision. For example, on platforms other than iSeries, it may be important that the application server connect to a specific database product.
Consider, too, any expected changes in user delivery that you may find necessary. A prime example of such a change today is the use of wireless devices. New application software will likely need to communicate with wireless devices like cell phones and personal digital assistants (PDAs) in addition to a traditional browser, and some sort of intrinsic support for them in the application server will be a good thing.
Other selection factors to consider include the levels of Java specification support and what pieces of the "alphabet soup" you will need or want for your application in the future. Figure 3 is a list of a number of the acronyms you will likely run into when evaluating products.
Don’t discount the “free” solutions. Each of the three commercial products that I’ve covered bundles facilities for using J2EE technologies, but open source versions of those facilities can be added to any one of those products. (Read “Open Source WebOS,” in the November 2000 issue of MC, at www.midrangecomputing.com/mc.)
The Microsoft Factor
So far, this discussion has centered on Java application servers. Recently, Microsoft settled its long-running licensing battle with Sun, and in doing so, chose to remove any intrinsic support for Java from Windows. Instead, Microsoft launched its Microsoft .NET series of products, based on its Component Object Model (COM) architecture. The programming language for Microsoft .NET will be a new offering from Microsoft called C# (pronounced C sharp), a language similar to Java but with a decidedly Windows twist to it.
Microsoft .NET will offer the same sort of services and components that the Java application servers do, except that the resulting applications will operate only on the Windows platform. For some applications, that may be an acceptable limitation—though the extended scalability available with other, non-Intel hardware platforms may be a deciding factor.
In the end, the standardization that Java is bringing to development is a good thing. Developers can now be trained in a common fashion, and their skills can be applied across a number of application development problems. Today's application servers bring with them a potential for software portability that's never been seen before.
Applications can be delivered to wider audiences, on scalable platforms, with an ease not seen in the past. By taking some care in the selection of an application server and by taking some care with the development of the software that runs on it, you will have a flexible, manageable, and powerful way to deliver good software.
REFERENCES AND RELATED MATERIALS
• BEA Systems home page: www.bea.com
• HP Bluestone Software home page: www.bluestone.com
• IBM’s WebSphere home page: www.ibm.com/websphere
• “Resource Management the Directory Way,” Randy Dufault, MC, February 2001
• "Web application servers power e-commerce," Paul Ferrill, Network World, September 25, 2000, www.nwfusion.com/research/2000/0925bg2.html

Figure 1: The application server is true middleware, existing between the application and other services on the platform.
• Platform the server operates on
• Supported HTTP servers
• Supported databases
• Supported client systems (e.g., browser, wireless)
• Datastream and protocol support (e.g., HTTP, XML, WML)
• Security and directory support
• Transaction-management support
• Fault-tolerance capabilities
• Clustering support
Figure 2: There are a number of basic capabilities to be considered when selecting an application server.
CORBA: Common Object Request Broker Architecture
DOM: Document Object Model (part of XML)
DTD: (XML) document type definition
EJB: Enterprise JavaBean
J2EE: Java 2, Enterprise Edition
JAR: Java ARchive (file)
JDBC: Java Database Connectivity
JMS: Java Message Service
JNDI: Java Naming and Directory Interface
JRE: Java Runtime Environment
JSP: JavaServer Page
JTA: Java Transaction API
JTS: Java Transaction Service
LDAP: Lightweight Directory Access Protocol
WML: Wireless Markup Language
XML: Extensible Markup Language
XSL: Extensible Stylesheet Language
Figure 3: Evaluating an application server requires a fair amount of “alphabet soup” decoding. | <urn:uuid:f7e62df4-fc67-4fe3-9edf-10a6f6713ff4> | CC-MAIN-2022-40 | https://www.mcpressonline.com/it-infrastructure-other/application-servers/application-servers-the-engines-of-ebusiness/print | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00389.warc.gz | en | 0.92786 | 4,002 | 2.921875 | 3 |
Part 1 | Part 2
Although static IP addresses are usually set by the IS/IT department, knowing the concepts and basic structure helps you a lot. An installation and maintenance technician can then play a much more active and contributing role in the turn-up of IP elements.
A manager can better position their department to survive the transition from legacy transport to IP-based solutions. The widespread use of protocols such as SMTP for telemetry reporting and TCP/UDP for telemetry transport is evidence of this transition in progress.
These protocols, along with POP3, FTP, TELNET and a host of others, all rely on IP addressing. This tutorial will discuss the three fundamental items involved in typical element addressing: the IP Address, the Subnet Mask, and the Default Gateway.
The IP Address is perhaps the easiest one to understand. In legacy multi-drop polling environments, each element on the leg had to have a unique ID that the polling master used to collect data.
In a LAN/WAN environment, each element on a network segment must similarly have a unique ID. This unique ID is technically a combination of IP and MAC addresses, but since the equipment manufacturer generally assigns the MAC address, the IP address becomes the controllable variable.
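For the curious, here is a short Java sketch (a hypothetical illustration, not DPS code) that lists each network interface's IP addresses alongside its hardware (MAC) address, showing the pairing described above.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Prints every interface's IP addresses together with its MAC address.
public class ShowAddresses {
    public static void main(String[] args) throws SocketException {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            byte[] mac = nic.getHardwareAddress(); // null for virtual interfaces
            StringBuilder macText = new StringBuilder();
            if (mac != null) {
                for (byte b : mac) {
                    macText.append(String.format("%02X:", b));
                }
            }
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                System.out.println(nic.getName() + "  " + addr.getHostAddress()
                        + "  MAC " + macText);
            }
        }
    }
}
```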
Though the assignment is generally done by the IS/IT department, it is good to remember that IP communication difficulty can sometimes be a simple issue of mistyping the assigned IP address when setting up the element. Another somewhat less frequent occurrence is the assignment of an already-used address to a new element.
The conflict resulting from this mistake will usually impair communication with both elements just as it did in legacy networks where two units would respond to the same poll.
The Subnet Mask is a filter that prevents everyone else from seeing the traffic on your leg of the network. The actual filter is a special network element that sits on your leg and handles traffic to and from other network legs.
A standard element uses its Subnet Mask to determine when a request needs to be handled by that special element and when it can simply be placed on the local leg. The technical side of this function involves a logical XOR of the element's IP Address with the destination IP Address.
The result is ANDed with an uninterrupted sequence of bits (the Subnet Mask) to enable a decision. If no bits survive the logical shenanigans, the traffic will be placed on the local leg. Otherwise, the traffic will be sent to the special element, which is typically called the Default Gateway.
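In code form, the element's decision might look like the following sketch for IPv4 addresses packed into 32-bit integers. The addresses and mask here are examples only.

```java
// Decides whether a destination is on the local leg or must be handed
// to the Default Gateway, using the XOR-then-AND test described above.
public class SubnetCheck {
    static boolean isLocal(int myAddress, int destAddress, int subnetMask) {
        int differingBits = myAddress ^ destAddress; // bits that differ
        // Differences matter only in the network portion (mask bits set to 1).
        return (differingBits & subnetMask) == 0;
    }

    // Packs four octets into a 32-bit IPv4 address.
    static int ip(int a, int b, int c, int d) {
        return (a << 24) | (b << 16) | (c << 8) | d;
    }

    public static void main(String[] args) {
        int me   = ip(192, 168, 1, 10);
        int dest = ip(192, 168, 1, 77);
        int mask = ip(255, 255, 255, 0);
        System.out.println(isLocal(me, dest, mask)
                ? "Place on local leg" : "Send to Default Gateway");
    }
}
```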
Obviously, an accurate Subnet Mask is needed for traffic generated by an element. It is equally important when the element is just responding. Though the request may reach the element, the response can be utterly prevented by a single bit error in the Subnet Mask that prevents the element from correctly routing the traffic to the Default Gateway.
The Default Gateway must then, by definition, have an address on your local network leg. It also has separate connections and unique addresses on other legs.
The Default Gateway processes traffic-routing requests received from each leg and decides where the traffic needs to be sent. It places the traffic on the appropriate destination leg, using logical functions similar to the ones mentioned previously.
If it has no leg that is suitable for the addressing of the routed request, it typically has a Default Gateway of its own to which it forwards the request.
The maintenance of gateways is no casual affair, and it is not generally involved in the deployment of an IP element.
The routing of IP traffic is one of the magic elements of the Internet, and entire volumes have been printed covering every aspect of the IP addressing and routing mechanism in exhaustive detail.
This article has presented what should be an easily understood model of three items commonly involved in deploying elements in an IP environment: the IP Address, which 'uniquely' identifies an element; the Subnet Mask, which 'filters' both autonomous and response element traffic; and the Default Gateway, which provides 'operator assistance' for non-local traffic.
In our next article, we will deploy a NetGuardian as a DCPx responder to illustrate how the concepts we have discussed up to this point play into a typical installation.
October 3, 2018 by Siobhan Climer
From hurricanes to cyberattacks, and everything in between, school districts across the U.S. are increasingly in need of up-to-date disaster recovery plans. But developing an effective disaster recovery plan that prepares you for the unexpected is no easy task, regardless of the industry. Due to the unique operational functions of a school district, planning for disaster recovery in schools is even more challenging. As such, we have developed a disaster recovery template to help you get started, and we have laid out the strategic steps you can take to build an effective disaster recovery plan.
Disaster Recovery In Schools: DR Template
To get started, simply download our Disaster Recovery Plan for schools below. If you get stuck or need help, simply contact a Mindsight representative to help you develop a more specific DR plan for your school district. Every district has its own needs, so you may need additional components not included in the free template.
Disaster Recovery Strategy
Use these steps to help you develop a strategic and effective disaster recovery plan for your district. Planning for disaster recovery in schools is an especially collaborative process, so be sure to engage with your team, seek feedback from the community, and give yourself adequate time to develop a complete policy.
Collaborate Across Your District

No single person can find every nook and cranny that may be exploited, intentionally or otherwise, during a disaster event. By including the IT team, administration, teachers, and community in your disaster recovery planning, you ensure varied perspectives are given weight during the process. Everything from the HR payroll to storage switches will be accounted for with a collaborative strategy.
Identify Potential Disaster Events

Accounting for every potential disaster event is impossible, but that doesn't mean you shouldn't try. Break down disaster events into two categories, natural and human-made, to help make the task easier. The DR Template above includes a table in Appendix A that conveys many – though certainly not an exhaustive list – of the disaster events to which your district may be at risk. Disaster recovery in schools is complicated, and the physical environment and societal system in which your district operates may significantly affect your risk assessment.
Not sure how to proceed with your technology roadmap? Join Mindsight experts for our weekly (free!) whiteboard sessions. In these one-on-one conversations, our consultants and architects will hear what is happening in your environment and help you identify the next steps to meet your objectives.
Define Strategies, Technologies, and Procedures
For every potential disaster event, there are possible consequences and response actions your team can take. Break down each event and determine the resources needed to properly respond. For example, if flooding might negatively impact your on-site data center, do you have a cloud backup solution in place? Finding the right solutions, and determining the procedures required to utilize those solutions, is essential for planning disaster recovery in schools.
Ensure Regular Review
This is critical. You should consider your Disaster Recovery Plan a living document, which means it should be amended, updated, and reviewed regularly. We recommend reviewing the document at least every six months, though the policies or specific needs of your district may require different intervals.
Testing, Testing, 1-2-3
In every technology environment, testing and verification are essential to ensuring a reproducible and fail-safe strategy. Every element of your DR plan should have a fire-drill-like procedure that ensures the right people are ready to do the right thing should the wrong thing – a disaster event – occur.
Expect The Unexpected
Disaster recovery in schools – and, really, everywhere – is about being prepared for the unforeseen. Ensure your district is prepared with a disaster recovery plan that accounts for the various risks and consequences that may affect your district. If you need any help in developing either your DR plan, or the technologies that support your plan, reach out to Mindsight today.
Mindsight, a Chicago IT consultancy and services provider, offers thoughtfully-crafted and thoroughly-vetted perspectives to our clients’ toughest technology challenges. Our recommendations come from our experienced and talented team of highly certified engineers and are based on a solid understanding of our clients’ unique business and technology challenges.
About The Author
Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She previously taught STEM programs in elementary classrooms and museums, and writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she's not writing tech, she's writing fantasy, gardening, and exploring the world with her twin two-year-old daughters. Find her on twitter @techtalksio.
Tanya Valdez is a Technical Writer at Constellix. She makes the information-transfer material digestible through her own transfer of information to our customers and readers. Connect with her on LinkedIn.
Before we get into how edge computing works and its benefits, we need to understand exactly what it is. For some time, it was hard for anyone to find an exact, sensible definition. To many, it initially seemed like a grandiose buzzword created by marketing experts to generate sales for something that already existed by making it sound new and "edgy."
The word “edge” when used in the computing sense can still sound like a buzzword, but there is definitely an association to what edge computing actually is. It is distributed computing that takes place at or as close as possible to the source of the data. It’s like taking the cloud (where all of the heavy-lifting is done) and moving it to the device that is doing the actual computing or as close as possible to it.
Cloud computing has been the way to efficiently store and manage data. It provides on-demand availability of resources and stores them over the internet. Since we don't see these files, we say they live in the cloud, when in actuality they are distributed among multiple servers scattered across the globe.
Edge computing is simply moving where the computing is done. In this case, it is being moved from the cloud to, or as close to, the user’s device as possible, or “at the edge.”
The edge of the network is the doorway into the network. The network edge is commonly referred to as the area where a device or local network interfaces with the internet. An edge network consists of pieces of hardware that control data flow, or endpoints. These endpoints are the network edge and serve as entry or exit points.
There are many devices that can assist in processing data in real time near its source. One of the most common types of edge devices is a router. Since they are set up to connect the network to the web, routers act as the gateway to the internet. Other examples include switches, firewalls, and other endpoints that serve as IoT (Internet of Things) gateways.
With the ever-increasing amount of data and connected devices, the list of who uses edge computing is a long one. It is an ecosystem of cloud, infrastructure, and communication service providers that work with organizations to bring their content and services to the edge. Examples of industries that benefit from edge computing include manufacturers, medical institutions, autonomous vehicles, and businesses with remote offices and employees.
Did you know? The worldwide edge computing market is projected to reach 250.6 billion U.S. dollars by the year 2024.
Although the primary function of edge devices is to maintain connectivity between different networks, the edge has evolved to support advanced services. These include wireless networks, DHCP (Dynamic Host Configuration Protocol) utilities, and DNS (domain name system) services. DNS services assist the resolving server by handling domain name resolution for the devices on the connected network.
Now that you have a better understanding of what edge computing is, you may be wondering about the associated benefits. For one, moving the data closer to the user improves speed and latency. Cutting down on distance shortens the travel time in obtaining the information and reduces virtual road bumps along the way. The closer proximity of the resource also improves reliability: local processing can carry on through internet connectivity issues or compromised conditions that might take down a more distant data storage location. There are also security benefits associated with edge computing because there is less data being transferred over greater distances. The more you can contain the area that needs to be safeguarded, the better the security benefits.
The frequency of distributed denial-of-service (DDoS) attacks increases by the year, and a DDoS attack costs its target an average of $20,000 to $40,000 an hour. Catching them at the edge helps absorb the large traffic spike to limit backend impact. There is a growing need for DDoS attack mitigation service providers. DNS management services can be implemented to monitor DDoS mitigation providers, giving preference to the best-performing resources while removing unhealthy ones from configurations. This adds an additional layer of resource security to help avoid these types of attacks.
So, is edge computing just another buzzword in a grand marketing ploy? Maybe, but as we studied the semantics surrounding the use of the word, there is a definite technological meaning for it. Lexicologists and linguistics experts can attest to the technology industry's contribution to word formation (just look at converted words like granular, client, and handshaking). But until we see "edge computing" in the dictionary, just know that it is a way of bringing the network as close to the user as possible without actually storing and managing the data on the user's device itself.
If you found this useful, why not share it? If there’s a topic you’d like to know more about, reach out and let me know. I’ll do my best to bring you the content you’re looking for!