Microsoft is one of the world's most ubiquitous technology companies. Best known for its Windows operating systems, the company also produces office suites, games, the Azure cloud platform, the Xbox console and much more. Microsoft is also at the forefront of developing artificial intelligence technology. As well as the potentially revolutionary Project Hanover, Microsoft AI applications are transforming businesses, services and how we interact with the world.
However, this has not always been the case.
Microsoft was initially slow to realise the potential of AI.
In the course of this article, we will explore how Microsoft, despite being slow off the mark, has transformed itself into a leading AI-first company.
We will also highlight some of Microsoft AI's most exciting innovations, and by exploring these leading initiatives we will see how the company has been able to evolve into an AI leader.
The History of Microsoft
Microsoft was formed in Albuquerque, New Mexico on the 4th of April 1975 by Paul Allen and Bill Gates.
The company's name, Microsoft, is a portmanteau of the words microcomputer and software.
Success was swift, as was early growth.
At the end of 1979, the company opened its first international office in Japan.
Following a move to Washington in 1979, the company restructured in 1981, becoming an incorporated business.
Early products were variations of Microsoft BASIC, the dominant programming language of the period. In 1980, Microsoft struck a deal with IBM to supply the operating system for the forthcoming IBM PC. This meant that a Microsoft operating system would come as standard with IBM computers.
Alongside this Microsoft continued working on their own operating system.
The first publicly released Microsoft operating system was Xenix, a variation of Unix launched in August 1980. It would eventually be home to the first edition of the company's word processing software, Microsoft Word.
By the 1990s Microsoft Windows had over 90% of the market share of personal computers.
However, the company was not content to rest on its laurels.
In 1991 they established Microsoft Research.
This branch of the company is dedicated to researching and developing applications of computer science.
Much of its work paved the way for the development of many Microsoft AI applications.
Microsoft was Initially Slow to Realise the Potential of AI
Despite the formation of Microsoft Research, the company was initially slow to embrace the possibilities of the internet.
Instead, they focused on continuing to develop Windows and Office applications.
Additionally, the company developed security and anti-piracy software as well as the Xbox gaming console.
This focus on applications away from AI remained as late as 2014.
This lack of focus meant that other companies were beginning to usurp Microsoft’s established position as an industry leader.
It was also threatening Microsoft’s market share.
This changed in 2014 with the appointment of CEO Satya Nadella. Since then, Nadella has focused on transforming Microsoft's AI strategy.
Nadella has stated that his mission is to rebuild Microsoft, getting it back to the pioneering company that it originally was.
Nadella's change in focus has led to the development of a number of Microsoft AI innovations. These, such as the cloud-based Office suite and Azure, have helped to re-establish the company at the forefront of the industry.
2002 saw Microsoft launch .NET alongside a number of updated versions of established products. This included a new programming language, C#, and an API for Windows.
Microsoft AI is not only focused on evolving the company's established operating systems and developing new applications.
The company has also acquired other AI-powered companies such as Skype Technologies.
Microsoft purchased Skype for US$8.5 billion in May 2011.
This evolution has seen Microsoft’s net profits climb to $16.6 billion.
If growth continues at this rate the company is expected to break the trillion-dollar mark in the next few years.
This transformation is attributable in large part to the company's focus on acquiring and developing Microsoft AI applications.
Microsoft has a constant desire to evolve and stay at the forefront of technology.
Consequently, Microsoft AI is at the forefront of the majority of the company's innovations.
This drive has helped the company to remain a world leader.
It has also led to Microsoft being described as a trillion-dollar company alongside Apple and Amazon.
The Microsoft AI Lab
The Microsoft AI lab is dedicated to promoting and developing AI-based initiatives and applications.
The stated aim of the Microsoft AI lab is to design AI applications that are trustworthy and reflect the ethical principles of the company.
This means that the Microsoft AI lab promotes principles such as making AI understandable and accessible.
It also aims to develop applications that are secure, reliable and safe.
Microsoft believes that by providing a safe and accessible means of accessing AI more people will be able to benefit from its possibilities.
To further these aims, Microsoft AI Lab initiatives are developed alongside a number of respected partners.
These include the machine learning department at Carnegie Mellon University.
Microsoft AI Lab focuses its efforts in a number of different areas.
The belief is that while progress is made in each separate field, lessons can be learnt and applied across the board.
This harmonious approach to development helps to drive initiatives forward, meaning more complete developments can be made quickly.
Microsoft AI in Action
The Snow Leopard Trust works to protect the endangered snow leopard.
Because snow leopards live in remote, hard-to-access areas, their numbers are monitored by camera.
Using motion-sensitive trail cameras the Trust captures over 1 million images each year.
However, as the cameras are motion-sensitive, many images are of grass, sheep or other subjects.
Other images may be blurred or difficult to see.
As little as 5% of the images taken could be of snow leopards.
Manually reviewing each image for signs of the snow leopard can be time-consuming and difficult.
With help from the Microsoft AI Lab, the Snow Leopard Trust has developed intelligent systems that can read these images.
The algorithms that drive this system can sort through the images taken with a reported 95% accuracy.
As technology improves, and the algorithms continue to be refined, this will only improve. Not only is this application reliable, but it is also quick. A workload of images that would take a team about 10 days to sort takes the system around 10 minutes.
Koustubh Sharma is a senior regional ecologist with the Snow Leopard Trust.
Sharma has said of this partnership that “By automatically analyzing the images and creating a database for us, Microsoft AI is providing our small team with the time to do more surveys and collect better data.”
Microsoft AI University
One of the most difficult challenges for any AI-focused initiatives is bridging the skills gap. To solve this problem the company has established an internal university to help train staff in artificial intelligence.
Chris Bishop is the director of the Microsoft Research lab based in Cambridge, UK.
In an interview, Bishop explained that Microsoft AI university “is an internal education programme so that people who are incredibly smart and capable but trained in a different domain can quickly learn about machine learning both in a foundational sense but also in a practical sense of how to use it.”
As AI continues to evolve and improve, it is increasingly integral to tech companies' strategies and products. Even the smallest advances can have a major impact on product features.
Microsoft is Also Recruiting the Next Generation of AI Talent
Microsoft’s AI policy is not confined to developing the skills of their own staff.
The company is also attending technology conferences, identifying potential employees and sponsoring students through university. Once these students graduate, they are guaranteed a job within the Microsoft AI operation.
Bishop sees the recruitment of professors and established experts as a short-term approach.
“I don’t think it serves even the industry itself very well, let alone academia or the nation, to take that rather short term view.”
That is why Microsoft also invests heavily in training their existing staff body as well as recruiting the next generation.
Microsoft is not the only company focusing on developing the next generation of AI experts.
DeepMind is also working alongside universities, giving AI and machine learning lectures to students.
Project Hanover to Revolutionise our Understanding and Treatment of Cancer
While Microsoft is best known for its operating system, Microsoft AI applications are making an impact across a whole range of areas.
This is a consequence of the company’s increased focus on exploiting the potential of AI.
One of the most potentially world-changing applications is Project Hanover.
This is Microsoft AI's attempt to use computer science to solve cancer. Jasmin Fisher is a trained biologist who works on programming principles and tools in Microsoft AI's Cambridge lab.
She explained that “we are trying to change the way research is done on a daily basis in biology”.
To hasten the process, different teams work on different areas of AI and potential applications. One team, for example, sorts through the available research data and presents it in a categorized, usable way.
Meanwhile, another Microsoft AI research team is using computer vision and machine learning to develop insights into how tumours progress. Many of these applications are already being used or trialled in practice.
However, with the long term in mind, some teams are working on more advanced ideas or moonshots.
One of these so-called moonshots aims to develop a way of programming cells to fight diseases.
Microsoft AI Adopts Two Basic Approaches
Jeannette M. Wing is Microsoft’s corporate vice president and is in charge of the company’s basic research labs.
She has explained how Microsoft AI’s approach to solving cancer focuses on two approaches.
The first approach works from the concept that cancer, along with other diseases and biological processes, is one of nature's information-processing systems. Here, Microsoft AI researchers utilise software, tools and applications that are conventionally used to model and reason about computational processes, such as programming languages. These tools are adapted to model and reason about biological processes.
The second approach attempts to apply machine learning and similar tools to analyse biological data.
By analysing information with sophisticated tools Microsoft AI researchers try to get a better understanding of how cancer develops and works.
This can then help to inform treatment plans.
Wing believes that “the collaboration between biologists and computer scientists is actually key to making this work.”
Project Hanover has led to Microsoft investing in larger concepts such as cloud computing.
This gives Microsoft AI researchers access to more computing power, meaning that they are better able to tackle large problems.
Wing has also explained that it makes sense for Microsoft to invest in developing tools that can operate on any computing platform, even a living cell.
She said, “If the computers of the future are not going to be made just in silicon but might be made in living matter, it behoves us to make sure we understand what it means to program on those computers.”
Changing Approaches to Treatment
Microsoft AI’s Project Hanover comes at an interesting time in genetic research and understanding.
David Heckerman, a scientist and director of Microsoft AI's genomics group, explained that "We're in a revolution with respect to cancer treatment."
Heckerman explained that, not so long ago, if a patient was diagnosed with cancer, they were simply treated for cancer.
However now, “we know it’s just as, if not more, important to treat the genomics of cancer, e.g. which genes have gone bad in the genome.”
The mapping of genetic material such as the human genome has helped to increase this understanding.
As has the ability of AI tools, particularly machine learning, to organise and process large volumes of data.
Microsoft AI and Project Hanover are helping to drive these developments forward.
Microsoft’s Bio Model Analyser
Known as the BMA for short, the Bio Model Analyser is a cloud-based tool developed by Microsoft AI.
It aims to accurately recreate the connections that cells make.
This allows biologists to model the way that cells interact with each other.
BMA works by creating a computerised model.
This allows the user to compare the processes of a healthy cell with those of an abnormal cell.
This approach allows scientists and researchers to see how interactions between both genes and proteins, can lead to cancer.
Understanding this relationship will help scientists create personalised patient treatment plans that are effective but not invasive.
It can also be used to predict and identify at which point cancer will become resistant to treatment.
Ben Hall is a Royal Society University Research Fellow based in Cambridge, UK.
Hall has worked alongside the Microsoft AI team that developed BMA.
He explained that “I use BMA to understand cancers – understand the process of becoming cancers, understand the communications that are going on.”
The Literome Project can Improve Research
One of Project Hanover's first tools was Literome, a cloud-based computing system.
It is capable of sorting through huge amounts of data, such as research papers.
Microsoft AI developers combined machine learning tools with natural language processing systems.
This enabled them to create Literome, a sophisticated research model.
It is capable of conducting complex searches and identifying the most relevant documents.
For example, Literome can be used to identify genomic research that may be useful for an individual diagnosis.
Without Literome this could be a time-consuming process that is prone to human error and inconsistencies.
Giles Maskell is a radiologist and president of the Royal College of Radiologists.
Maskell has observed that at one time a CT scan would produce around 200 images.
Now, the technology behind the CT scanner is more advanced.
This means that it may produce 2,000 images.
“The fine detail far exceeds our ability to understand it all and to actually process it into something that is meaningful,” said Maskell.
Applications such as Literome can be used to sort through this information.
This allows professionals to accurately read and interpret the information, helping to improve diagnosis and treatment.
Making Powerful Computers Accessible to All
As well as machine learning tools, the Azure cloud computing platform is also helping researchers.
This platform hosts tools and applications that can be accessed by biologists and medical experts the world over, even if they don’t have a powerful computer.
Microsoft is proud of its reputation as a software innovator.
For this reason, Microsoft AI and other applications are developed with the aim of being as easy to use as possible.
The more understandable, and accessible the tool, the more likely it is to be used by physicians and researchers.
How Azure is Making the Power of Cloud Computing Accessible to All
One of the driving forces in Microsoft’s re-emergence as a major AI player is Azure, their cloud computing service.
Originally conceived under the name Project Red Dog, Azure aims to provide a platform for building, testing, and using a range of applications via Microsoft's own data centers.
Microsoft AI's Azure is capable of a range of functions. As well as supporting different programming languages, Azure supports tools, frameworks, systems and both Microsoft and third-party software. Additionally, Azure provides SaaS (software as a service), PaaS (platform as a service), and IaaS (infrastructure as a service).
This wide range of capabilities makes it a very useful tool.
Microsoft AI’s Azure allows users to process huge amounts of data.
This data can be fed into apps that convert the information into something useful, such as a forecasting API.
The data Azure processes can be used to predict trends, such as a building's energy needs.
It can also help a company identify which products it should produce and market.
Joseph Sirosh is corporate vice president of Information Management and Machine Learning at Microsoft.
He explained that Azure allows the user to use data intelligently.
“That’s really where machine learning comes in.
Machine learning is really about looking at historical data patterns and being able to predict ahead.
It allows you to take the past and peer into the future.
So instead of looking in the rearview mirror, you’re looking forward.”
An Easy to Use Tool With Real World Applications
As with other Microsoft AI applications, Azure is easy to use.
Alongside the accessible visual interface, users can access a range of starter templates and drag-and-drop workflows.
Microsoft AI developers have also made the process of loading data easy.
This all means that you can use Azure with little to no programming knowledge.
Microsoft AI’s Azure platform is already being used in a range of real-life scenarios.
For example, ThyssenKrupp uses Azure to power predictive maintenance software.
This software can highlight a potential problem in the machinery before it affects the operation of an elevator. By fixing issues before they escalate, it prevents catastrophic breakdowns. Predictive maintenance, when used correctly, allows machinery to become even more reliable.
Mushtaque Ahmed is the chief operating officer of JJ Food Service Limited.
They are one of the UK’s largest food delivery service companies.
The company uses Microsoft AI’s Azure to predict orders before the customer makes them.
This helps to speed up the process, and also helps JJ Food manage stock levels.
Ahmed said “By using Azure Machine Learning, we can now make recommendations to customers ordering a particular item.
This feature is vital to promote new products or bring customers’ attention to products that they currently go elsewhere to purchase.”
Azure is Also Improving Microsoft’s own Services
It is not just outside companies that are using Azure and machine learning.
Microsoft AI developments are also powering some of the company's most popular applications. These include Cortana, Microsoft's personal assistant, and Bing, its search engine.
In the latter example, machine learning informs targeted advertising and also returns the most relevant results following a search.
Finally, developed apps and solutions can also be shared in the Gallery or Azure's own Marketplace.
This helps developers to monetise their solutions while making them available to a wider range of potential clients and users.
Microsoft AI and Azure are Powering Healthcare NExT
The Microsoft AI focus is not only impacting on the company’s output.
The applications that they are developing are also helping progress to be made in a number of other fields.
One such area is in healthcare.
Microsoft AI’s Azure cloud computing software is driving the company’s Healthcare NExT initiative.
Healthcare NExT aims to use Microsoft AI and Azure developed resources to improve healthcare provision.
This project has seen Microsoft AI integrating voice recognition software, robots and cognitive services into collaborative healthcare-focused applications.
These are intended to help medical providers deliver more personalised care and automate outpatient care plans.
These applications can also speed up medical record access time, streamline patient triage processes and make data entry simpler.
The University of Pittsburgh Medical Center is one of the United States' largest healthcare delivery networks. It was also the first healthcare provider to make use of Microsoft AI's initiative.
Healthcare NExT allows healthcare providers to access Microsoft Genomics.
This is Microsoft AI's attempt to enhance the sample-to-answer process.
Here gene testing methodology is available through an Azure-powered genome analysis pipeline.
Healthcare NExT is Transforming Cancer Treatment
Another Microsoft AI Healthcare NExT initiative is Project InnerEye.
This is a Microsoft AI-driven research-focused software tool for planning courses of radiotherapy.
InnerEye intends to provide oncologists with the ability to 3D contour the planning scans of a patient in a matter of minutes.
Without this application, this planning and contouring could take many hours.
This initiative is already being used at Addenbrooke's Hospital in the UK.
Finally, Microsoft AI has developed a health chatbot.
This enables third-party software developers to construct their own conversational healthcare tools and applications.
This feature and other applications are being further improved by Microsoft’s acquisition of Skype.
Microsoft AI is incorporating Skype for Business into a number of virtual healthcare templates.
This is aimed at increasing versatility and functionality, making Healthcare NExT easier to use.
Helping Save Patients From Heart Disease
In India, Microsoft AI is being utilised by Apollo Hospitals, one of the country's largest private healthcare companies.
Here Microsoft AI solutions are being used to improve the detection rates of cardiac disease, highlighting patients most at risk from heart disease.
Previously prediction models in this area have been based on studies conducted in North America or Europe.
These models were inaccurate when used on an Indian patient base for a number of reasons.
For example one of the main causes of heart attacks in the west is high levels of LDL cholesterol.
Indian patients rarely suffer from high levels of LDL cholesterol, meaning that testing for it is largely useless.
Instead, Indian patients' heart attacks are caused by other factors. Using Microsoft AI's cloud computing capabilities, a more effective model has been developed. This allows Apollo Hospitals to better identify which patients are most at risk of a heart attack.
Microsoft AI is Pioneering Motion Sensor Technology
Microsoft AI's motion-sensing input device, Kinect, was originally designed as a video game controller for the company's Xbox console.
The latest version, the Azure Kinect Developer Kit which was unveiled in 2019, is focused firmly on business applications.
Microsoft AI developers conceive of the latest version as a tool for building corporate applications that plug seamlessly into Azure, Microsoft's massive cloud computing platform.
Cameras and sensors allow Azure Kinect users to create Microsoft AI-powered applications based on facial recognition or body tracking.
Combining a series of these sensors can also allow the user to create a 3D map of a room.
Healthcare providers Ocuvera are already using this technology in their clinics.
Here it is used to identify potential patient falls in the hospital.
Hospital falls can cause patients further injuries and are, in some cases, fatal.
The Azure Kinect DK can identify if a patient is about to fall and alert a nurse.
This means that the patient can be aided before any serious damage is done.
AVA Retail is also using Kinect technology.
Here it is being applied to power self-checkouts, allowing customers to grab-and-go.
A Vision for Augmented Reality
HoloLens is the name of Microsoft AI's mixed reality smart glasses.
It uses tracking technology that was first developed for Kinect.
HoloLens was the first head-mounted display to use the Windows Mixed Reality platform.
HoloLens is a self-contained unit, employing eye-tracking technology alongside a large field of view and hand tracking software.
Since 2018 Microsoft AIs HoloLens has been supplied to the United States military.
Here the technology is used to “increase lethality by enhancing the ability to detect, decide and engage before the enemy.”
Initially available to purchase, HoloLens was subsequently made available for rental.
This was done in partnership with Abcomrents.
In 2019, at Barcelona's Mobile World Congress, Microsoft announced HoloLens 2.
This is a retooled version of the original HoloLens.
Initially, HoloLens was conceived as a game-playing tool, with enterprise possibilities.
HoloLens 2, while making the most of Microsoft AI's latest developments, has a more practical focus.
Microsoft AI intends to use HoloLens 2 as an AR tool to help people get things done.
This means that it seamlessly connects to Microsoft's cloud services as well as Android apps.
The Technology of the Future
HoloLens 2, like its predecessor, makes use of a large field of view and hand and eye-tracking technologies.
However, Microsoft AI now has a second use for the eye-tracking cameras: biometric security.
Users of HoloLens 2 have their irises scanned automatically.
This allows the software to automatically sign in to their personal account, or remember their preferred settings.
Currently, this technology is more advanced than its potential client base.
Only the most advanced or forward-thinking factories and organisations are currently ready to make use of HoloLens 2.
One such organisation is the US Army, a previous user of the original HoloLens.
As the world catches up the market will only grow.
Crossing the Platforms in Collaboration with Amazon
In 2017, a partnership between Microsoft and Amazon was announced.
Jeff Bezos said of the collaboration, “I want them to have access to as many of those A.I.s as possible”.
This means Microsoft AI-developed tools and services such as Cortana and Office 365 will integrate with Amazon services such as Alexa.
This partnership will, it is believed, be mutually beneficial.
For example, Amazon currently owns the shopping and buying domain, or Voice Commerce, through Alexa. Over time, Cortana users will gain increased access to the Amazon platform.
This increased usage not only increases Amazon’s profits but also benefits Microsoft as an Amazon affiliate.
The partnership also helps to cement Microsoft at the forefront of AI development, placing Microsoft and Amazon at the forefront of the Voice First revolution.
This collaboration may pose an existential threat to both Google and Apple.
This will see Alexa, and by extension Amazon, become more widely available on Microsoft AI-powered platforms.
Consequently, hardware, such as iPhones, and operating systems may become obsolete, or at least less important.
The collaboration allows Alexa and Cortana, or Microsoft and Amazon, to reach across the limitations that come with platform domains.
Instead, this synergistic approach allows them to benefit from the power of each separate platform.
In the future, we may even see skills or apps being constructed that unite both platforms.
These apps will use features from both Microsoft and Amazon to create a powerful, overarching solution.
Currently, a customer uses one platform or device.
In the future, thanks to collaborations such as this, there may be only one, easy access, universal platform.
Microsoft AI is delivering Complex Analytics to Financial Service Providers
Microsoft is working with the UK bank NatWest to develop an AI-powered platform called Next Generation Complex Analytics.
This tool can simulate financial market behaviour as well as transport networks and other relevant environments.
Next Generation Complex Analytics is useful for predicting financial market patterns.
These tools are also able to generate financial insights, highlighting both opportunities and risks.
Making the most of Microsoft AI-developed tools, the platform has been tested on a number of NatWest's services. In one case, Next Generation Complex Analytics was used to analyse the buy-to-let housing market. Here, the tool focuses on house price fluctuations, monitoring how they affect mortgages and the demand for small business loans.
This information can be used by staff to decide the best course of action for improved financial performance.
Kevin Hanley is Director of Innovation and Solutions at NatWest.
He said of the technology, “By allowing us to better predict future outcomes, risks and trends, the implementation of this technology could be of significant value to our customers and shareholders over the coming years.
For the first time, we’ll be able to deliver an aggregated, forward-looking view of the world around us, ultimately helping us build a stronger, safer bank.”
Project Brainwave Delivers a High Powered Chip
Project Brainwave is a Microsoft AI-driven initiative aimed at accelerating machine learning algorithms to work in real-time.
It is also helping to accelerate Microsoft AI technologies that power the Bing search engine and Azure platform.
This sees Microsoft AI tools using FPGAs (field-programmable gate arrays) to power sophisticated machine learning algorithms.
Project Brainwave technology can be programmed straight onto a chip which, when installed in hardware, allows the hardware to function as a processing unit for a deep neural network.
Microsoft AI is so confident of the power of these FPGA chips that they are already installed in many of the company's data centres. Microsoft AI's FPGA chips and tools use machine learning to drive a number of the company's larger products, such as Cortana, the virtual assistant, and the chatbots that operate on Skype.
Here machine learning is used to help the applications improve and develop the more that they are used.
Bing and Office 365 also make use of this intelligent functionality.
Currently, AI, machine learning and similar tools are ideal for set or restricted tasks. Microsoft AI intends to develop further in the future. The aim is to produce intelligent machines capable of completing any given task.
Harry Shum, executive vice president of Microsoft AI and Research, said “Computers today can perform specific tasks very well, but when it comes to general tasks, AI cannot compete with a human child.”
Microsoft AI Initiatives and Applications Have Helped the Company to Remain at the Forefront of Technology
In recent years Microsoft has transformed its model.
Microsoft AI-focused initiatives have allowed the company to reassert itself as a market leader.
Adopting a forward-thinking approach allows Microsoft AI to drive innovation and progress in a number of different ways.
These range from transforming healthcare and cancer treatment via Project Hanover to platforms such as Azure and HoloLens.
Throughout its history, Microsoft has prided itself on delivering accessible, useful software applications to a wide market.
Even today, Microsoft’s most advanced applications are made to be as accessible and usable as possible.
Microsoft has long been a major presence in the technology world.
Its newfound drive to innovate means that it looks set to remain one for a long time to come.
Images: Flickr, Unsplash, Pixabay, Wiki & Others
2020 Critical Infrastructure Protection and Resilience Americas
The Nation's critical infrastructure provides the essential services that underpin American society. Proactive and coordinated efforts are necessary to strengthen and maintain secure, functioning, and resilient critical infrastructure – including assets, networks, and systems – that are vital to public confidence and the Nation's safety, prosperity, and well-being. Critical infrastructure must be secure and able to withstand and rapidly recover from all hazards. Achieving this will require integration with the national preparedness system across prevention, protection, mitigation, response, and recovery.
“Previous generations had to find a job. Today our kids will have to invent a job. Every child needs to be innovation ready—knowing how to add value to whatever they do.” –Thomas L. Friedman
I recently heard this quote, and while I thought it profound, I believe that students innately have this mindset. What they need is more tools to practice with. As an example, students have become the creators of innovative AR and VR experiences, not just consumers of it.
The classroom VR technology that we are seeing, from companies like Lenovo and Intel, is flooding the education marketplace. You may be like many educators I speak with who think VR is really amazing but struggle to connect the funding and the justification for its placement in the classroom.
Connection® Public Sector Solutions is working with schools like yours daily to help them adopt technology like VR by creating projects that provide innovative research, development, and evaluation of practices aimed at improving STEM teaching and learning. Last month, we hosted a live webcast that discussed this very topic. Here are the grants we discussed that are available from the National Science Foundation:
• Computer Science for All: Funds projects focused on research-practitioner partnerships, which are mutual collaborations intentionally organized to investigate problems of practice and solutions for improving school and district outcomes. Application Deadline: 2/12/2019
• Innovative Technology Experiences for Students and Teachers: Funds projects that actively engage business and industry to better ensure K-12 experiences are more likely to foster the skill-sets of emerging STEM and cognate careers. Application Deadline: 8/14/2019
• STEM + Computing Partnerships: Funds research and development of interdisciplinary and transdisciplinary approaches to the integration of computing within STEM teaching and learning for pre-K–12 students. Open Application
If you’re looking for more tools and resources to help you align the latest classroom technology to evidence-based STEM projects, I encourage you to listen to our recorded webcast. After you do, I would love to hear about what amazing projects you’re looking to fund and what support we may be able to offer you throughout the process. | <urn:uuid:19c9de2a-9339-4028-ad25-8e58431ecf94> | CC-MAIN-2022-40 | https://community.connection.com/help-students-become-innovation-ready/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00384.warc.gz | en | 0.955623 | 439 | 2.609375 | 3 |
Whether you have kids of your own or not, they’re our future. Our doctors and nurses. Our lawmakers and lawyers. Our Supreme Court Justices. And, yes, even our President. That said, their educational success often falls back onto the school district and not the students themselves – and rightfully so. Seriously, have you ever tried to motivate a teenager? It’s not always as easy as you may have been led to believe. And that’s why programs like 1-to-1 are a natural way to help a student remain involved in their learning process.
For those of you unfamiliar with 1-to-1, the idea is simple: every student borrows or owns a notebook, tablet, or device that supports enhanced learning. There are several ways to go about achieving that ratio – discount or government-sponsored programs for schools are at the top of the list. But, cost aside, does 1-to-1 work? One study* found that students with notebooks had better writing skills than those without. Other published studies concentrating on different subjects had the same thing to say about science† skills and literacy‡.
Of course, with success comes planning. Lenovo, who manufactures some of the most highly rated notebooks, tablet, and desktop PCs on the market, weighed in on the subject, saying,** “1-to-1 learning initiatives have proliferated over the past decade. This has created a solid set of lessons learned for IT leaders responsible for planning activities and rolling out supporting infrastructure. By taking advantage of these accumulated insights, IT leaders can more confidently engage in 1-to-1 planning and implementation efforts of their own.”
With that in mind, here are 6 steps to success for your next 1-to-1 initiative:
- Planning is critical to successful adoption: Connection and Lenovo suggest a full year of pre-planning.
- Budgeting must account for more than infrastructure: Don’t neglect staff training, support services, or curriculum integration.
- Optimize infrastructure, minimize maintenance: Standardizing hardware and centralizing IT support pays off.
- Consider the right computers for your environment: Is a tablet, a notebook, or something else right for your situation?
- Professional development is critical to success: All 1-to-1 initiatives depend on training and professional development for success.
- District and school policies impact IT approaches: Computer policies should reflect school policies. And let’s not forget about Children’s Internet Protection Act (CIPA) compliance.
And while that list may seem daunting, all of those items are easily achievable – with the right amount of research and attention to detail. Several online resources exist with the sole purpose of lending a hand. To learn more on this topic click here.
- Bebell, D., & Kay, R. (2010). One to one computing: A summary of the quantitative results from the Berkshire Wireless Learning Initiative. Journal of Technology, Learning, and Assessment, 9(2), 5-57.
- Dunleavy, M., & Heinecke, W. F. (2007). The impact of 1:1 laptop use on middle school math and standardized test scores. Computers in Schools, 24(3/4), 7-22.
- Suhr, K. A., Hernandez, D. A., Grimes, D., & Warschauer, M. (2010). Laptops and fourth-grade literacy: Assisting the jump over the fourth-grade slump. Journal of Technology, Learning, and Assessment, 9(5), 4-45.
- "Planning critical to successful adoption," Lenovo special report, 2012.
Around 68.5% of businesses worldwide experienced a ransomware attack in 2021, according to Statista. During the COVID-19 pandemic, nearly 85% of data breaches involved a human element.
These cyberattacks can be carried out in several ways. In many cases, your files will be held hostage until you pay a ransom to the malicious actor who executed the attack. These incidents cause costly disruptions and compromise critical data.
Ransomware attacks are on the rise, so it's critical to learn how they work and what measures you can take to detect and prevent them. Understanding how ransomware attacks work can help individuals and organizations better protect themselves from becoming victims.
Basic Overview of Ransomware Attacks
Executing a ransomware attack can deliver a huge payday from a hacker's perspective. Malicious actors rely on this method to force companies to pay hundreds, even hundreds of millions, of dollars in ransom.
No organization wants to turn over large amounts of money, but that's the main reason hackers use this attack method. The worst part is there's no guarantee the encrypted data will ever be recovered.
In some instances, hackers choose to hold on to the decryption key, leaving companies without their sensitive data. Businesses can even face fines if they lose sensitive information.
Understanding the Five Steps of a Ransomware Attack
Navigating a ransomware attack is difficult. However, a good step companies can take is to learn more about how these hacks work to ensure the right protective measures are in place.
While the details of individual ransomware attacks will vary, each attack typically follows the same five-step process.
Step 1: Infection
Hackers will use common infection methods like phishing, online chats, security holes or USB drives to access a machine or data before launching a ransomware attack. Maintaining a strong cybersecurity posture can prevent these infection methods from working for hackers.
Step 2: Security Key Exchange
Once hackers gain access, an exchange of cryptographic keys takes place: the infected machine obtains or generates the keys needed to lock the victim's data, while the key needed to unlock it remains known only to the attacker. When a target is infected, the attackers are alerted and the keys are exchanged.

Attackers generate a key pair and embed the public key when they create the ransomware. After gaining access to the victim's system, the malware encrypts the files using a locally generated symmetric key, then encrypts that symmetric key with the embedded public key. Only the attacker's private key can recover the symmetric key and, in turn, the files. This exchange must complete before encryption can occur.
Step 3: Encryption
Once identified, critical data is encrypted, making it inaccessible to victims without the decryption key. The data has no value to the company if it cannot be decrypted, which is why hackers move on to the next step.
Step 4: Extortion
Hackers will demand a ransom of a specific amount and typically include a note containing urgent, threatening language to urge the victim to make the payment. Often, criminals require payment in cryptocurrencies like Bitcoin or Ethereum because it is difficult to trace and accessible.
Organizations with the most sensitive data will sometimes pay the ransom quickly because their reputation and client trust are at risk. Occasionally, hackers will use "double extortion" tactics, where they threaten to expose data to the public, adding to the pressure on companies to pay.
Step 5: Recovery
Depending on the scenario, it may or may not make sense to pay the ransom. It does not guarantee that data will be recoverable, and it may encourage more cybercrime.
These are some other ways companies can recover:
- Search online for free decryption tools, which are available for some ransomware families and often work.
- Try to negotiate with the cybercriminal before paying.
- Ask if you can have your files decrypted before making the ransom payment.
- File a complaint with the FBI's Internet Crime Complaint Center (IC3), which will instruct you on how to proceed.
The Importance of Ransomware Attack Protection and Detection
There are a few ways to protect your organization from ransomware attacks, including increasing visibility, adopting segmentation policies, using IDS and malware detection software, and leveraging deception tools.
Here are other ways to protect yourself from a ransomware attack:
- Endpoint protection
- Patch management
- Data backup
- Email protection
- Network defenses
Investing in the best ransomware detection and protection tools can help your security team maintain a good posture in a high-risk environment.
Do's and Don'ts Surrounding Ransomware Attacks
Here are some basic do's and don'ts concerning ransomware attacks.

Do:
- Implement multi-factor authentication
- Use security software
- Keep software and applications updated
- Utilize biometrics for simple, highly secure authentication
- Back up important data
- Use cloud services
Don't:

- Automatically open email attachments
- Use short, common, duplicate, or easy-to-guess passwords
- Pay the ransom
- Let the attack get worse
- Provide sensitive data to unauthorized sources
- Run backups during a ransomware attack
More ransomware attacks are cropping up, especially as the COVID-19 pandemic persists. Remaining vigilant against these attacks will help protect your organization.
Understand How Ransomware Attacks Work
Ransomware attacks shut down computer systems and render files inaccessible until a ransom is paid. The five steps of a ransomware attack are infection, security key exchange, encryption, extortion, and recovery. Following the do's and don'ts listed above when experiencing a ransomware attack is strongly recommended.
Always consider cybersecurity best practices to increase your protection and decrease the risk of being victimized by a ransomware attack. Interested in hearing how we at BIO-key approach cybersecurity against potential ransomware attacks? Check out how we utilize Identity-Bound Biometrics and other forms of multi-factor authentication to provide the strongest possible security solutions.
Section 1: Bridging and Switching (15 Points)
Section 1.1: Frame Relay Configuration (6 points)
Configure the Frame Relay portion of the network as shown in Figure 1-8; ensure that DLCIs 110 and 104 between R1-R6 are not used.
The question clearly states that DLCIs 110 and 104 are not to be used; you must, therefore, disable inverse ARP on the routers. It is good practice to ensure that all routers do not rely on inverse ARP, so if you have configured no frame-relay inverse-arp under the Frame Relay serial interfaces on R1, R4, and R6, you have scored 2 points.
If you experience difficulties and cannot clear any dynamic map entries, reload your routers to remove them; a drastic measure, but every point counts.
The routers are to be on the same subnet and should be configured with subinterfaces.
R4 will need a multipoint subinterface to accommodate both R1 and R6 on the same subnet; R1 and R6 only have PVCs to R4 and, hence, will require point-to-point subinterfaces. R4 will require manual frame-relay map statements pointing to both R1 and R6, as inverse ARP is disabled. The maps require the broadcast keyword because RIP will multicast the routing updates over the PVCs. It should be apparent that when RIP is run over a multipoint interface, split horizon is enabled by default, so routing updates from R6 to R1 would never be propagated by the hub router R4 because of the rule of not advertising a network out of the interface on which it was received; R4 will, therefore, require no ip split-horizon configured under its Frame Relay interface. If you have configured all items correctly, as in Example 1-1 through Example 1-3, you have scored 4 points; unfortunately, there are no marks if you have omitted anything.
For clarity, only the configuration details required to answer the specific questions are listed, rather than full final configurations.
Example 1-1 R4 Initial Frame Relay Solution Configuration
interface Serial0/0
 no ip address
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial0/0.1 multipoint
 ip address 10.100.100.3 255.255.255.240
 no ip split-horizon
 frame-relay map ip 10.100.100.1 100 broadcast
 frame-relay map ip 10.100.100.2 102 broadcast
Example 1-2 R1 Initial Frame Relay Solution Configuration
interface Serial0/1
 no ip address
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial0/1.101 point-to-point
 ip address 10.100.100.1 255.255.255.240
 frame-relay interface-dlci 101
Example 1-3 R6 Initial Frame Relay Solution Configuration
interface Serial5/0
 no ip address
 encapsulation frame-relay
 no frame-relay inverse-arp
!
interface Serial5/0.103 point-to-point
 ip address 10.100.100.2 255.255.255.240
 frame-relay interface-dlci 103
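Before moving on, it is worth verifying the static mappings on the hub router. The output below is illustrative only (exact formatting varies by IOS release) and assumes both PVCs are active on the Frame Relay switch:

R4#show frame-relay map
Serial0/0.1 (up): ip 10.100.100.1 dlci 100(0x64,0x1840), static,
              broadcast, CISCO, status defined, active
Serial0/0.1 (up): ip 10.100.100.2 dlci 102(0x66,0x1860), static,
              broadcast, CISCO, status defined, active

A successful ping from R4 to 10.100.100.1 and 10.100.100.2, and, via the hub, between the spokes themselves, confirms connectivity across the hub-and-spoke topology.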
Section 1.2: 3550 LAN Switch Configuration (6 Points)
Configure VLAN numbers, VLAN names, and port assignments as per the topology diagram shown in Figure 1-10.
The switch in this instance is isolated, but you can still use the default mode of VTP Server. From the VLAN database, add the required VLANs and name them accordingly; note that you cannot change the VLAN name of VLAN1. You must ensure that the port speed and duplex are fixed to 100 Mbps and full duplex, if your routers support this; leaving your ports in auto mode could cause connectivity problems. If you have configured these items correctly, as in Example 1-4, you have scored 2 points.
Example 1-4 3550 Switch1 Initial Configuration
Switch1#vlan database
Switch1(vlan)#vlan 2 name VLAN2
VLAN 2 modified:
    Name: VLAN2
Switch1(vlan)#vlan 3 name VLAN3
VLAN 3 modified:
    Name: VLAN3
Switch1(vlan)#vlan 4 name VLAN4
VLAN 4 modified:
    Name: VLAN4
Switch1(vlan)#vlan 5 name VLAN5
VLAN 5 modified:
    Name: VLAN5
Switch1(vlan)#exit
APPLY completed.
Exiting....

interface FastEthernet0/1
 switchport access vlan 2
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/2
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/3
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/4
 switchport access vlan 3
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/5
 switchport access vlan 4
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/6
 switchport access vlan 2
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/7
 switchport access vlan 5
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/8
 switchport access vlan 2
 switchport mode access
 no ip address
 duplex full
 speed 100
!
interface FastEthernet0/9
 switchport access vlan 5
 switchport mode access
 no ip address
 duplex full
 speed 100
The VLAN configuration is completed under vlan database.
There is to be a host connected on interface 0/16 in the future; the network administrator requires that this host is authenticated by a radius server before access to the switch is granted. The radius server is to be located on the IP address 172.16.100.100 with the key radius14.
This question calls for 802.1X authentication before a port is granted access to the switch and network. If configured correctly, as in Example 1-5, you have scored 3 points.
Example 1-5 802.1X Switch Configuration
aaa new-model
aaa authentication dot1x default group radius
!
interface FastEthernet0/16
 switchport mode access
 no ip address
 dot1x port-control auto
!
radius-server host 172.16.100.100 auth-port 1812 key radius14
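With a supplicant attached to interface 0/16, the authentication state can be checked with show dot1x interface fastethernet0/16. The output below is abbreviated and illustrative, as the exact fields vary by IOS release:

Switch1#show dot1x interface fastethernet0/16
PortStatus     = UNAUTHORIZED
PortControl    = Auto
HostMode       = Single

The port remains unauthorized, and traffic is blocked, until the RADIUS server at 172.16.100.100 successfully authenticates the attached host.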
Ensure the switch is reachable via Telnet to the IP address of 10.80.80.8/24.
Configure VLAN2 with the IP address 10.80.80.8 255.255.255.0. The switch will also need a default gateway configured; you could use 10.80.80.2 or 10.80.80.1 here. The previous question requires that you enable AAA, and enabling AAA prompts you for a username when you telnet to the switch from one of your routers. To ensure typical access to the preconfigured line, and to ensure that the enable password is used for Telnet access to the switch, you should add the aaa authentication login default enable command.
Example 1-6 Switch1 Management IP Configuration
aaa authentication login default enable
enable password cisco
!
interface Vlan2
 ip address 10.80.80.8 255.255.255.0
!
ip default-gateway 10.80.80.2
!
line con 0
 password cisco
line vty 0 15
 password cisco
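As an illustrative check, telnet to the switch from any router with reachability to VLAN2 (R1 is used here purely as an example). Because aaa authentication login default enable is configured, the session prompts for a password, the enable password, rather than a username:

R1#telnet 10.80.80.8
Trying 10.80.80.8 ... Open

User Access Verification

Password:
Switch1>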
Section 1.3: ATM Configuration (3 Points)
Configure the ATM network as shown in Figure 1-12.
Use a subinterface on R6 for the ATM, matching the VCI number, and ensure that the latest method of PVC configuration is used on this router. For R5's ATM, use the physical interface and legacy PVC configuration; after you have configured your Layer 2 information, you may then add the Layer 3 addresses.
Do not rely on inverse ARP.
R6 requires a point-to-point subinterface named ATM1/0.99 with the details configured under a separate PVC; R5 requires the legacy style with a map-list to achieve PVC connectivity in this back-to-back configuration. The map-list entry ip 10.99.99.1 atm-vc 1 broadcast on R5 and the protocol ip 10.99.99.2 broadcast statement on R6 ensure that inverse ARP is not relied upon.
You can use whichever encapsulation suits the three tasks in Section 1.3 as it has not been defined which type must be used.
If you have successfully configured all items as in Example 1-7 and Example 1-8, you have scored 3 points.
Example 1-7 R6 ATM Configuration and Map Verification
interface ATM1/0
 no ip address
 no atm ilmi-keepalive
!
interface ATM1/0.99 point-to-point
 ip address 10.99.99.1 255.255.255.248
 pvc 0/99
  protocol ip 10.99.99.2 broadcast
  encapsulation aal5snap

R6#show atm map
Map list ATM1/0.99pvc1 : PERMANENT
ip 10.99.99.2 maps to VC 1, VPI 0, VCI 99, ATM1/0.99 , broadcast
Example 1-8 R5 ATM Configuration and Map Verification
interface ATM3/0
 ip address 10.99.99.2 255.255.255.248
 map-group atm
 atm pvc 1 0 99 aal5snap
 no atm ilmi-keepalive
!
map-list atm
 ip 10.99.99.1 atm-vc 1 broadcast

R5#show atm map
Map list atm : PERMANENT
ip 10.99.99.1 maps to VC 1 , broadcast
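With both sides configured, the simplest confirmation of the back-to-back PVC is a ping across the link; the output below is illustrative:

R5#ping 10.99.99.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.99.99.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

A failure here usually points to a mismatched VPI/VCI pair, a mismatched encapsulation, or a missing broadcast keyword in the map statements.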
There is a new market out there for hackers and virus-makers that is on the verge of booming: the cell phone market.
Back when cell phones were only used for making calls, it was very difficult to send a computer-type virus and infect a network of cell phones. But now, with the emergence of smart phones, this has opened up a new world. Not only do we use phones to make calls, but we text, email, surf the internet, and download apps. This has brought about many new ways for viruses to infect phones.
While many cell phone viruses are not incredibly sophisticated, they do have the ability to cause severe damage to your phone- causing you to lose data like your pictures or music, or even rendering your phone useless so that you need to either do a factory reset or get a new phone.
Here are the top ways you can get a virus on your phone:
1. Email: If you have your email account on your phone, make sure you have a good email spam filter set up for your account. Now that you have email on your phone, any virus sent through email can infect your phone the same way it would infect your computer.
Along with a good spam filtering system in place, make sure you use the same guidelines when opening email on your phone as you would on your computer. Don’t open emails from people you don’t recognize, don’t click on suspicious links, etc.
2. Internet Surfing: The same basic principle applies here as well. Since you now can get internet access on your phone, it is just as vulnerable to viruses as your computer.
So, keep the same guidelines here too. Don’t download anything suspicious, don’t visit questionable websites, etc. For more guidelines on how not to get a virus from the internet, check out our free e-book: Surf Smart.
3. Apps: Downloading apps is one of the great advantages of having a smartphone, but you need to be careful. Hiding some malicious code within an app is a really easy way to transmit a virus. Not only that, but since you are giving out your credit card information for some of these apps, it provides hackers with another way to access your personal and financial information.
If you think an app looks suspicious, do a little research online. Find out if anyone else had trouble with the app before downloading it. Ideally, you only want to be downloading apps from trusted third-party sources, but since there are so many different app-makers these days, it is tough to know which ones can be trusted. So make sure to do your homework.
4. Text Messages: Usually a virus sent through a text message requires you to do something like click a link, download something, or reply with some type of phrase before it is able to infect your phone. Although viruses sent through text messages are becoming more complex, they are still not likely to do damage to your phone if you follow some simple guidelines.
If you don’t recognize the number, and they don’t immediately identify themselves (“Hey, this is Peter, so-and-so gave me your number”), don’t click on anything or reply to the message. Better yet, delete it immediately. If that particular number is clearly a scam/spam, and they keep sending you messages, you can call up your provider and block that number from your phone. Generally speaking though, if you don’t take any action the first time it is unlikely they will try again.
Anti-Virus for a Phone?
Major virus protection companies, such as Kaspersky or McAfee, have released some anti-virus programs for mobile devices. Generally speaking though, these programs are not very far along in their development or sophistication, much like the viruses themselves. While putting them on your mobile devices may offer you a little more protection, it is nothing substantial yet. That said, these companies are aware that mobile viruses are becoming more of a threat and are working on creating better versions of the mobile anti-virus software.
In the meantime though, if you follow the same guidelines when you use your phone as you would when using a computer, it is unlikely you will get a virus on your phone. The key is, just be aware and be careful.
Active learning wakes up your brain. It helps you understand and retain information quickly so that it’s easy for you to use.
Many traditional educational programs rely on passive learning tools like reading and listening to lectures. Learning this way requires a lot of focus. Because students don’t have to engage with new information right away, it’s also easy to forget what you’ve learned.
When you use active learning strategies, your brain processes both new information and how to use it. You practice forming a neural pathway instead of just thinking about doing it later.
Active learning builds a strong foundation for applying and integrating new ideas into your professional skill set. Here are five active learning strategies that you can use to learn new concepts quickly.
1. Take Notes
Research consistently shows that note-taking improves memory and deepens your understanding of new ideas. When you listen to speakers or watch lectures, write down the big ideas and anything else that stands out to you.
Also, consider writing in your books if you own them. Underlining, circling, and otherwise marking reading material will help you better engage with and understand it. You should also keep track of any questions that come up while you’re reading.
2. Write About It
Another way to actively engage with educational material is to write about it. To remember a new idea, try writing a summary of it. Push yourself to write an essay exploring different opinions about a new topic. Write down definitions and then put them into your own words to help you memorize new ideas.
You can also keep a journal of how new ideas make you feel or any interesting thoughts they spark. Do you have questions? What’s easy for you, and what’s challenging to understand? What would you like to come back to later? Responding to new information is the basis for learning, and it’ll cement new concepts into your brain.
3. Teach Someone Else
Verbal communication is another excellent way to practice active learning. When you explain an idea to someone else, you realize how much you actually understand. The other person may ask questions you haven’t considered or want clarification on certain points that still seem confusing.
Even if you don’t specifically “teach” someone else, it’s still a good idea to have a person to talk with about what you’re learning. Make sure you give them time to share as well!
4. Move Around
Movement wakes up your body and is good for your brain. Many people spend extended periods of time sitting each day at work, so being still for longer to learn can be frustrating.
Ask if you can stand while listening to lectures. Move and walk around while you study, and plan in quick breaks for jumping jacks, climbing stairs, or wall push-ups. Even a 15-minute no-equipment workout video on YouTube can make a difference for your focus, interest, and memory.
5. Take Breaks
To learn well, you must rest well, too. Take frequent breaks to recharge so you are ready to learn from educational meetings or courses. Research shows you're most productive at work when you alternate high-focus periods of work with regular 15-minute breaks.
For your brain and body to be working at their peak, you also need to get adequate sleep every night. Structure your day around breaks, and you’ll be amazed at how much more you enjoy learning.
Active Learning Online
Active learning in an online format looks a little different from the classroom. Each response learners give must go through a remote communication system, which adds an extra step and thus an extra hurdle for students and educators.
For instance, classroom discussions may become online discussion boards. Or, learners may meet on a group Zoom call or communicate with teachers purely by email. Even in an online setting, learners can incorporate active learning strategies to make the most of their education. These strategies become even more important when the only educational tool you’re engaging with is a computer screen.
Although online learning platforms can take some getting used to, they’re also pretty amazing. Technological innovation makes it possible for learners to save time and money, all while learning from the security of their homes.
Learning for Life
Use these five active learning strategies to sharpen your educational skills and boost your comprehension of new material. Education is a privilege and one you should make the most of. Don’t settle for passive learning! Actively engage with material so that you can excel in the classroom and beyond.
For professors wondering how to provide active learning to their students, check out our Cyber Arcade! Find out how simple done-for-you online cybersecurity training can be. You'll have the peace of mind of knowing your students have all the knowledge they need, right at their fingertips.
Quantum Tech: Semiconductor ‘Flipped’ to Insulator Above Room Temp
(Semi-ConductorDigest) A semiconducting material that performed a quantum “flip” from a conductor to an insulator above room temperature has been developed at the University of Michigan. It potentially brings the world closer to a new generation of quantum devices and ultra-efficient electronics.
Observed in two-dimensional layers of tantalum sulfide only a single atom thick, the exotic electronic structure that supported this quantum flip was previously only stable at ultra-cold temperatures of -100 degrees Fahrenheit. The new material remains stable at up to 170 F.
“We’ve opened up a new playground for the future of electronic and quantum materials,” said Robert Hovden, U-M assistant professor of materials science and engineering and corresponding author of the study in Nature Communications. “It represents a whole new way to access exotic states.”
Hovden explains that exotic quantum properties—like the ability to switch from a conductor to an insulator—could be key to the next generation of computing, providing more ways to store information and faster switching between states. That could lead to far more powerful and more energy-efficient devices.
Today’s electronics use tiny electronic switches to store data; “on” is one and “off” is zero, and the data disappears when the power is turned off. Future devices could use other states, like “conductor” or “insulator” to store digital data, requiring only a quick blip of energy to switch between states rather than a steady stream of electricity.
In the past, however, such exotic behavior has only been observed in materials at super-cold temperatures. The ultimate goal is to develop materials that can quickly "flip" from one state to another on demand and at room temperature. Hovden says this research could be an important step in that direction.
Sometimes it’s obvious. That message from a Nigerian prince requesting you wire $2,000? Ok, probably not going to fall for that one. If the CEO of your company asks for your credit card information via email? Something is definitely off. But often phishing is harder to spot.
Phishing is a common scheme in which someone poses as a trusted party (like a bank or government employee) in an attempt to steal personal information, such as credit card numbers, usernames, and email addresses.
You might get an email that appears to be from Netflix, asking you to log in or your account will be terminated. It could come as a text from Best Buy offering you a gift card if you enter your account information. When it looks too good to be true, it probably is; and if something just feels off, it's worth taking a closer look.
Phishing attempts will often include a false story meant to lure you into entering your sensitive information.
Messages might include:
High sense of urgency
Hackers will often create a sense of urgency like threatening you with the loss of service. For instance, a phishing email from someone posing as a bank or another financial institution might ask for you to “confirm your account” and re-submit your payment information or else your account will be terminated. Don’t panic. If something seems strange or alarming, it’s worth taking a pause to investigate.
Generic greetings
Since cyber criminals often send hundreds of emails at a time, another clue that it may be a fake email is the lack of a personalized greeting. Proceed with caution if the email doesn't include your name or username, or addresses you simply as "Customer" or "Account Holder."
Spelling and grammar mistakes
One quick way to tell the difference between an official communication from a service you use and a phishing scam is the presence of misspelled words and poor grammar in the body of the email.
Check the sender’s email address
Cyber criminals will often create an email account that closely resembles a company's official email address. For instance, a phishing email address might look like "no-reply@mazon.com": notice that the "A" in "Amazon" is not included in the email address.
Hover your mouse over any link in an email
Before clicking make sure the address looks right. When in doubt, do not click the link or open any attachments.
If you think a website might be fake, check the URL and confirm it includes “https://”
Similar to phishing emails, the URL of a fake website may look nearly identical to a legitimate website. Make sure to look out for any misspellings, unusual words or special characters before or after the company’s name. Look for “https://” not “http://” at the beginning of the address URL. Any legitimate entity asking for your payment info will have a secured website, as indicated by the “s” in “https.”
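For readers comfortable with a little scripting, these checks can be automated. The sketch below is illustrative only: the allowlisted domains and the 0.8 similarity cutoff are arbitrary choices for the example, not part of any product.

```python
# Flag links that lack HTTPS or whose domain merely resembles a trusted one.
import difflib
from urllib.parse import urlparse

TRUSTED = ["amazon.com", "netflix.com", "paypal.com"]  # illustrative allowlist

def red_flags(url):
    """Apply the manual checks described above to a link target."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("no https:// prefix")
    host = (parsed.hostname or "").lower()
    domain = ".".join(host.split(".")[-2:])  # crude registered-domain guess
    if domain not in TRUSTED:
        # Report domains that are close to, but not exactly, a trusted name.
        for match in difflib.get_close_matches(domain, TRUSTED, n=1, cutoff=0.8):
            flags.append(f"'{domain}' imitates '{match}'")
    return flags

print(red_flags("http://arnazon.com/account/verify"))
# expected: ['no https:// prefix', "'arnazon.com' imitates 'amazon.com'"]
```

A real mail filter would use a much larger allowlist and proper domain parsing, but the logic mirrors the manual inspection described above.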
One thing you can do today to protect your accounts is turn on two-factor authentication. This will make it much harder for hackers to get into your accounts even if they do obtain your password.
But a much more complete security solution is using a password manager. Dashlane not only creates and stores strong passwords, but also alerts you about potential security breaches, so you can quickly change compromised passwords and secure your accounts. Try it for free today.
In recent years, the world of cybersecurity has been turned upside down. As government employees shifted to remote work, new vulnerabilities emerged, and bad actors continued to innovate.
Around this time last year, the Sunburst hack was discovered. Malware inserted into the software compromised a long list of organizations, including numerous federal agencies. Then, in May of 2021, hackers targeting the Colonial Pipeline shut down thousands of miles of fuel transport and demanded a significant ransom. But while these attacks made headlines, hundreds of other cyberattacks flew under the radar. In fact, according to Redscan Labs, more cyber security vulnerabilities were reported this year than ever before.
To help prepare government employees to face new cyber challenges and the growing number of cyberattacks, here are three predictions on what’s to come in the year ahead.
Militaries will leverage cyberattacks: Earlier this year, a disruptive and high-profile ransomware attack on Colonial Pipeline halted thousands of miles of pipeline and disrupted a large part of the east coast of the United States. Going forward, we expect more nation states to look for vulnerabilities in government and critical infrastructure as an alternative to warfare, or as part of it. The use of cyberattacks in warfare isn't new, however. In 2017, the Russian military launched a cyberattack that planted ransomware in numerous multinational corporations. Many years before that, a sophisticated computer worm called Stuxnet, reportedly a joint creation of the U.S. and Israel, destroyed nearly one-fifth of Iran's operating centrifuges, which are used to enrich uranium for nuclear power. In 2022 and beyond, we expect military-sponsored cyberattacks to become more frequent, with kinetic efforts preceded by cyberattacks, much as a naval bombardment preceded a beach assault in WWII.
Criminals will imitate successful hacks: Anytime a major hack makes headlines, it’s not just industry and government executives who take notice. Bad actors are paying attention too. The Sunburst attack, for instance, used highly sophisticated malware hidden inside legitimate software updates. It was an unusually complex and sophisticated attack. Once a technique is proven to work, copycat attacks will follow suit. For instance, this past summer, Irish IT solution provider, Kaseya, was hit by a similar technique; its remote-monitoring tool was infiltrated with malware, allowing attackers access to multiple end customers. As we look to next year, we can expect to see a significant rise in criminal copycats utilizing software updates to install detrimental malware.
Zero Trust becomes the only way forward: Between copycat attacks and attacks targeting critical infrastructure, it's obvious organizations must adapt their cybersecurity postures. IT leaders should embrace a standard of 100% prevention, achieved through zero-trust principles and technologies like content disarm and reconstruction (CDR). CDR intercepts documents at the network boundary, re-creates the content from scratch, eliminates any corrupted elements, and delivers the documents clean and safe to the intended recipient. Moving forward, cyber teams must assume everything is corrupted, sanitize it all, and enforce least-privileged access. This is radical thinking, but existential threats like ransomware demand a fresh approach.
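To make the CDR idea concrete, here is a deliberately tiny Python sketch of the rebuild-from-scratch principle applied to HTML. The tag allowlist is invented for the example; commercial CDR products work on full binary formats such as Office documents and PDFs, not a handful of tags.

```python
# Rebuild a document from scratch, keeping only allowlisted pieces; anything
# not positively known to be safe (scripts, attributes, embeds) never survives.
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "b", "i", "ul", "li", "h1", "h2"}  # text-level allowlist
ACTIVE_TAGS = {"script", "style", "object", "embed"}    # never reproduced

class Rebuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out, self.skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in ACTIVE_TAGS:
            self.skip += 1                       # suppress active content
        elif tag in ALLOWED_TAGS and not self.skip:
            self.out.append(f"<{tag}>")          # attributes (onclick=...) dropped
    def handle_endtag(self, tag):
        if tag in ACTIVE_TAGS:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED_TAGS and not self.skip:
            self.out.append(f"</{tag}>")
    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)                # plain text survives

def disarm(html):
    rebuilder = Rebuilder()
    rebuilder.feed(html)
    return "".join(rebuilder.out)

print(disarm('<p onclick="evil()">Q3 report</p><script>exfiltrate()</script>'))
# expected: <p>Q3 report</p>
```

The direction of the logic is the point: nothing passes through unless it is positively known to be safe, which is the zero-trust posture described above.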
If we’ve learned anything from the cybersecurity events of 2021, it’s that the government must adapt its posture to address vulnerabilities. With the looming threat of military-sponsored cyberattacks, copycat attempts and newly developed attack methods, we must leverage these predictions to strengthen our perimeters to withstand evolving threats.
Visit our website to learn more about how Forcepoint can support your organization's cybersecurity needs.
Most people know that flying is the safest mode of transportation, but we certainly don’t always feel this to be true. Although driving offers far more potential pitfalls than flying, many people aren’t as apprehensive when getting into their car and driving to work as they are when boarding a plane and then climbing to an altitude of 33,000 feet.
Yet, a heightened level of apprehension is a factor today where autonomous vehicles are concerned. As we develop human error out of vehicles and improve sensor and artificial intelligence systems, autonomous vehicles will become safer than human-driven vehicles, but it will be some time before people trust them. Part of this is because of the novelty of the technology and people’s tendency to be somewhat wary of new technologies, and part of it is because of a human tendency to fear less likely and more catastrophic events, such as a shark attack or a robot car deciding to kill its occupants. Much of the fear comes from misreporting and overreporting.
On average, 3,287 car crash-related fatalities occur each day on roads worldwide. Although undoubtedly tragic, almost none are likely to make national news unless they involve someone famous. However, any fatality involving an autonomous vehicle will undoubtedly make national news because the technology is still novel, so it's likely that every instance of bad news related to autonomous vehicles will be tremendously amplified. Does this mean it will be impossible to persuade most people to trust autonomous vehicles? No, but it will take a lot of work to build and maintain that trust, and security will be foundational to that trust.
The intersection of autonomous vehicles and cybersecurity is critical, considering that safety, security, and trust are essentially inseparable. The relative insecurity of almost every connected thing is top of mind due to the heightened awareness of the impact of cyberattacks and breaches, which are highlighted in the media and in popular culture depictions of nefarious hacking exploits. This makes achieving security right from the outset extremely critical, to avoid becoming the organization highlighted by this type of media attention.
The massive mobile endpoint that is the modern vehicle comes with more than its share of security concerns, and the question remains as to whether today's security solutions are going to translate well to autonomous vehicles, which are complex systems that require the same protections as any other network, such as firewalls, antivirus (EPP), endpoint detection and response (EDR), data loss prevention, and more.
Effective cybersecurity efforts must ensure that these vehicles are protected against malware attacks or takeover by bad actors, and the array of components that make up autonomous vehicles must also be assured to work together harmoniously without the introduction of unanticipated vulnerabilities. That’s the challenge both the automotive and security industries are facing today, and nothing less than the widespread market acceptance of autonomous vehicle technologies is at stake.
Autonomous Vehicle Adoption Challenges
As John Chen, Executive Chairman and Chief Executive Officer of BlackBerry, wrote in his column Mobility Explodes Opportunities for Automotive, Let's Seize the Moment, published in the eBook The Road to Mobility, the global autonomous vehicle market is set to rise dramatically. The market reached $27.9 billion in 2017 and is expected to grow at nearly 42% annually, reaching $615 billion by 2026. Autonomous vehicles could account for 15% of global light vehicle sales by 2030.
The evolution toward autonomous mobility is certainly promising, but it’s not guaranteed. With autonomous vehicles being a newer technology and requiring such a dramatic change in how people use that technology, any cyberattacks that undermine safety and security in these vehicles will increase market fears, damage trust, and slow down adoption dramatically.
Now consider the state of autonomous vehicle cybersecurity: a survey from the Ponemon Institute found that 62% of auto manufacturers believe autonomous vehicle software and related components face short-term risks from malicious attacks. Ensuring that those attacks aren't successful is critical for the industry to succeed, because even a handful of successful attacks could erode trust to the point where it would take years to recover.
However, the news doesn’t look good. The same Ponemon Institute survey found that, when it comes to cybersecurity efforts, 84% of automakers and their suppliers aren’t confident that they are keeping up. More worryingly, 30% said their organization has not established a cybersecurity program. What’s needed are cybersecurity efforts and industry regulations to get out in front of these challenges, rather than lagging behind them.
Parham Eftekhari, Executive Director at the Institute for Critical Infrastructure Technology (ICIT), and Drew Spaniel, Lead Researcher at ICIT, agreed in the article Connected and Autonomous Vehicles: Policy, Performance and Peace of Mind (also found in The Road to Mobility) that there needs to be an agreed-upon regulatory framework to get out in front of the challenge. “Without meaningful regulatory oversight, autonomous vehicles risk being similarly developed without security controls sufficient to protect consumers from life-threatening risks,” they wrote.
“Most consumers lack the capacity to evaluate the security of the products that they purchase. Therefore, there is little external pressure for technology manufacturers to ensure that they develop products with layered security controls throughout the software development lifecycle,” they continued.
Effective industry or government regulation would certainly help put forward the right set of standards to help maintain the safety of autonomous vehicles. It could help establish the right software and hardware design controls, promote the best practices for software development and certification, and make certain such devices will be updated in a timely manner when patches are needed.
Effective Regulations Challenges
Still, developing effective regulations won’t be easy, and poorly conceived regulations can be a disincentive to innovation and even possibly incentivize the wrong activities. “One of the challenges in developing regulatory legislation and frameworks that rely on non-compliance penalties is that they often fail to encompass the scope of the risk regarding insecure software,” Eftekhari and Spaniel wrote. “This is because policymakers may lack a comprehensive understanding of cybersecurity best practices and the underlying technology being regulated.”
This is largely because of the complexities involved when it comes to modern software development, networking, and manufacturing processes – building autonomous vehicles involves all three. Eftekhari and Spaniel clearly detailed how many distinct disciplines autonomous vehicle security crosses:
- Supply chain security
- Secure coding practices
- Security-by-design throughout development
- Layered security throughout the hardware and software stack
- Threat intelligence sharing
- Consumer privacy protections
- System reliability and autonomy controls
- Manufacturer accountability
- Secure update procedures
- Penetration testing to reduce zero-day vulnerabilities
- Compliance with NIST and other best practice frameworks
Getting the regulations right is going to require considerable individual and collaborative work among diverse stakeholders in the private sector and government. After all, regulatory frameworks have a controversial history of effectiveness, but we've learned quite a bit from previous attempts, and today we know what's needed to succeed.
Regulatory Lessons Learned
Prior to PCI-DSS (Payment Card Industry Data Security Standard), security at online retailers and in in-store point-of-sale systems was undeniably dreadful. PCI-DSS certainly improved retailers' security, yet it did not stop credit card breaches. Meanwhile, when it comes to healthcare security, few would argue that HIPAA has dramatically improved healthcare data security.
What could be different when it comes to designing regulations for autonomous vehicles? How could this regulatory and security challenge be approached differently? One big difference is that we now have significant data, and effective artificial intelligence and machine learning with which to analyze it. The amount of data autonomous cars collect about themselves and the nature of the roadways is staggering. Thanks to such extensive data and machine learning, engineers will be able to understand the nature of these systems to a level that just wasn't possible before.
As I wrote in my recent article, Security Confidence Through Artificial Intelligence and Machine Learning for Smart Mobility (also found in The Road to Mobility), the ability to dynamically route vehicles, manage rules, and ensure safe conduct can be achieved through supervised and unsupervised machine learning.
This, combined with effective planning, scheduling, and optimization processing of autonomous vehicles, will help increase safety and security. With such insights, malware and bad behavior will be more readily identified and corrected while maintaining the overall security of autonomous vehicles.
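As a rough illustration of the unsupervised side of this, the sketch below trains an anomaly detector on simulated "normal" in-vehicle network telemetry and flags a traffic burst. The feature columns and every number here are invented for the example; a real system would use actual bus telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated normal telemetry: messages/sec and distinct message IDs per second.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[100.0, 8.0], scale=[5.0, 1.0], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

samples = np.array([[101.0, 8.0],    # looks like ordinary traffic
                    [400.0, 40.0]])  # burst of unexpected message IDs
print(model.predict(samples))        # expected: [ 1 -1] -> -1 flags the anomaly
```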
By vastly lowering the number of annual driving fatalities, reducing carbon emissions, and nearly eliminating street congestion, autonomous vehicles promise to dramatically improve the world. But to get there, consumers will need to trust the technology and trust that it is resilient to malware attacks, denial-of-service attacks, and the other types of attacks we have become so familiar with online. After all, it's one thing if an attacker takes over your bank account, but it's quite another for an attacker to take control of your vehicle. Safety and security are key.
Last Updated on May 29, 2015
What is a Cyber Weapon? At first glance this seems an easy question to answer, but anyone who tries to analyze the meaning of the term more deeply will probably be surprised and disappointed to discover that the answer is not so immediate, since an exact definition has not been given (at least so far).
A real paradox in the same days in which the Pentagon, following the Japanese example, has unveiled its new strategy aimed at dramatically accelerating the development of new cyber weapons. And do not think these are isolated, fashion-driven examples (other nations are adopting the same strategy); rather, consider them real needs in the post-Stuxnet age, an age in which more and more governments are moving their armies to the fifth domain of war. [You will probably remember the (in)famous episode in which F-Secure discovered the Chinese government launching online attacks against unidentified U.S. targets.]
Recently Stefano Mele, a friend and a colleague of the Italian Security Professional Group, tried to give an answer to this question in his paper (so far only in Italian but it will be soon translated in English) where he analyzes Cyber Weapons from a legal and strategical perspective.
As he points out “Correctly defining the concept of Cyber Weapon, thus giving a definition also in law, is an urgent and unavoidable task, for being able to assess both the level of threat deriving from a cyber attack, and the consequent political and legal responsibilities attributable to those who performed it”. Maybe this phrase encloses the reason why a coherent definition has not been given so far: a cyber weapon is not only a technological concept, but rather hides behind it complex juridical implications.
Having this in mind, according to Stefano’s definition: a cyber weapon is:
A device or any set of computer instructions intended to unlawfully damage a system acting as a critical infrastructure, its information, the data or programs therein contained or thereto relevant, or even intended to facilitate the interruption, total or partial, or alteration of its operation.
The above definition implies that cyber weapons may span in theory a wide range of possibilities: from (D)DoS attacks (which typically have a low level of penetration since they target the “surface” of their targets), to “tailored” malware like Stuxnet, characterized by a high intrusiveness and a low rate of collateral damages.
One could probably argue whether a cyber weapon must necessarily generate physical damage or not, in which case, probably, Stuxnet would be the only one, so far, to encompass all the requirements. In any case, from my point of view, I believe the effects of a cyber weapon should be evaluated from its domain of relevance, the cyberspace, with the possibility to cross the virtual boundaries and extend to the real world (Stuxnet is a clear example of this, since it inflicted serious damage on Iranian nuclear plants, with the risk of large-scale accidents and loss of life).
With this idea in mind, I tried to build a model to classify cyber weapons according to four parameters: Precision (the capability to hit only the specific objective and reduce collateral damage), Intrusion (the level of penetration inside the target), Visibility (the capability to stay undetected), and Ease of Implementation (a measure of the resources needed to develop the specific cyber weapon). The results range from paintball pistols to smart bombs.
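To make the taxonomy concrete, the four parameters can be encoded as simple 0-10 scores. The values below are my illustrative guesses, not measurements:

```python
from dataclasses import dataclass

@dataclass
class CyberWeapon:
    name: str
    precision: int   # 0-10: hits only the intended target
    intrusion: int   # 0-10: depth of penetration into the target
    visibility: int  # 0-10: ability to stay undetected
    ease: int        # 0-10: how cheaply and easily it can be built

arsenal = [
    CyberWeapon("DDoS attack", precision=2, intrusion=1, visibility=1, ease=9),
    CyberWeapon("Tailored APT", precision=9, intrusion=9, visibility=9, ease=1),
]
for weapon in arsenal:
    kind = "paintball pistol" if weapon.intrusion <= 2 else "smart bomb"
    print(f"{weapon.name}: {kind}")
```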
As you may notice, in these terms a DDoS attack is closer to a paintball pistol: the latter has a low level of penetration, and its effects are more perceived than real (it shows the holder's intention to harm the victim rather than constituting a real danger); nevertheless, it may be used to threaten someone or, worse, to commit a robbery. The same is true for a DDoS: it is often used to threaten the target, its action stops at the surface, and usually the effects are more relevant to the victim's reputation than in terms of damage done. Nevertheless, for the targets, it may lead to an interruption of service (albeit with no physical damage) and monetary losses.
On the opposite side are specific "surgical" APTs: they have a high level of penetration with reduced collateral damage, and they are able to stay hidden for a long time, but they require huge investments to develop, which ultimately makes their adoption less easy.
Of course, in between there is a broad gray area where the other cyber weapons reside, depending on their positioning according to the four classification parameters identified. So, in the end, what do you think? Do you agree with this classification?
The root object for request context information.
Contexts is a map of available contexts, which implement the Context interface. The map's keys are strings and the values are context objects. A context holds type-safe information useful for processing requests and responses. The contexts map is populated dynamically when creating bindings for the evaluation of expressions and scripts.
All context objects have their own version of the following properties:

"contextName": string
Name of the context.

"id": string
Read-only string uniquely identifying the context object.

"rootContext": boolean
True if the context object is a RootContext (has no parent).

"parent": Context object
Parent of this context object.
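As an informal illustration of these properties (a model for intuition, not the actual product API), a context chain can be sketched in a few lines of Python:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Context:
    context_name: str
    parent: Optional["Context"] = None
    id: str = field(default_factory=lambda: str(uuid.uuid4()))  # read-only ID

    @property
    def root_context(self) -> bool:
        return self.parent is None  # a RootContext has no parent

root = Context("root")
child = Context("router", parent=root)
print(child.root_context, child.parent.context_name)  # expected: False root
```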
The contexts object can provide access to a number of contexts for each request, and to additional contexts when related filters are used.
Deduplication is an approach that exploits the fact that identical data often occurs in several places, storing each unique piece only once.
This functionality is supported for the new backup format only
The new backup format is based on client-side deduplication. Deduplication on the local computer brings the following benefits:
- Client-side deduplication is much faster than server-side deduplication
- No internet connection issues: data is deduplicated locally
- A significant decrease in internet traffic
- The ability to purge unnecessary data
- Lower storage costs: a server-side deduplication database grows constantly, which can drive a significant expense increase, whereas client-side deduplication uses local capacity only
How It Works
The first backup is always full. In most cases, it is enough to have one full backup with subsequent incremental backups. Thus, after a full backup, the next backup plan executions are usually incremental and depend on the full backup and on the previous incremental backups.
The new backup format is designed for full backup-plan independence, so each backup plan has its own deduplication database. Moreover, backup plan generations (a generation is a full backup plus the sequence of incremental backups that follow it) also have their own deduplication databases.
Once a backup plan runs, Backup for macOS reads backup data in batches that are multiples (2x, 4x, ...) of the block size. Once a block is read, it is compared with the deduplication database records. If the block is not found, it is delivered to storage and assigned a block ID, which becomes a new deduplication database record. The block scanning continues, and if a block matches any of the deduplication database records, the block is not backed up again.
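A simplified sketch of that flow is shown below; the fixed 4 MB read size and the use of a content hash as the block ID are our own simplifications for illustration.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative read size

def backup_file(path, dedup_db, upload):
    """Upload only blocks whose hash is not yet in the local database."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in dedup_db:   # unseen block: deliver to storage
                upload(digest, block)
                dedup_db.add(digest)     # remember it for later runs
            # seen block: skipped entirely, storage already holds it
```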
This approach significantly decreases backup size, especially in virtual environments with a large number of identical blocks.
If a deduplication database is manually deleted or corrupted, a full backup is always initiated
Simple Mail Transfer Protocol (SMTP) is the way email travels across the internet. An SMTP relay is a mail server that passes your email message on to another server that can deliver it to the intended recipient. Email providers like Gmail own and manage SMTP servers; some allow you to connect to their servers directly, while others require you to send email via their webmail applications. In the latter case, providers are also safeguarding against the risk of someone sending large numbers of emails in a short period of time, i.e., spamming.
Providers that allow direct access to their SMTP servers may or may not support SMTP relaying. ‘Support’ means that you can connect to their SMTP server to send outbound email to recipients whose email is not managed by the provider (e.g., they handle email for luxsci.net addresses but not yahoo.com).
SMTP authentication versus Secure SMTP
To avoid the risk of hackers spamming users through their servers, many email providers require authentication (e.g., via a username and password) to use their SMTP servers. Some providers go beyond SMTP authentication and offer Secure SMTP, encrypting the communication between your computer and their server using the SSL/TLS protocols. This way, the contents of your email message cannot be read along the transmission channel to the SMTP relay server.
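Putting the two together, an authenticated submission over an encrypted channel looks roughly like this with Python's standard library; the host, port, and credentials are placeholders.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Test via SMTP relay"
msg.set_content("Hello from an authenticated, TLS-protected session.")

with smtplib.SMTP("smtp.example.com", 587) as relay:
    relay.starttls()                     # upgrade to an encrypted channel
    relay.login("username", "password")  # SMTP authentication
    relay.send_message(msg)
```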
Over the last few months we have reviewed the data path from the application through the operating system. The next step in the data path is the file system, and the volume manager that accompanies most file systems.
The file systems as we know them today trace their roots to a proposal for the Multics operating system in 1965. (See http://www.multicians.org/fjcc4.html for more details.) This proposal, for all intents and purposes, became the Unix File System (UFS) in the early 1970s and remains the UFS as we know it today. Not much has changed in file systems over the last 35 years: they recover faster and support larger files and file systems, of course, but these are evolutionary changes rather than radical ones. Compare the changes in file systems to the hardware changes made over the same period, and you'll see only incremental change at best.
In my opinion, the file system (and the associated volume manager) is the most critical component in the data path due to its ability to dramatically affect I/O performance. Even the best file system and volume manager available today can be improperly configured to the point that performance is horrible.
File System (FS) Basics
The purpose of file systems is to maintain a consistent view of storage so that we can effectively manage it. This is done in a way that allows the users to create files and directories as well as delete, open, close, read, write and/or extend the files on the device(s). File systems also maintain security over the files that they maintain and, in most cases, access control lists for a file.
Volume Manager (VM) Basics
Volume management was developed in the late 1980s to enable the creation and management of file systems larger than a single disk, which offered the possibility of greatly improved performance. Since almost all file systems at that time could only mkfs (create a file system on) a single device, volume managers presented file systems with a single flat address range spanning multiple devices. Other volume manager enhancements in the 1990s included support for software RAID 1 and 5.
Standard Volume Manager (VM) Inner Workings (Striping)
Most file systems require a VM to group disk and/or RAID devices together, typically via striping. Striping spreads the data across the devices based on the stripe size set within the volume manager. The idea behind striping is to spread the data across multiple devices to improve performance by allowing multiple disk heads to seek simultaneously. It should be noted that some volume managers support something called concatenation, which starts with an initial device and begins writing to a second device only after the first has become full.
The following examples show what happens under standard striping when writing multiple files at the same time (first illustration) and what happens when one of those files is removed (second illustration).
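The address translation a volume manager performs for a striped volume can be sketched in a few lines; the stripe size and device count below are illustrative, and units are blocks.

```python
def stripe_map(logical_block, stripe_size, n_devices):
    """Map a logical block to (device, offset-on-device) under striping."""
    stripe_no, within = divmod(logical_block, stripe_size)
    device = stripe_no % n_devices                    # disks used round-robin
    offset = (stripe_no // n_devices) * stripe_size + within
    return device, offset

# With a 64-block stripe across 4 disks, logical block 200 lands on disk 3:
print(stripe_map(200, stripe_size=64, n_devices=4))   # expected: (3, 8)
```

Consecutive logical chunks land on consecutive devices, which is what allows several disk heads to work on one file at the same time.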
File Systems that Maintain Their Topology
Some modern file systems maintain and understand the device topology without a volume manager. These file systems support both striping and something called round-robin allocation. Round-robin allocation means that each device is used individually: in most cases, each file open moves to the next device, while in some file systems, each directory created moves to the next device.
As we will see, round-robin allocation has some other important implications for performance as well.
File Allocation Comparison
One of the main reasons that many volume managers do not provide a round-robin allocation method is the interaction between the volume manager and the file system. Every file system must allocate space and maintain consistency; this is one of the main purposes of the file system. There are multiple types of file system allocation, but the real issue is that a volume manager presents a single flat address range for the block devices, the file system allocates from that range, and the volume manager translates each address to one of the underlying devices. It is difficult, but not impossible, for the volume manager to pass the underlying device topology up to the file system. Just as important, most file systems designed to work with volume managers have no interface for understanding the underlying volume topology.
How Volume Managers and File Systems Work
To choose the best file system for the application environment, it is important to fully understand how volume managers and file systems work internally. By understanding the inner workings of the software, you will have a much better idea what the tunable parameters represent and how to improve performance with those tunables for your available hardware and the application environment.
Each file system has a method for representing the space allocated to files within the file system. The two most common methods are:
- Extents
- Indirect blocks
If the file system is using extents-based allocation, space is allocated within the inode for block address locations for the file data. For most file systems, 15 extent addresses are used for the data in the base inode, and the last address in the inode is linked to another inode where an additional 15 extent addresses are available.
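A toy extent map shows why this representation is compact: a large file needs only one record per contiguous run of blocks. The field layout is illustrative, not any particular on-disk format.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    start_block: int  # first device block of the run
    length: int       # number of contiguous blocks

file_map = [Extent(1_000, 512), Extent(9_000, 256)]  # 768 blocks, 2 records

def device_block(extents, file_block):
    """Translate a file-relative block number to a device block number."""
    for extent in extents:
        if file_block < extent.length:
            return extent.start_block + file_block
        file_block -= extent.length
    raise ValueError("offset beyond end of file")

print(device_block(file_map, 600))  # expected: 9088 (88 blocks into extent 2)
```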
With indirect block allocation, the last block of space allocated for a file points to the next allocation. Space is allocated for the data in the base inode, and the last block of the allocated data space points to the next allocated space. Indirect blocks were originally designed for the UFS file system in 1970, when disk drives were slow, since no seek back to the inode was required to allocate additional space.
Sun's UFS file system is the classic example of this allocation scheme, and helpful write-ups on UFS internals are available online.
Read/write performance with indirect block allocation is generally slower than with extents-based allocation. I am unaware of any modern file systems using indirect blocks for space allocation because of the large performance penalty for random I/O. Even for sequential I/O, the performance of indirect blocks is generally still worse than that of extents-based file systems.
Free Space Allocation and Representation Methods
Each file system uses an algorithm to find and allocate free space within the file system. Most file systems use binary trees (btrees) to represent free space, but some use bitmaps. Each method of free-space representation has its advantages and disadvantages.
The use of bitmap representation is less common. In this method, each bit in the map represents a single allocation unit, such as 1,024 bytes, 512 KB, or even hundreds of megabytes; a single bit can therefore represent a great deal of space.
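Finding space in a bitmap amounts to scanning for a run of clear bits, which is why allocation time grows with the size of the map. A minimal sketch, using one boolean per allocation unit:

```python
def find_free_run(bitmap, units_needed):
    """Return the first index of a run of free units, or None."""
    start, length = None, 0
    for i, used in enumerate(bitmap):
        if used:                      # hit an allocated unit: restart the run
            start, length = None, 0
            continue
        if start is None:
            start = i
        length += 1
        if length == units_needed:
            return start
    return None

# 16 allocation units; a 3-unit request skips the 2-unit hole at index 4.
bitmap = [True] * 4 + [False] * 2 + [True] * 2 + [False] * 5 + [True] * 3
print(find_free_run(bitmap, 3))  # expected: 8
```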
Binary Trees (Btrees) Representation
A btree is basically a sorted list of all the free allocations and used space in the file system. It is important to understand how space is allocated from the tree; btrees are used in most file systems to find free space.
Free Space Allocation
Whichever representation is used (btree or bitmap), free space must be found and allocated through it. These allocators find free space using their internal search algorithms. The two most common methods are first fit and best fit.
The first fit method tries to find the first space within the file system that matches the allocation size requested by the file being allocated. In some file systems, the first fit method is used to find the space closest to the last allocation of the file being extended, thereby allowing the file's allocations to occupy sequential block addresses within the file system.
The best fit method tries to find the best place in the file system for the allocation of the data. This method is used to try to reduce total file system fragmentation. This method always takes more CPU cycles than first fit, as the whole file system must be searched for the best allocation. (Note: In systems that use round-robin allocation, only the device that the initial allocation was made on needs to be searched.)
This method also works better at reducing fragmentation, especially for large (multi-megabyte) allocations or when files cannot be pre-allocated (for file systems that support pre-allocation). Most vendors do not support best fit, and most allocations in file systems are not large, so the overhead would be huge. The old Cray NC1FS supported this method, using hardware vector registers to speed up the search.
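The two policies are easy to contrast in code: on the same free list, first fit returns the earliest hole that is big enough, while best fit pays for a full scan to return the tightest one.

```python
def first_fit(free, want):
    """free is a list of (start, length) holes; returns (index, start)."""
    for i, (start, length) in enumerate(free):
        if length >= want:
            return i, start               # first hole big enough wins
    return None

def best_fit(free, want):
    fits = [(length, i) for i, (_, length) in enumerate(free) if length >= want]
    if not fits:
        return None
    _, i = min(fits)                      # smallest hole that still fits
    return i, free[i][0]

holes = [(0, 8), (20, 64), (100, 16)]
print(first_fit(holes, 12))  # expected: (1, 20) -> takes the 64-block hole
print(best_fit(holes, 12))   # expected: (2, 100) -> takes the 16-block hole
```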
In general, btrees do a better job with small allocations, though files can become fragmented. Bitmaps are better for larger allocations, but allocation can take much longer because the map must be searched for free space. Both of these methods can be optimized for your operational requirements through tunable parameters in the file system and volume manager, as well as through RAID configuration and allocation, which we will review in a few months. Some file systems take a hybrid approach to allocation, but these are generally proprietary and require a non-disclosure agreement to understand completely.
You should now have a good understanding of how file systems and volume managers work internally. Next month we will apply this information to specific tuning issues and to understanding file system and volume manager tradeoffs and choices. As we follow the data path, we'll then move on to RAID devices and discuss how to put all of this information to good use.
We recently had the opportunity to sit down with Kris Hammond, the Chief Scientist for Narrative Science. Narrative Science focuses around automating text generated from data, turning raw data into insightful accounts. Hammond has spent over 20 years working in and developing the AI labs at the University of Chicago and Northwestern University, making him uniquely placed to offer perspectives on the past, present and future of AI. In the first part of our discussion, we discussed the technologies which will shape the future of machine learning; in this installment, Hammond discusses the future of AI, and whether or not robots could actually wipe out humanity and steal our jobs.
When we talk about AI, almost anyone you talk with will say that they think that image of AI, the genuine artificial intelligence, building a system as intelligent as, if not more intelligent than, a human being, is simply not feasible or possible. Unless we start talking about machines killing us, and then the response is "Oh my god, we have to be terrified of this".
I think the reality is that we have complete flexibility in terms of building the things that we're going to build. A true AI of the future is going to have a goal structure associated with it. Really, all you need to do is make sure that one of the higher-priority goals is: don't kill everybody. I know Elon Musk is a very present figure, a very smart man, but when it comes to existential threats, I'm actually a little more worried about New York being underwater in 30 years. That worries me a lot more than the vague possibility of an AI which decides to hunt us down and kill us. In fact, from a Narrative Science point of view, when we look at what we do, what's Quill going to do? Explain someone to death? Because that's what it does: explaining things.
So I think when we get a little further down the line, and we get closer and closer to what looks like a genuine, complete AI systems, that’s when it’s time to consider, “Okay, what are the constraints going to be?” But the notion that we should start regulating now, as Musk suggests? I think that’s absurdist. There is no point in regulating something that is a glint at this point in people’s eyes. Now, I actually do believe that we will have complete AI. I believe that people are causal beings and that AI and computers live in the same causal environment, and we will have machines that are as- if not more- intelligent than we are. Maybe in my lifetime.
But it's not time to worry about killing sprees quite yet. Although my concern is that right now at least a third of the marriages in the United States were the result of online dating. Which means that there are algorithms out there that are actually determining the breeding habits of people in the United States. If I were an AI, I wouldn't blow everyone up. I'd just insert myself into that process and make sure the system matched up people who were nice and calm, and make the entire species calm for the rest of time.
For a lot of people historically, AI has meant ‘killer robots’. I understand that. But nowadays, there seems to be this huge focus on AI stepping in and taking over jobs, and automation in general. And most for most of us, there’s still a focus on the blue collar side, but I think that there’s a growing awareness of the white collar side.
I think the reality is that AI is not going to take over jobs; it’s going to take over work. If you look at the work that Watson’s taking on, that Narrative Science is taking on, it’s the work that’s not particularly interesting or enjoyable for people. Having Narrative Science step in to look at the data and do the reporting means that the people who were doing that reporting can step away from doing commodity work and they can actually start working on what a data scientist or an analyst should be doing. They can focus on more speculative work, more discovery work, exploratory work against that data, to find new things instead of reporting on the things they have already found.
I think for AI in general, the goal is not to make the machine smarter and destroy us, but to make machines smarter and as a result, put us in a position where we no longer have to deal with the machine, as an unintelligent device which requires frequent input and supervision. We can deal with the machine as a partner, whose job is to make us smarter. We get smarter because it gets smarter. Because who in the world wants to actually look at a spreadsheet, or figure out what’s going on in the visualization, or go to massive textual data to get the answer to a question? No one wants to do that. As the machine takes more and more of that on, our lives become more human.
And so, AI moving forward is part of the process of more deeply humanizing us: in our work, in our lives, in our thinking. I think there will be a moment when we finally embrace that, but I wish we could get there sooner. We should understand the excitement of having intelligent partners whose job is to help us, to move us forward, and to give us more of what it means to be human.
Experimenting with AI is a critical next step for businesses seeking digital disruption. It frees employees from tedious tasks and allows certain activities to scale in ways that were earlier financially unfeasible. It is, however, not to be taken lightly. AI applications must be built appropriately with careful monitoring to minimize bias, ethically questionable decisions, and poor business outcomes.
Although Artificial Intelligence (AI) has immense promise within enterprises, it is still mainly applied as a narrow problem-solving tool. AI is prone to misuse because many companies lack the funding, skills, and vision to apply AI in a truly disruptive way.
However, just because AI isn’t evident in day-to-day operations doesn’t imply it isn’t at work elsewhere within the company. Ethical flaws in AI, like many other ethical challenges in business, are often hidden. Whether on purpose or not, an AI project or application that crosses ethical lines can be an optical and logistical nightmare. The key to avoiding ethical issues in AI is to establish corporate governance from the start.
Developing AI with Trust and Transparency
There have already been several instances of AI gone wrong. These incidents not only make for poor headlines and social media backlash but also jeopardize other legitimate AI use cases that will never be realized if the technology is still viewed with suspicion. For example, AI has the ability to enhance cancer diagnosis and identify individuals at high risk of hospital readmission, requiring further support. Businesses must learn to develop AI people trust to gain the full benefits of these powerful technologies.
Ethical AI is Impossible to Achieve in a Vacuum
If AI applications are implemented poorly, they can have far-reaching consequences. For instance, this is a common occurrence when a single department begins to experiment with AI-driven activity without monitoring. Is the team aware of the potential ethical ramifications if the experiment goes wrong? Is the implementation in line with the organization’s current data access and retention policies?
It's difficult to answer these questions without supervision. Without governance, it can be considerably more challenging to bring together the stakeholders required to address an ethical breach if one occurs. Oversight should not be viewed as a constraint on innovation but rather as an essential check to ensure AI operates within a set of ethical bounds. Oversight should ultimately be the responsibility of the Chief Data Officer (CDO), or the CIO in organizations that don't have one.
Always Have a Plan in Place
The organizations at the center of the worst headlines about AI projects gone wrong often have one thing in common: they weren't prepared to address questions or explain decisions when things went awry. This can be remedied by oversight. When the very top of a business has a good understanding of AI and a healthy mindset, there's less chance of being caught off guard.
Mandatory Testing and Due Diligence
With a little more patience and testing, many of the classic examples of AI bias could have been avoided. Before the product is released to the public, more testing can uncover bias. Furthermore, any AI application should be thoroughly evaluated from the start. Due to its complexity and undetermined possibilities, AI must be employed carefully and strategically.
Establish an AI Oversight Function
Businesses spend a lot of time and money managing access to sensitive documents to preserve their customers' privacy. Their records teams classify assets and set up infrastructure to ensure that only the right departments and job roles can access them. This structure could be used to establish an AI governance role within a company. A dedicated team can estimate the potential impact of an AI application and how often and by whom its outcomes should be reviewed.
In the earliest days of what could be considered cybersecurity, the primary threats were malicious programs that would operate against the wishes of the machine and its operator. These programs, referred to as viruses, served as the progenitors of what we generally refer to in modern parlance as malicious software or “malware.”
Because the long history of malware and anti-malware protection is often the foundation of most compliance frameworks and approaches to cybersecurity, we’re touching on the topic, including what it is and how it has evolved.
What Is Malware?
Malware is a catch-all term for malicious software. Not to be confused with web application exploits or other forms of hacking, malware refers specifically to programs that execute on a host computer with the express intent of delivering a payload. Depending on the malware and its intent, this payload could take one of many different attack forms.
Generally speaking, malware will attempt to perform a few operations, including writing data to a host computer, taking control of operating resources and other programs, hiding its presence from the computer and its owner, and propagating itself to any connected systems.
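To make the defensive side concrete, here is a minimal, illustrative Python sketch of the signature-based detection that early anti-malware tools relied on: hash each file and compare the digest against a set of known-bad signatures. The digest and directory below are hypothetical placeholders, and real engines add heuristics, behavioral analysis, and millions of signatures.

```python
import hashlib
from pathlib import Path

# Placeholder digest, not a real malware signature; real engines
# maintain millions of these plus heuristic and behavioral checks.
KNOWN_BAD = {"ab" * 32}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return files whose digest matches a known-bad signature."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD]

if __name__ == "__main__":
    for hit in scan("./downloads"):   # hypothetical directory
        print(f"Known-bad signature match: {hit}")
```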
Some types of malware include:
- Viruses: The earliest forms of malware were conceptual, following theories from computer scientists on academic or military networks about the feasibility of a self-replicating, damaging program. The name "virus" comes from the biological analogy: the program infects a host, creates copies of itself, and uses communication vectors to transmit itself to other potential hosts. In many cases, early viruses functioned precisely like this: they were stored on removable disks and then spread, creating thousands of copies of themselves to clog systems.
- Worms: The term "worm" is often used synonymously with virus, and in many cases the two operate identically. One of the critical differences, however, is transmissibility. A virus requires a host computer or user to propagate it, usually through specific user actions (sharing a disk, emailing a program, etc.). Worms, on the other hand, are built to exploit weaknesses inherent in a system and can transmit themselves across systems without direct user or system intervention. Because of this, worms have been behind some of the fastest-spreading malware ever known.
- Trojans: Trojans aren’t distinct from viruses, for the most part, outside their delivery mechanism. Trojans, as their name suggests, are malware delivered to systems presenting as legitimate software. In many cases, this can simply be a file named with a standard filename and icon (such as a well-known anti-malware program or a game) with the file extension hidden. In more sophisticated attacks, trojans can obfuscate their payload from anti-malware software to deliver their attack.
- Ransomware: Ransomware is a relatively modern form of malware. Unlike viruses that may attempt to clog or hijack a system, ransomware uses strong cryptography to lock up system data, demanding a ransom before the attacker provides the decryption key.
- Rootkits: Some malware will drop a specific piece of software called a rootkit. Unlike other forms of malware, a rootkit is intended to provide long-term, secret control over a system. The name "root" comes from the nomenclature of older Unix and Linux systems, where "root" is the system's administrative user. Rootkits seek to gain complete administrative control over the computer so they can essentially run it undetected.
- Grayware: Grayware is a broader category of software that, while not technically malicious, skirts close to that intent. Different forms of adware (unwanted software that serves ads) and spyware (unwanted software that tracks behavior) are forms of grayware.
What Are Malware Attack Vectors?
In the earliest days of viruses and malware, the most common attack vectors included removable media or local area networks that were air-gapped from the outside world. The advent of the modern Internet saw a parallel explosion of public malware.
Some common vectors that developers of malware exploit include:
- Phishing and Social Engineering: Phishing is still one of the most common forms of attack, and in many cases hackers use it to gain access to a system via user credentials. Some hackers, however, use phishing attempts to get recipients to open software: if an attacker pretending to be your IT department sends a trojan to a company, at least one person may run it.
- Vulnerable Default Software: Hackers will often use system scanners or other manual attacks to identify out-of-date or unpatched systems or resources that still use default security settings. If these are attacked and breached, it’s trivial for the attacker to launch malware into the system.
- Operating System Saturation: Hackers look for targets of opportunity, which means they seek out common vulnerabilities across a wide range of systems. Accordingly, more popular operating systems suffer more attacks, so widely deployed systems (like Windows or Android) face more malware than Linux systems.
Historical Examples of Malware
Theoretical and experimental versions of computer viruses were developed and released throughout the 1970s. However, due to the closed nature of these systems and the relative simplicity of the programs, this malware didn’t impact society more broadly.
However, moving into the 1980s, malware became a reality. Threats rapidly evolved, and the next forty years saw giant leaps in innovation, bringing plenty of stress to security experts.
Some of the more famous versions of malware released in the past few decades include:
Popular and Consumer Malware
- Elk Cloner (1982): Elk Cloner is often considered the first publicly released virus. Initially written for Apple II systems by a high school student, it was transmitted through the exchange of floppy disks, using the machine's disk-reading features to load itself into a host computer's memory automatically. The virus didn't cause any harm; it displayed a silly poem instead.
- Morris Worm (1988): The Morris worm exploited buffer overrun vulnerabilities to propagate over Unix systems using the Sendmail program. More a proof of concept than anything else, the worm's delivery mechanism quickly grew out of control, and it began rapidly infecting DEC VAX machines, bogging down system performance with out-of-control copies of itself. It is considered the first example of a worm spreading in the wild and led to the first felony conviction for cybercrime.
- Melissa Virus (1999): A powerful, Windows-based worm, Melissa spread by tricking users into opening a file attachment (a trojan written in Visual Basic) that executed the malicious code. Upon execution, the malware sent copies of itself via email to the first 50 contacts in the user's Outlook program. It eventually infected 1 million systems and showed how cultural ignorance of security best practices (and developers' slowness in patching vulnerabilities) led to widespread infections.
- ILOVEYOU (2000): Similar to Melissa, ILOVEYOU used Visual Basic and Outlook to propagate itself. This worm deleted and hid files with specific extensions, rendering systems difficult or impossible to use. Within a few hours of its release (by a student in the Philippines), it spread across the globe, following the rising sun westward as office workers in Asia, Europe, and America turned on their computers in the morning. The virus is estimated to have caused almost $9 billion in damages, with another $10-$15 billion in removal costs.
- CryptoLocker (2013): One of the earliest forms of ransomware, CryptoLocker is a trojan distributed through a large-scale botnet. Experts believe that this trojan has been responsible for roughly $3 million in ransoms stolen from users, but it has now been isolated. Many other hackers, however, followed the idea for other ransomware variants.
Enterprise, Industrial, and Potential State-Sponsored Attacks
- W32.Dozer (2009): This malware is better known for its part in a large, Distributed Denial of Service (DDoS) attack against websites in the U.S., U.K., and South Korea. Targets included websites for the White House, the Pentagon, NASDAQ, and the South Korean Ministry of Defense. Many large enterprise infrastructures were unintentionally hosting the virus, including those for government-associated companies.
- Stuxnet Worm (2010): A sophisticated worm that targets SCADA systems in heavy industrial and research machinery. This worm is considered responsible for an attack against the Iranian nuclear centrifuge program, and many global security experts consider it a state-created cyberweapon, often attributed to the United States (although this has never been confirmed), that spread beyond its intended target.
- WannaCry (2017): During the 4-day outbreak of this ransomware (which ended with the discovery of a killswitch), it infected roughly 200,000 computers and caused potentially billions of dollars in damages. This ransomware is notable for utilizing the EternalBlue exploit, a well-known vector used to compromise older versions of Windows.
Equip Your Security and Compliance Against Malware with Continuum GRC
Malware is nearly always in the background of our security efforts. With the right anti-malware programs, we consider ourselves relatively safe. However, modern threats like Advanced Persistent Threats (APTs) are utilizing old tricks like phishing and backdoors to continue to wreak havoc.
Continuum GRC is cloud-based, always available and plugged into our team of experts. We provide risk management and compliance support for every major regulation and compliance framework on the market, including:
- NIST 800-53
- DFARS NIST 800-171
- SOC 1, SOC 2, SOC 3
- PCI DSS 4.0
- IRS 1075
- COSO SOX
- ISO 27000 Series
- ISO 9000 Series
And more. We are the only FedRAMP and StateRAMP-authorized compliance and risk management solution worldwide.
Continuum GRC is Proactive Cyber Security®, and the only FedRAMP and StateRAMP Authorized cybersecurity audit platform worldwide. Call 1-888-896-6207 to discuss your organization's cybersecurity needs and find out how we can help your organization protect its systems and ensure compliance.
It’s no secret that Active Directory (AD) credentials can easily be compromised. This is the reason why it is absolutely crucial for organizations to secure those credentials to protect against network breaches.
Active Directory – One identity source for all access
Today, Active Directory is still the primary source of trust for identity and access for more than 90% of organizations.
It provides 'authentication services' to verify the identity of the user, 'authorization' to allow access to resources on the network, and 'group policy processing' to enforce security settings across users and servers in the company.
Nowadays, as more and more organizations allow or are forced to adopt remote working, users depend on RDP and VPN connections for remote access. VPNs rely on an on-premises identity source – most of the time Active Directory – to authenticate users who remotely access the company network.
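As a rough illustration of how an application might verify credentials against Active Directory, here is a hedged Python sketch using the ldap3 library. The domain controller address and NetBIOS domain name are hypothetical placeholders; a production setup would also validate certificates and respect lockout policies.

```python
# pip install ldap3
from ldap3 import Server, Connection, ALL, NTLM
from ldap3.core.exceptions import LDAPBindError

def verify_ad_credentials(username: str, password: str) -> bool:
    """Attempt an NTLM bind; success means AD accepted the credentials."""
    server = Server("ldaps://dc01.example.local", get_info=ALL)  # hypothetical DC
    try:
        conn = Connection(
            server,
            user=f"EXAMPLE\\{username}",   # hypothetical NetBIOS domain
            password=password,
            authentication=NTLM,
            auto_bind=True,                # raises LDAPBindError on failure
        )
        conn.unbind()
        return True
    except LDAPBindError:
        return False
```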
Controlling access is essential to stopping attacks
Cyber-attacks on Active Directory are quite common. In successful attacks, Active Directory is manipulated, encrypted, or destroyed. The reason is simple: there are not many IT assets that allow criminals to spread after an initial breach, and one stands above them all: Active Directory.
80% of data breaches involve the use of compromised credentials. Credentials serve as an entry point into an organization's network and its valuable data. Without compromising corporate Active Directory credentials, a criminal can accomplish very little.
What's important to understand is that this first access is only the way into your network. Most of the time it's a low-level endpoint with no rights to access valuable resources. But this initial foothold allows the hacker to start moving laterally within the network to find data of value.
In fact, except for perimeter attacks (where attack methods such as SQL injections need no credentials to access data), all layers of access within your environment require a logon at some point. Think about it: endpoints require a logon for access, moving laterally requires authentication to access a target endpoint, and access to data first necessitates an authenticated connection.
To summarize, no logon, no access!
Access management for Active Directory environments
The concept of effective access management centers around five primary functions – all working in concert to maintain a secure environment:
Two factor authentication – Regulating user access involves authentication to verify the identity of a user. But authentication using only a strong username and password doesn’t cut it anymore. Two-factor authentication combines something you know (your password) with something you have (a token or authenticator application).
Access restrictions – Policies can define who can log on, when, from where, for how long, and how often. They can also limit specific combinations of logon types (such as console- and RDP-based logons).
Access monitoring – Awareness of every single logon as it occurs serves as the basis for policy enforcement, alerting, reporting, and more.
Access alerting – Notifying IT – and users themselves – of inappropriate logon activity and failed attempts helps alert on suspicious events involving credentials.
Access response – Allows IT to interact with a suspect session, to lock the console, log off the user, or even block them from further logons.
By putting these sets of functionality together, access management puts a protective layer at the forefront of your network, ensuring use is appropriate. A minimal sketch of how such policy checks might work follows.
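This hedged Python sketch shows how a logon event could be evaluated against restriction rules. The user names, subnets, and rules are invented for illustration; a real product would evaluate far richer context.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogonEvent:
    user: str
    logon_type: str      # e.g. "console", "rdp", "vpn"
    source_ip: str
    timestamp: datetime

# Hypothetical per-user policy: allowed hours, logon types, and subnets.
POLICY = {
    "jsmith": {
        "hours": range(7, 19),             # 07:00 to 18:59 only
        "types": {"console", "rdp"},
        "subnets": ("10.1.", "10.2."),
    },
}

def evaluate(event: LogonEvent) -> str:
    """Return 'allow', 'deny', or 'alert' for a logon attempt."""
    rules = POLICY.get(event.user)
    if rules is None:
        return "alert"                     # unknown user: flag for review
    if event.timestamp.hour not in rules["hours"]:
        return "deny"                      # outside permitted hours
    if event.logon_type not in rules["types"]:
        return "deny"                      # e.g. VPN not allowed for this user
    if not event.source_ip.startswith(rules["subnets"]):
        return "alert"                     # unusual network location
    return "allow"

print(evaluate(LogonEvent("jsmith", "vpn", "10.1.4.7", datetime.now())))
```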
Why should you use access management?
Many security solutions attempt to act at the point where malicious actions occur. Access management, however, inserts itself into the process seamlessly, putting a stop to the threat before the action even occurs.
1. The logon is the foundation of every cyber attack
As noted above, a hacker needs to log on for any attack to be successful. Whether done via a remote session, via PowerShell, by mapping a drive, or by logging on locally to a console, your network requires that a user log in before being given any kind of access.
2. Automated access controls can really stop an attack
Most security solutions on the market claim they can stop attacks. However, there is a difference between alerting IT to a potential threat (which only stops an attack once IT intervenes) and taking action to actually stop the attack.
Identifying a potential network breach with access management occurs before any access is achieved, which means before any damage has been done. With access management, you can automatically block access if a logon falls outside a set of established rules.
3. Limit false positives
IT Teams don’t want a security solution that will generate a storm of alerts that are false positives. They need to have solutions in place that are certain about the threat potential.
Access management is configured based on the normal use of the environment only generating alerts when a logon is out of policy.
4. Seamless integration with Active Directory
Access management can integrate with Active Directory to extend, not replace its security. Solutions that work along the existing logon process don’t frustrate IT teams.
5. Easy adoption by users
To ensure a solution is adopted by end users, it needs to operate behind the scenes. If it's overwhelming and impedes productivity, users won't be able to do their jobs correctly. Access management protects users and the network invisibly, intervening only at the moment a user's activity conflicts with security policy.
6. Training-less Implementation
Training every user every time you implement a new security solution would be way too time-consuming. Access management doesn’t require any training, which makes its implementation easy in any organization.
7. Zero Trust Model
The zero trust principle is 'never trust, always verify'. It emphasizes the need to see and verify everything that accesses and happens on the corporate network. Access management controls can be used to place stricter limits, alerts, and responses on high-risk users.
8. Cost Effective
Security doesn't have to be costly, but it does have to be effective relative to its cost. Access management ensures you get the highest security protection for the least amount of money spent.
Effective access management solutions give organizations the ability to seamlessly secure connections on their Windows Active Directory network. It allows business to continue as normal but with the scrutiny and control necessary to automatically stop suspicious activity at the point of entry.
How do dark web markets and cryptocurrencies work, and how are they used together?
Innovations in computer software and technology are often created with good intentions. Unfortunately, criminals rapidly employ novel technology to enhance existing criminal practices or produce new forms of crime. One state-of-the-art form of crime is the use of cryptocurrency to execute transactions, mostly illegal ones, on the dark web. This blog aims to raise awareness among experts and enthusiasts in the cybersecurity field by explaining how dark web markets and cryptocurrencies work, and how they are used together.
Before we talk about dark web markets, cryptocurrencies, and cybercrime, it is important to establish the theoretical basis.
Cybercrime: old wine in new bottles
Cybercrime is nothing but old wine in new bottles. It has been examined in the context of different criminological theories:
- Routine activities’ theory: By Leukfeldt, and Yar (2016) in their paper, “Applying routine activity theory to cybercrime: A theoretical and empirical analysis”;
- Social learning theory: By Morris and Higgins (2010) in their paper, “Criminological theory in the digital age: The case of social learning theory and digital piracy”;
- Space transition theory: By Jaishankar (2008) in his seminal study “Space transition theory of cybercrimes”. It has been found to be most suitable in explaining cybercrime and cybercriminals in a virtual environment. As per the author, such crimes are usually conducted by individuals with repressed criminal behaviour. Certainly, anonymity is an important motivating factor that we need to consider.
Consequently, the development of the dark web market can be examined in the context of space transition theory. A dark web market is a commercial space built on darknets such as I2P or Tor. Various types of business are conducted on these black markets:
- Brokering or selling transactions encompassing drugs;
- Counterfeit currency;
- Forged documents;
- Stolen credit card details;
- Weapons, cyber-arms;
- Unlicensed pharmaceuticals;
- Other illegal products.
Therefore, fundamentally speaking, the dark web market is a mere transition of space from the physical to the virtual realm. However, space in this context is highly dynamic and complex. Criminals in this environment who possess computer skills and cyber technology can evade surveillance and user identification.
The dark web actually functions much like the regular internet, using the same TCP/IP framework to send FTP and HTTP traffic within and between networks. Nevertheless, it is not indexed by regular search engines; software such as Tor, I2P, or Freenet is needed to access it. The most widely used is Tor, which employs the idea of 'onion routing': user data is encrypted in layers and then sent through a series of relays in the Tor network, each of which peels away exactly one layer.
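As a toy illustration of that layering idea (not Tor's actual protocol, which uses per-hop circuit keys, TLS, and much more machinery), the Python sketch below wraps a message in three layers of symmetric encryption so that each relay can remove exactly one layer:

```python
# pip install cryptography
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

def wrap(message: bytes, keys) -> bytes:
    """Encrypt in reverse order so the first relay peels the outermost layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def route(onion: bytes, keys) -> bytes:
    """Each relay removes one layer, learning nothing about the other layers."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

packet = wrap(b"request for a hidden service", relay_keys)
assert route(packet, relay_keys) == b"request for a hidden service"
```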
Cryptocurrencies and the illegal market of the dark web
At present, there has been a considerable surge in the use of the dark web to purchase illegal products. For example, the UNODC World Drug Report 2019 estimates that the share of people who purchased drugs over the dark web more than doubled, from 4.7 percent in January 2014 to 10.7 percent in January 2019.
This extraordinary surge can be attributed to the use of cryptocurrencies, especially bitcoin. As the leading payment method on the dark web, bitcoin is characterized by three features: the lack of a central issuer or any legal entity behind it, the absence of the regulation and law that govern conventional fiat money, and lower vulnerability to political or economic problems. In essence, cryptocurrencies employ decentralised technology to allow users to make secure transactions and store money. They operate on a distributed public ledger (a blockchain), a record of all transactions updated and held by nodes, as the toy sketch below illustrates.
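The core idea of such a ledger can be shown in a few lines of Python: each block commits to the previous block's hash, so tampering with history breaks every link after it. This toy deliberately omits proof-of-work, digital signatures, and consensus.

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    """A block commits to its transactions and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["alice pays bob 1 coin"], prev_hash="0" * 64)
block_2 = make_block(["bob pays carol 1 coin"], prev_hash=genesis["hash"])

def chain_is_valid(chain):
    """Simplified check: every block must reference its predecessor's hash."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

print(chain_is_valid([genesis, block_2]))  # True until someone tampers
```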
Thus, the anonymity offered by dark web browsers, combined with the absence of payment oversight, has been put to use by criminals in a new form of crime: cryptocurrency-backed, darknet-enabled cybercrime. Afilipoaie and Shortis (2015) describe the process as 'dealer to doorstep' in their research, From Dealer to Doorstep—How Drugs Are Sold on the Dark Net. It involves a buyer exchanging fiat money for cryptocurrency and making a purchase; the market holds the cryptocurrency in its escrow account in exchange for a commission, and the vendor receives payment in cryptocurrency when the order is finalized, again exchanging it for fiat money. Crypto-mixers (services that let users mix their coins with other users' coins) are often used in transactions for added anonymity.
Combined data from four dark web marketplaces (Silk Road 3.1, Apollo Market, Empire Market, Elite Market) shows that drugs and digital products are the most prevalent categories of illicit products purchased by cybercriminals using cryptocurrency. During the COVID-19 pandemic in particular, the dark web has hosted transactions in fraudulent vaccines, personal protective equipment, and hydroxychloroquine.
The trend can also be witnessed in Southeast Asia, where criminals are using Tor to engage in the full range of illicit activities. Although there have been law enforcement operations targeting dark web cybercrime, these operations are largely the outcome of international investigations that began outside the region, with only a small number of cases originating within the region itself. Cybercriminals perceive Southeast Asia as a relatively high-gain/low-risk operational environment.
Consequently, to address cryptocurrency-backed darknets enabled cybercrime, following some basic steps can be a start:
- To develop a global policy that focus on research, training and capacity building support for different regions and countries;
- Increase stakeholders’ commitment and cooperation to sharing intelligence and enhancing international cooperation to counter dark web crime nationally, regionally and internationally;
- Increase specialist political, policy and working knowledge concerning dark web services, intelligence gathering and cryptocurrency investigations in each Southeast Asian country, and regulate cryptocurrency users and exchanges, especially by employing the FATF virtual-assets risk-based approach guidelines.
Following these steps can be a beginning in controlling cryptocurrency-backed, darknet-enabled cybercrime, a Frankenstein's monster that lurks in the realm of the dark web and haunts law-enforcement agencies and policymakers all over the globe.
Have you ever wondered what powers the "People You May Know" suggestions Facebook shows you while you're busy scrolling, or how your online signatures are verified?
Remember the crime documentaries where a graphologist analyzes a murderer's handwriting to find the real culprit? Long gone are the days when all these nitty-gritty tasks were in human hands; now artificial intelligence has taken over these assessments.
In the modern era, neural networks are helping humans navigate new-age transitions in the education, financial, aerospace and automotive sectors. But before learning how they give different sectors a push, it is important to understand the basic concepts of neural networks and deep learning.
Neural networks are a part of deep learning, which comes under the comprehensive term, artificial intelligence. Neural networks are a set of algorithms that are modelled after the human brain. These networks are also known as artificial neural networks (ANN).
Sensory neurons, motor neurons and interneurons form the human brain. Artificial neurons form its replica, a neural network.
Artificial Neural Network (ANN)
An Artificial Neural Network (ANN) is a collection of connected units (nodes) known as artificial neurons. These units closely resemble the neurons of a human brain. Every node is built with a set of inputs, weights, and a bias value; the weights of the network are held within its hidden layers.
Weights and biases are the learnable parameters of machine learning models; they are adjusted to train the neural network. A minimal forward-pass sketch appears below the diagram.
Architecture of an artificial neural network
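For illustration, a single artificial neuron's forward pass takes only a few lines of Python: multiply inputs by weights, add the bias, and squash the result with an activation function. The input and weight values here are arbitrary.

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.1, 0.9])     # three input signals
w = np.array([0.4, -0.6, 0.2])    # learned weights
print(neuron(x, w, bias=0.1))     # an activation between 0 and 1
```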
Applications of Neural Networks
Neural networks are transforming some key sectors including finance, healthcare, and automotive. Because these artificial neurons function in a way similar to the human brain, they can be used for image recognition, character recognition and stock market predictions. Let's understand the diverse applications of neural networks.
1. Facial Recognition
Facial recognition systems are serving as robust systems of surveillance. A recognition system maps a human face and compares it with digital images in its database. They are used in offices for selective entry: the system authenticates a human face and matches it against the list of IDs present in its database.
(Must Check: Facial Recognition Work in Deep Learning?)
Convolutional Neural Networks (CNNs) are used for facial recognition and image processing. A large number of pictures is fed into the database to train the neural network, and the collected images are further processed for training.
Sampling (pooling) layers in a CNN reduce the data for evaluation, and models are optimized for accurate recognition results.
(Related Blog: How does Basic Convolution Work for Image Processing?)
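A minimal Keras sketch of such a CNN is shown below. The input size and number of identities are arbitrary placeholders; production facial-recognition systems use far deeper networks and embedding-based matching rather than plain classification.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB face crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),                    # the "sampling" (pooling) layer
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g. 10 known identities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```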
2. Stock Market Prediction
Investments are subject to market risks. It is nearly impossible to predict the upcoming changes in the highly volatile stock market. The ever-changing bullish and bearish phases were unpredictable before the advent of neural networks. So what changed it all? Neural networks, of course…
To make successful stock predictions in real time, a Multilayer Perceptron (MLP, a class of feedforward artificial neural network) is employed. An MLP comprises multiple layers of nodes, each fully connected to the nodes of the succeeding layer. A stock's past performance, annual returns, and profit ratios are considered when building the MLP model.
Check out this video to learn how an LSTM model is built for making predictions in the stock market.
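As a hedged illustration of the MLP approach, here is a small scikit-learn sketch trained on synthetic data. The three features stand in for the kinds of inputs mentioned above, and no real predictive power is claimed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # e.g. past return, annual return, profit ratio
y = X @ np.array([0.5, 0.3, -0.2]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("R^2 on held-out data:", round(mlp.score(X_test, y_test), 3))
```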
3. Social Media
No matter how cliché it may sound, social media has altered the normal, boring course of life. Artificial neural networks are used to study the behaviour of social media users. Data shared every day via virtual conversations is tracked and analyzed for competitive analysis.
Neural networks model the behaviour of social media users. After analysing individuals' behaviour on social media networks, the data can be linked to people's spending habits. Multilayer Perceptron ANNs are used to mine data from social media applications.
MLP models forecast social media trends and are evaluated with different error metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Squared Error (MSE). The MLP takes several factors into consideration, such as a user's favourite Instagram pages, bookmarked choices, etc. These factors are considered as inputs for training the MLP model; the sketch below shows how those metrics are computed.
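For reference, the three error metrics mentioned above are simple to compute. This sketch uses invented engagement numbers purely for illustration:

```python
import numpy as np

def mae(y_true, y_pred):  return np.mean(np.abs(y_true - y_pred))
def mse(y_true, y_pred):  return np.mean((y_true - y_pred) ** 2)
def rmse(y_true, y_pred): return np.sqrt(mse(y_true, y_pred))

actual    = np.array([120, 95, 130, 80])   # e.g. observed daily engagement
predicted = np.array([110, 100, 125, 90])  # model output
print(mae(actual, predicted), mse(actual, predicted), rmse(actual, predicted))
```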
In the ever changing dynamics of social media applications, artificial neural networks can definitely work as the best fit model for user data analysis.
(Related Blog: Detection of Fake and False News using CNN)
4. Aerospace

Aerospace engineering is an expansive term that covers developments in spacecraft and aircraft. Fault diagnosis, high-performance autopiloting, securing aircraft control systems, and modeling key dynamic simulations are some of the key areas that neural networks have taken over. Time Delay Neural Networks (TDNNs) can be employed for modelling non-linear time-dynamic systems.
Time Delay Neural Networks are used for position-independent feature recognition. Algorithms built on them can recognize patterns, with the pattern recognizers formed automatically by the network from the feature units' data.
Beyond this, TDNNs are also used to provide stronger dynamics to neural network models. As passenger safety is of the utmost importance inside an aircraft, algorithms built on neural network systems help ensure the accuracy of the autopilot system. As most autopilot functions are automated, it is important to maximize their security.
Applications of neural networks
5. Defence

Defence is the backbone of every country, and a country's standing in the international domain is assessed partly by its military capabilities. Neural networks also shape the defence operations of technologically advanced countries. The United States of America, Britain, and Japan are some countries that use artificial neural networks to develop an active defence strategy.
Neural networks are used in logistics, armed attack analysis, and for object location. They are also used in air patrols, maritime patrol, and for controlling automated drones. The defence sector is getting the much needed kick of artificial intelligence to scale up its technologies.
Convolutional Neural Networks (CNNs) are employed for determining the presence of underwater mines, which pose a serious threat to naval vessels and shipping routes. Unmanned Aerial Vehicles (UAVs) and Unmanned Undersea Vehicles (UUVs), autonomous air and sea vehicles, use convolutional neural networks for the image processing involved.
Convolutional layers form the basis of Convolutional Neural Networks. These layers apply different filters to distinguish between images, with larger filters spanning more channels for feature extraction.
6. Healthcare

The age-old saying goes, "Health is wealth." Modern-day individuals are leveraging the advantages of technology in the healthcare sector. Convolutional Neural Networks are actively employed in the healthcare industry for X-ray analysis, CT scans, and ultrasound imaging.
Because CNNs are used in image processing, the medical imaging data retrieved from the aforementioned tests is analyzed and assessed with neural network models. Recurrent Neural Networks (RNNs) are also being employed for the development of voice recognition systems.
(Must Check: Learning Recurrent Neural Network and applications)
Voice recognition systems are used these days to keep track of patient data. Researchers are also employing generative neural networks for drug discovery. Matching different categories of drugs is a hefty task, but generative neural networks have broken it down: they can be used to combine different elements, which forms the basis of drug discovery.
7. Signature Verification and Handwriting Analysis
Signature verification, as the self-explanatory term goes, is used for verifying an individual's signature. Banks and other financial institutions use signature verification to cross-check the identity of an individual.
Usually a signature verification software is used to examine the signatures. As cases of forgery are pretty common in financial institutions, signature verification is an important factor that seeks to closely examine the authenticity of signed documents.
Artificial neural networks are used for verifying signatures. ANNs are trained to recognize the difference between real and forged signatures, and they can be used for verification of both offline and online signatures.
To train an ANN model, varied datasets are fed into the database. The data helps the ANN model learn to differentiate, and the model employs image processing to extract features.
(Related Blog: Hand Gesture Classification using Deep Learning with Keras)
Handwriting analysis plays an integral role in forensics. The analysis is used to evaluate the variations between two handwritten documents; the way words are put on a blank sheet is also used for behavioural analysis. Convolutional Neural Networks (CNNs) are used for handwriting analysis and handwriting verification.
8. Weather Forecasting
Forecasts made by meteorological departments were far less accurate before artificial intelligence came into force. Weather forecasting is primarily undertaken to anticipate upcoming weather conditions beforehand. In the modern era, weather forecasts are even used to predict the possibility of natural disasters.
Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are used for weather forecasting. Traditional multilayer ANN models can also be used to predict climatic conditions up to 15 days in advance. A combination of different neural network architectures can be used to predict air temperatures.
(Must Check: Weather Forecasting Using Big Data Analytics)
Various inputs like air temperature, relative humidity, wind speed and solar radiation are considered when training neural-network-based models. Combination models (MLP+CNN, CNN+RNN) usually work better in the case of weather forecasting.
Neural networks have a myriad of applications: from facial recognition to weather forecasting, these interconnected layers (the human brain's replica) can do a lot with some simple inputs. ANN algorithms have simplified assessments and improved on traditional algorithms. With humanoid robots like Grace on the way, the world can expect some sci-fi movies to turn into reality pretty soon!
Network security firm Bkav Corporation has warned that a destructive virus called W32.XFileUSB has reportedly infected about 1.2 million computers in Vietnam.
According to researchers, the virus is being spread by hackers via disk drive spoofing techniques or forging data file shortcuts on the USB to infect targeted computers.
Once a computer is infected and the user plugs in a USB device, the virus erases all data on the device and replaces it with fake files containing malware. These replacement files carry the same icon and similar names as the original data files, so users are tricked into opening them, activating the virus and spreading it to other computers.
BKAV’s Vice Chairman Vu Ngoc Son notes that data loss is a frequent security issue in Vietnam. He added that the virus not only wipes out data from USB drives resulting in data loss, but is also capable of taking control of computers to download other types of malicious code designed to spy on users or launch further targeted attacks.
This is not the first time the virus has caused widespread chaos in Vietnam.
In January, BKAV observed a similar incident in which around 41,000 computers were infected by the same destructive virus. Computers belonging to government agencies, businesses, and individual users were impacted in that attack.
In 2017, Vietnam suffered economic losses of up to VND 12.3 trillion ($541.2 million) due to computer viruses, up 18.27% from 2016, when it suffered losses of VND 10.4 trillion ($457.6 million). Losses in 2016 were in turn 41% higher than in 2015, researchers said.
Historic technology firsts changed the world and inspired the future. The course of innovation may not always run smoothly, but trailblazers never lose their drive: technology has come a long way from the first computing machines and the days of Alexander Graham Bell, but the spirit that drives the world's greatest innovators never slows. Take a road trip down history lane:
California, 1971 - Pioneering the Personal Computer - Long before the rise of Silicon Valley, computer engineer John Blankenbaker put the California tech industry on the map—and transformed the future of computing—when he developed the first affordable personal computer in 1971.
Illinois, 1973 - Can You Hear Me Now? - In 1973, Motorola engineer Martin Cooper—inspired by the always-connected, work-anywhere communication devices on Star Trek—designed a two-and-a-half pound cell phone at his workspace in Chicago.
New York, 1969 - Mission to the Moon - When Apollo 11 lifted off from Florida in 1969, it took with it the incredibly complex lunar module that would land Neil Armstrong and Buzz Aldrin on the moon—a marvel of engineering (and modern connectivity) designed in Bethpage, New York.
New Hampshire, 1972 - Game On - End your journey in 1972 in New Hampshire, where Magnavox released the Odyssey video game system—the first multiplayer, multiprogram console—designed by Ralph Baer.
Washington, 1976 – Fast Company – After budget cuts forced computer designer Seymour Cray to halt work on a supercomputer he was designing for his employers, he started his own business to continue working on the project. Cray Research released the Cray-1 supercomputer in 1976, setting a new standard for computing speed.
Ohio, 1974 – Setting the Bar – More than two decades after Joe Woodland filed a patent for the barcode—an information storage design inspired by his days learning Morse code as a Boy Scout—a supermarket cashier in Troy, Ohio, scanned the first item with a UPC label: a pack of Wrigley’s Gum.
The advancements of the last 50 years have fueled achievements and milestones in every tech sector. Where are you going next? With HPE solutions—including the cost-optimized hybrid cloud services of GreenLake, the guaranteed-available storage of your data, and proven as-a-service virtual desktop infrastructure—your business can leave its own mark on history. See resources that may help steer you in the right direction below. Questions? Chat in lower right or email us.
Prepare for your own landmark achievements with HPE solutions that centralize management of on-premises and cloud data; store and protect your essential information; and enable seamless remote work environments.
AI-driven, as-a-service storage built for the cloud securely stores, protects, and archives your data—allowing faster insights, optimized operational costs, and guaranteed availability.
Pick up the pace of your digital journey with pay-as-you-go GreenLake hybrid cloud services that scale up and down according to your needs, prevent overprovisioning, and centralize apps and data across all your locations.
As-a-service virtual desktop infrastructure empowers remote workers to stay connected, secure their critical data and programs, and collaborate smoothly, consistently, and efficiently.
Shave essential hours off your IT workload with reliable support services that allow your team to focus on the finish line with a 65 percent shorter time to project deployment.
A major grocery chain was concerned about making it easy for employees to connect at home securely. Our solution addressed their challenge quickly, helped them scale, and was easy to support.
When a large university needed high-performance computing (HPC) cluster manageability tools, they leaned on us to provide a solution that would be “easy to use and manage” by their staff.
Are you concerned with disaster preparedness and business continuity? See how TCU stood up active-active data centers with zero downtime with HPE 3PAR and VMware in this case study.
As part of a major data center relocation project, Rent-A-Center needed assistance migrating existing virtual machines. See how our deep VMware expertise saved our customer over $1M.
User Experience Assessment: Gain first-hand intelligence of where user experience problems exist with recommendations on how to resolve the problems in the most cost-effective way.
Are you looking for a way to simplify your IT purchases? Mobius Value Portal (MVP) has the ability to be custom tailored to your company’s needs and to automate the sales and procurement process. | <urn:uuid:efe0e52d-2035-4045-a31b-312dc7c4c3aa> | CC-MAIN-2022-40 | https://mobiuspartners.com/roadtrip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00784.warc.gz | en | 0.918757 | 940 | 2.796875 | 3 |
Introduction to Python Programming Training introduces professionals to the fundamentals of the Python programming language. This course familiarizes you with programming best practices and helps you learn to store and represent data with Python data types and variables. It teaches you to use loops and conditionals to manage the flow of your programs, and to harness complex data structures like sets, dictionaries, lists, and tuples to keep collections of related data.
After this course, you will be able to define and document your custom functions, handle errors, and write scripts more efficiently. It will enhance your ability to find and use modules in the Python Standard Library and third-party libraries. This program covers the basic Python scripting elements and the best methodologies for exploring Python's object-oriented features. It is an excellent resource for individuals who want to learn Python programming and earn a Python programming certification.
Professionals who are new to Python and scripting.
This course has a 50% hands-on labs to 50% lecture ratio with engaging instruction, demos, group discussions, labs, and project work.
For many years, Microtek Learning has been helping organizations, leaders, and professionals to reach their maximum performance by addressing the challenges they are facing.
Python is an interactive, object-oriented programming language that allows lucid expression of creative concepts with less code than most other programming languages. Python is widely used for processing numbers, images, scientific data and text. Introduction to Python Programming is the first step of your journey to mastering Python programming.
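As a small, illustrative taste of that concision (the file name here is a placeholder), counting the five most common words in a text file takes only a few lines:

```python
from collections import Counter

# Read a file, split it into lowercase words, and tally them.
with open("notes.txt") as f:
    words = f.read().lower().split()

for word, count in Counter(words).most_common(5):
    print(f"{word}: {count}")
```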
Our experienced instructors will help you understand all the basics. You get hands-on training in which you learn to handle and deliver Python packages. During the training you learn the latest coding practices, which are highly applicable across a wide range of industries. You also get plenty of time to clear up all your queries.
You will become proficient in using Python and familiar with its environment: a confident, knowledgeable Python practitioner able to use Python functions to process data for particular objectives.
On successful completion of the Introduction to Python Programming, you will receive a Python course completion certificate issued by Microtek Learning.
Our highly trained and experienced instructors use an extensive set of advanced tools and techniques to deliver the industry's best online learning experience. You can simply log in to the learning environment at the scheduled time from anywhere, with the ability to interact, communicate, discuss and view presentations.
When we hear such words, we tend to assume there’s been yet another cybersecurity breach. This time, however, it was something different. But it didn’t do much to increase the public’s trust in digital technology. In fact, it seemed to do quite the opposite.
A vote of no confidence
The setting was Iowa. The timing was Feb. 3, and then Feb. 4, and then Feb. 5, 6….
The culprit was a smartphone app that didn’t enable caucus chairs to report voting results. The backup hotline system didn’t work well either, requiring long holds and some hang-ups. That prevented the campaigns and media from receiving and reporting those results to the public. At least one candidate apparently viewed this as an opportunity to get free air time. Others were furious. Meanwhile, the delay helped feed conspiracy theories. And the situation created further distrust of political institutions, electronic voting and digital technology in general.
A new control system
We’re just two months into 2020. But at least one other significant event impacting consumer trust in digital technology occurred this year. This one is largely seen as a positive development. I’m referring to the California Consumer Privacy Act. The CCPA took effect Jan. 1 of this year. This newly enacted rule gives California residents greater control over their personal data. Under CCPA, these individuals can request – and expect to receive – the data organizations have on them. California residents can demand organizations delete their data. People who live in California also can forbid organizations from sharing their data with third parties.
A nation divided
Our research indicates such measures may engender consumer trust in technology and organizations using it. We surveyed more than 1,000 Americans as part of our research effort. Forty percent said their trust is higher when they can request their data be deleted. Forty-one percent said a feeling of personal data control equates to a greater sense of trust.
Twenty-eight percent believe they have more control over their data than they did a year ago. We learned that 26% feel they have less control of their data, or none at all. And 46% think they have the same personal data control as a year ago.
As in politics, the nation is divided in this arena.
A lack of trust
More than half of the survey group said they were willing to accept personal data security risk to do online shopping (60%) or banking (55%) or to make digital payments (54%). More than half (54%) said they are not willing to do the same for the convenience of online voting.
A third (33%) said they are less confident about U.S. election security now than they were during the last presidential election year. More than half the country – 59% — said they are unsure or definitely will not trust the 2020 election results.
The Entrust results also suggest that Americans are pretty evenly divided on whether electronic voting (30%), paper ballots (35%), or a combination of the two (30%) is best. At least that was the breakdown prior to the Iowa caucuses, when paper ballots saved the day.
A Means of Protection
At this time in which government and other organizations clearly need to build trust in how they handle and secure data, it may be useful to revisit advice from our former leaders. President Theodore Roosevelt famously said: “Speak softly and carry a big stick.” In today’s digital world, it’s important to secure and safeguard the privacy of personal data. Encryption and key management in this scenario can act as the big stick.
Nearly half (49%) of Americans said they trust that a company is safeguarding their personal data when it uses encryption. About a third said encrypted ballots (31%) and/or encrypted voter registration data (33%) would increase their trust in election security. Strong data security in the form of encryption can build – or rebuild – trust in our governments and democracy, in businesses, and in the technologies that Americans use every day.
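To make the "big stick" concrete, here is a minimal Python sketch of symmetric encryption using the widely used cryptography library. It is illustrative only: in practice the key itself must be generated, stored, and rotated under a proper key-management regime, often inside a hardware security module.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must itself be stored and protected
f = Fernet(key)

token = f.encrypt(b"voter registration record")
print(token)                  # ciphertext is useless without the key
print(f.decrypt(token))       # b'voter registration record'
```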
A Form of Activism
Americans can play their part in cybersecurity and personal data privacy by practicing good password hygiene. But, as most of us know, that’s not always easy.
Nearly three-fourths (74%) of Americans said it is somewhat, very or just plain frustrating when they have to log in to applications at work multiple times a day. More than three-fourths (78%) said they have had to change their password because they forgot it on at least a few occasions. More than a fourth (28%) said they use the same passwords for work and personal uses.
These types of challenges helped inform Entrust Datacard’s decision to release its Passwordless SSO Authentication solution, which turns employee smartphones into biometrics-protected virtual smart cards that allow instant proximity-based login to both workstations and applications. The solution eliminates passwords and puts an end to the risk of bad actors stealing user credentials and compromising critical information.
Outside of the workplace, the average person can more effectively – and securely – shoulder the burden of passwords by using a password manager app. And Americans can protect themselves and their neighbors by following security and data privacy best practices.
A Word of Truth
There is at least one part of this FDR inaugural address that applies here. “This is preeminently the time to speak the truth, the whole truth, frankly and boldly.”
The truth is that we must all do our part to protect and secure data and devices. Doing so will go a long way in building trust in our always-on, data-driven world.
Please click here for more information about Entrust’s security solutions. | <urn:uuid:ba4af276-7c0e-48ab-aeb1-043637086a40> | CC-MAIN-2022-40 | https://www.entrust.com/it/blog/2020/02/americans-data-elections-encryption-and-the-matter-of-trust/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00184.warc.gz | en | 0.964218 | 1,168 | 2.609375 | 3 |
Recently, there was a huge uproar over a discovery made by an Australian Technologist concerning Facebook’s cookies. Nik Cubrilovic analyzed Facebook’s cookies and saw that once you log out of the site, the cookies aren’t removed or deleted but simply changed. Your account ID is still embedded in these cookies, so whenever you visit a site that uses any Facebook widget, data could be sent back to Facebook, essentially allowing the site to track your activities online.
Cubrilovic admitted that he could be wrong about his conclusions. But if he’s not, then Facebook has quite a lot to answer for. Such an allegation has very serious implications on our privacy, and it could destroy all the trust that Facebook has slowly been rebuilding through their enhanced privacy settings.
Cubrilovic’s discovery has caught the attention of several Internet tech authorities, and many of them have sent e-mails to Facebook requesting for an explanation. Facebook has yet to make an official statement about the matter, but ZDNet has received one response from Arturo Bejar, a Facebook engineer explaining what the cookies do and why they behave as such.
According to Bejar, Facebook does not track users’ activities whether they’re logged onto Facebook or not. He claims that the purpose of these cookies is for the users’ safety and protection. These cookies actually help in identifying spammers and phishers and help detect any unauthorized log ins. Also, in the event of a hacking, these cookies can help the user retrieve his or her account.
These cookies are also part of Facebook's system for preventing minors from registering an account. Once the cookies are in place, minors who have erred once by giving their real birth date can no longer try to re-register using a different one.
So, if all this is to be believed, the cookies that people have been raging against during these past days were actually pretty useful. The cookies do not track our online activity. In fact, instead of breaching our privacy and security, these cookies were helping to enhance it.
Bejar also stressed that Facebook does not sell the information that they receive when users visit sites which have the Facebook widget. They don’t even use it for their targeted ads. The data that they collect is deleted within 90 days. After which, the only thing they do keep is data that has been aggregated and ‘anonymized’. So, all those fears about Facebook keeping detailed records about the sites which you visit are, according to Bejar, unfounded.
It all boils down to trust. Facebook does appear to have the ability to track users in the manner set forth by Cubrilovic, but do you trust them not to? If you have any reservations at all, and are reluctant to trust Facebook completely, then the options for your protection are few:
- Use a dedicated browser for Facebook only. If you normally use Firefox, then use Chrome or Internet Explorer for Facebook.
- Hacker News explains how you can use Ad Block Plus with Facebook-specific filter rules.
- Use a cookie cleaning utility or manually delete all Facebook cookies after each session.
- Set your browser to automatically delete cookies upon exit. | <urn:uuid:9326e206-76d3-4898-a843-72b273f39709> | CC-MAIN-2022-40 | https://facecrooks.com/Internet-Safety-Privacy/Facebook-denies-tracking-user-activity-after-log-off-Do-you-trust-them.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00184.warc.gz | en | 0.96273 | 659 | 2.78125 | 3 |
July 7, 2020
Whaling phishing is a targeted attack directed at high-level company employees, such as a CEO or CFO. Like other phishing attacks, the goal of whaling phishing is to impersonate a trusted person or brand and, by using social engineering tactics, trick the recipient into relaying sensitive information or transferring funds to the attacker.
In this article, we’ll delve deeper into the definition of whaling phishing, typical examples, and how you can prevent falling victim to a whaling phishing attack. But first, if you’re just catching up, check out the other articles in this series:
WHAT IS WHALING PHISHING?
There are different types of phishing attacks and they differ by their specific target:
Phishing: a wide net approach which uses the same communication, such as an email, on a broad group of people
Spear phishing: a more targeted, personalized attack on a single employee, a group of employees, or an organization
Whaling phishing: a targeted, personalized attack on important company personnel
In order to target high-profile individuals, phishing emails (or phone calls, texts, etc.) are carefully crafted using available data and information. The communication will appear legitimate. It may include impersonating a friend, colleague, CEO, or a trusted brand.
A whaling attack is more time-consuming for a criminal actor because each attack requires the attacker to identify a specific target and craft a unique message. But the payoff can be incredibly high because of the additional access high-profile targets have. Targets could include IT systems, confidential data, bank account information, or access to transfer funds.
EXAMPLES OF WHALING PHISHING:
Snapchat A Snapchat HR employee received an email that appeared to be from the CEO, Evan Spiegel. The email asked for payroll information about both current and former employees and the employee believed the request to be legitimate. After the incident, Snapchat contacted the affected employees (and former employees) and offered them two years of identity-theft insurance and monitoring.
Seagate Technology In 2016, Seagate Technology fell victim to a whaling phishing attack. Again, like in the Snapchat example, an email request was sent. Appearing to be legitimate, the email asked for 2015 W-2 tax form information for current and former U.S.-based employees. The employee who responded to the email sent the information to the third-party without realizing it was a scam. And while the company offered identity-theft insurance and monitoring, just like Snapchat, to the affected employees, that likely didn’t protect them against tax refund fraud.
Mattel A phishing email targeted a Mattel finance executive in 2016 with what appeared to be a routine invoice request to a new vendor from the new CEO. The executive wired $3 million to the new vendor in China. With a focus on new business in China, the request aligned with current business operations. And with a new CEO, the executive who received the phishing email was eager to please the new boss. The Mattel protocol required two high-ranking managers to approve invoice payments. The attackers knew about this protocol, drafting the email to someone with power while also impersonating the CEO. While authorities and a bank holiday assisted in retrieving Mattel’s $3 million, others aren’t so lucky.
WHAT CAN YOU DO TO PREVENT WHALING PHISHING?
To prevent a whaling phishing attack at your organization, employ these suggestions:
1. Educate senior management. Include both traditional signs of phishing in your education and specific whaling techniques used on high-profile employees. Educate your senior management that they can be specific targets and what typical attacks might look like.
2. Minimize available data. Because whaling attacks (and spear phishing attacks) target specific people using personal information, minimize available public data, such as birthdays, hobbies, etc. from sources like LinkedIn and Facebook.
3. Mark emails from outside the company. Whaling attacks often attempt to impersonate another individual from inside the company (like all three examples listed in this article). Make sure external emails are flagged and staff is trained to understand that phishing emails can come from email addresses that look familiar but are not the correct company address. (A minimal sketch of such flagging follows this list.)
4. Employ a formal verification process. Ensure a process is in place for all sensitive actions like transferring funds or sending private information. This could include verification by a specified internal party via a separate communication method and known contact information. For example, a money transfer request received via email could be verified by phone using a valid number from the company directory.
5. Employ an in-depth defense strategy. Make sure you’re employing an in-depth cyber defense strategy that includes requiring MFA (multi-factor authentication), timely software updates, and frequent backups.
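As a rough illustration of point 3, here is a minimal Python sketch of how a mail gateway might tag external senders. The internal domain list and the `[EXTERNAL]` tag are assumptions; real mail platforms typically provide this as a built-in rule.

```python
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAINS = {"example.com"}  # assumption: replace with your company's domains

def tag_external(raw_message: str) -> str:
    """Prepend an [EXTERNAL] warning to the subject of mail from outside domains."""
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    subject = msg.get("Subject") or ""
    if domain not in INTERNAL_DOMAINS and not subject.startswith("[EXTERNAL]"):
        if "Subject" in msg:
            msg.replace_header("Subject", f"[EXTERNAL] {subject}")
        else:
            msg["Subject"] = "[EXTERNAL]"
    return msg.as_string()
```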
To learn more about phishing check out the rest of the phishing series:
- The Definitive Guide to Phishing
- What is Spear Phishing?
Written by: [Patrick Browns]
Technology benefits the healthcare industry
A revolutionary new technology that could improve the vision of thousands of children around the world has been receiving increased coverage in recent weeks. We’ve already seen how cloud computing can benefit the healthcare industry from the provider’s side, but the latest vision-screening technology is one of many that has real potential to help patients.
The global healthcare statistics are nothing for the developed countries to be proud of. One billion people lack access to any form of professional healthcare, and an estimated 7.5 million children die from preventable causes. Although the developer of the technology (VisionQuest2020) won’t be able to greatly affect the bigger picture, they do hope to help some of the world’s two million blind children.
In 1997 it was estimated that 45% of blind children were blind from avoidable causes. Today, even in the developed countries, as many as 1 in 4 school children have undetected and untreated vision disorders, while 48 percent of children under twelve have not had a professional eye exam. VisionQuest believe it to be a looming problem, saying “Not only is [the children’s] personal well-being and health being affected, but it is estimated that annual societal costs are more than $50 billion from the cost of treatment and lost productivity”.
Embracing the challenge, VisionQuest is now working with schools across the country to implement affordable school-based vision screenings. To undertake a screening, children wear special glasses whilst interacting with a video game that is designed to test the quality of their sight. The video game streams different tests from the company’s cloud-based database, depending on their age and preferences.
The cloud also plays an important role in the software’s appeal to the medical staff and opticians who are using it. Each child’s screening history is retained in an online database, enabling rescreening without record duplication as well as reports that can be reprinted at any time. It means if a child moves between schools or leaves the area they will maintain an easily-accessible record, while the same records can be opened from both a school and an off-site optician when required.
Screenings can even be performed when there is no internet connection. Once a connection is re-established, the results are consolidated into a password-protected, HIPAA-compliant, cloud-based data repository with restricted access.
The technology has been a great success in its trial period and has been used to screen more than 200,000 children for problems as diverse as cataracts, retina damage and colour blindness. The test’s main benefit is that the cloud removes the requirement for a professional to be present; the software’s real-time decisions and use of logic protocol to validate the results means a parent or other volunteer can perform the procedure.
With medical industry backing, the technology looks certain to become more prevalent. Already its supporters are claiming that the big data gathered from widespread adoption of the system would be able to help guide public health policy decisions and provide information for continued public and private support and funding.
What role can the cloud play in the gathering of big data across all industries to help improve Government spending? Is a computer game an adequate replacement for a well-trained, highly experienced professional? Let us know in the comments below.
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs.
It’s the ‘nuclear option’ of network security. By separating a machine from any other computer, managers can all but guarantee that it cannot be penetrated remotely.
Air gapping, or network separation as it is often known, is a step taken by the most security-obsessed users, or by those whose regulatory body insists on it: people who, even after attending to all the basics of digital security, still need to go the extra mile. Most users who seek to air gap their systems will go to some pretty extreme measures. Some will remove all wireless hardware from their machines. Others will simply use devices that don’t come with any.
Don’t get me wrong. Air gapping may, in fact, have its place in some special cases. Air gapping is an established and recognized security practice often used in the more sensitive sectors, such as military, intelligence, and critical infrastructure. The potential consequences of these systems being compromised are sufficiently bad to justify this extreme measure.
The question is not whether or not air gapping should be practiced at all, but whether it is a method that should be applied more broadly within the IT world.
To answer this question, we’ll have to take a closer look at the real cons of an air-gapped system.
The Surprising Costs
Users tend to be more familiar with the logistical barriers created by network separation.
An air-gapped network has zero connection to the outside world. All remote communication, collaboration, and even the simple act of sharing files and documents are impossible. There are infrastructural challenges as well. An air gap requires the creation of a whole new network with independent servers, routers, and other management tools. That network needs to be built from scratch to handle the expected workload.
Interestingly enough, though, one of the biggest drawbacks of an air gap can actually come in the form of weakened network security. With an air gap in place, network users can become lax in their safety practices and take essential security basics for granted. A poor security culture means human error can give malicious actors a way into the system. Take, for example, the scenario of employees ‘taking it easy’ with network rules and using their private, insecure emails to transfer network data.
What’s more, the air gap itself can in fact be penetrated. And no, I’m not just talking about the highly sensationalized niche ‘air gap hacks’ some creative researchers have come up with. We’re referring to much more realistic concerns. Relying solely on an air gap to maintain the safety of a network means that just one connection with the outside world creates a single point of failure. If (or, more accurately, when) a user creates a wireless connection with a private device, it can compromise the entire system.
The Air Gap Authentication Challenge
Even more importantly, air gaps create problems for one of the most basic elements of network security: Identity and Access Management (IAM).
Even though air-gapped networks are closed off from the outside, each user still needs to prove their identity before accessing a given work station. For a computer unable to receive data remotely, many modern innovations for authenticating users such as push notifications and other multifactor platforms that rely on a connection to the web, are not an option.
This means a separate network basically needs to rely on one of two methods: smart cards or passwords. Smart cards are often not readily compatible with modern machines, which only adds to the infrastructural challenges of air gapping. As for the second option, using the outdated, weak option of passwords only undermines the security managers are trying to build.
In summary, even after taking the air gap-route, administrators will still need mechanisms to fill the security void.
Authentication for Air-Gapped Networks – FIDO2 to the Rescue
Considering the challenges air-gapped networks pose, the push authentication used by the Octopus Authenticator alone could not solve the authentication problem these networks present.
To tackle the issue, and to assist users who do not wish to use their phone as an authenticator, we joined the FIDO Alliance to offer a scalable on-premise authentication solution that includes Single Sign-On (SSO) capabilities and Multi-Factor Authentication (MFA) without any need for outside communications.
ODNI releases Global Food Security assessment
The overall risk of food insecurity in many countries of strategic importance to the United States will increase during the next 10 years because of production, transport and market disruptions to local food availability, lower purchasing power and counterproductive government policies, according to an assessment released on October 14 by the U.S. intelligence community.
The inter-agency assessment, “Global Food Security,” was prepared under the leadership of the National Intelligence Council’s Strategic Futures Group within the Office of the Director of National Intelligence, and drafted principally by the CIA.
The assessment also notes:
- Demographic shifts and constraints on key inputs, such as land and water, will probably compound the risk. In some countries, declining food security will almost certainly contribute to social disruptions and political instability.
- Simply growing more food globally will not lead to more food-secure countries because sustainable access will remain unequal; millions lack access to land or income sources to buy sufficient food.
- Augmenting traditional approaches to agricultural development with lesser-used strategies such as reducing crop and food waste, generating off-farm income activities, conducting research in minor crops and fostering technical education in agriculture would improve the resilience of local and global food systems. Such strategies can help Washington and its allies to develop creative complements to standard approaches and help resolve inherent tensions between goals such as producing more food and conserving water and other natural resources.
- The intelligence community conducted detailed unclassified research on food security issues in multiple countries and across six food-related commodities: wheat, rice, coarse grains, oil crops, sugar crops and fish.
- Principal demand factors that will affect food security in the long-term (beyond 2025) are demographic changes-to include urbanization-and income growth in emerging and developing countries. These trends will influence dietary preferences.
- The principal supply factors will be: weather, the rate of agricultural technology development and deployment, the availability of resources, and government policies.
- Agricultural markets, energy availability, agricultural technologies, and supporting infrastructure will not lead to dramatic, “discontinuous” changes in food supply or demand by 2025.
The timeframe for the key judgments is out to 2025; however, the assessment discusses longer-term trends that might affect U.S. national security interests.
Key Judgment A: We judge that the overall risk of food insecurity in many countries of strategic importance to the United States will increase during the next 10 years because of production, transport and market disruptions to local food availability, declining purchasing power and counterproductive government policies. Demographic shifts and constraints on key inputs will compound this risk. In some countries, declining food security will almost certainly contribute to social disruptions or large-scale political instability or conflict, amplifying global concerns about the availability of food.
Key Judgment B: Prospects are poor for countries grappling with food insecurity. The majority of countries already experiencing high-to-extreme food insecurity face risk factors that could worsen their food security through 2025; some countries that have low-to-moderate food insecurity today are at risk of experiencing worsening conditions during the next 10 years. The intersection of food insecurity with governance gaps will probably result in social disruption, political turmoil or conflict.
Key Judgment C: We judge that augmenting traditional approaches to agricultural development with innovative, but lesser-used strategies — such as reducing crop and food waste, generating off-farm income activities, conducting research into minor crops and fostering technical education in agriculture — will improve the resilience of local and global food systems. This combination will probably increase the ability of individuals to acquire food and reinforce U.S. developmental strategies more than either approach alone.
Key Judgment D: Developing creative complements to traditional approaches to improve global food security will take a worldwide effort. Opportunities exist for the United States — already viewed as a leader in promoting global food security — to align with long-standing allies as well as new partners. Some countries offer nontraditional models of how to resolve the inherent tension between goals such as producing more food and conserving water and other natural resources. Emerging economies with growing food security expertise can offer solutions more palatable to countries with low levels of development and technology. Food-insecure countries themselves will also be an important part of the effort; those taking complete or partial ownership of programs designed to build local food security are likely to see more sustainable results.
A full copy of the report can be found here.
A copy of the selected emerging agriculture technologies matrix can be found here.
What is a Cyberattack?
A cyberattack is a malicious activity that cybercriminals launch using different tactics against systems and networks. Hackers use cyberattacks to expose, gain unauthorized access, alter, steal, destroy, or make unauthorized use of information assets.
Cybercriminals engage in offensive maneuvers that target information systems, infrastructures, computer networks, and personal devices to access information, restricted areas, and controls of systems without authorization.
- A cyberattack is a malicious activity that hackers launch to steal data and disable systems
- Cybercriminals use various methods like malware, ransomware, and denial of service to launch cyberattacks
- A cyberattack can be active, passive, insider, or outsider incident
- About 3.5 billion people lost their data in the top two of the 15 most significant cyberattacks.
- In the first six months of 2017, cyberattacks impacted 2 billion data records, and ransomware payments reached US $2 billion.
- Various cybercriminals, including groups and individuals, employ cyber attacks with malicious intent.
Impacts of Cyberattacks
Individuals, groups, sovereign states, societies, activists, and other organizations employ cyber attacks with malicious intent.
A cyberattack can result in various unfavorable effects, such as disabling computers and systems, stealing data, and using breached devices to launch attacks on other computers.
Types of Cyberattacks
Cybercriminals use different methods and tools to launch a cyberattack on their target. Depending on the hacker’s intent, a cyberattack can be targeted or random. Cybercriminals devise new ways to throw targets off their defenses.
A cyberattack can be active or passive. An active attack attempts to alter system resources and affect their operations. A passive attack attempts to learn or make use of information from the target without affecting the system resources.
A cyberattack can originate from outside or inside the organization. An outside attack is carried out by an unauthorized or illegitimate user. These external attackers include hostile governments, pranksters, organized criminals, script kiddies, and international terrorists.
An inside attack originates from an entity inside the security perimeter. An insider can be an employee, vendor, or contractor authorized to access system resources and information but uses them in an unapproved manner.
Some of the cyberattacks include:
- Malware – this is a form of malicious software that harms computer users. Malware includes dangerous programs like computer viruses, worms, spyware, trojan horses, and adware.
- Ransomware – ransomware is a prevalent form of malware that hackers use to lock a victim’s computer files through encryption, demanding payment to unlock the files
- Social engineering – this cybersecurity threat leverages human interaction and trust to trick users into performing actions that enable criminals to steal sensitive information
- Phishing – Phishing is a form of fraud where hackers send fraudulent emails and messages that appear to come from reputable sources. Phishing attacks enable cybercriminals to steal sensitive information, such as credit card numbers and login credentials.
- Denial of service attacks – hackers cause a denial of service attack by sending overwhelming traffic to networks and servers, therefore preventing systems from meeting legitimate requests
- Man-in-the-middle (MITM) – cybercriminals use MITM attacks to secretly interpose between users and a web service they are trying to access. MITM allows attackers to harvest any information the user sends to the service
- SQL Injection – in this attack, a hacker exploits a flaw to take control of a victim’s databases. The hacker writes Structured Query Language (SQL) commands into a web form that collects user information such as name and addresses. Poorly programmed websites and databases will execute the malicious commands
This list is not exhaustive. The industry OWASP Foundation maintains a list of the top 10 cyberattacks hackers use against web applications. You can have a look at the list on OWASP’s website.
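To make the SQL injection entry above concrete, here is a minimal, self-contained Python sketch (using the standard-library sqlite3 module and a made-up users table) showing how concatenated input subverts a query, and how a parameterized query defuses the same input:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, address TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '1 Main St')")

name = "x' OR '1'='1"  # hostile input a web form might submit

# Vulnerable: the input is concatenated straight into the SQL command,
# so the injected OR clause matches (and leaks) every row in the table.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
print("vulnerable query returned:", rows)      # -> all rows

# Safe: a parameterized query treats the input strictly as a literal value.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print("parameterized query returned:", rows)   # -> no rows
```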
Notable Cyberattack Incidents
Today, cyberattacks affecting millions of users are far too common. About 3.5 billion people lost their data in the top two of the 15 biggest cyberattacks.
Some of the most prominent incidents in recent memory include:
- Adobe – in October 2013, hackers stole 153 million user records belonging to Adobe customers. The information included usernames, hashed passwords, customer names, IDs, debit, and credit card information. An August 2015 agreement called for Adobe to pay $1.1 million in legal fees and an undisclosed amount to settle user claims for violating the Customer Records Act.
- Adult Friend Finder – Hackers breached the Adult Friend Finder in mid-October 2016. The stolen data spanned 20 years on six databases and included names, email addresses, and passwords of approximately 412 million accounts.
- Canva – Australian graphic design tool Canva experienced an attack that exposed email addresses, usernames, names, location addresses, and passwords of 139 million users. The hackers also managed to view files with partial credit card and payment information.
- Equifax – Hackers exploited an application vulnerability in one of Equifax’s websites, compromising the personal information (social security numbers, birthdates, addresses, drivers’ license numbers) of 143 million customers. The breach also exposed credit card data of more than 200,000 users.
Introduction to cloud computing
Cloud computing is when you access computing services—like servers, storage, networking, software—over the internet (“the cloud”) from a provider like Azure. For example, instead of storing personal documents and photos on your personal computer’s hard drive, most people now store them online: that’s cloud computing.
Cloud computing platforms, like Azure, tend to be less expensive and more secure, reliable, and flexible than on-premises servers. With the cloud, equipment downtime due to maintenance, theft, or damage is almost non-existent. You can scale your compute and storage resources—up or down—almost instantly when your needs change on Azure. Also, you typically pay only for the services you use, which provides a level of convenience and cost-control that’s almost impossible to achieve with on-site infrastructure.
Cloud computing eliminates the expense of setting up and running on-site datacenters, which often have added costs such as employing staff and buying and maintaining land, buildings, and computer hardware. The cloud allows businesses to access the computer resources they need in real time to match their business needs on demand.
Security is a key focus of cloud providers, who invest huge sums of money into securing their infrastructure. Cloud providers typically also offer a broad set of policies, compliance, technologies, and controls that strengthen your security posture by protecting your data, apps, and infrastructure from threats.
More efficiently develop and manage your applications with nearly unlimited cloud computing resources. Cloud providers continuously update their datacenter networks with the latest-generation hardware, providing you with fast, efficient computing resources that never go obsolete and would be more costly to implement in a single datacenter.
Cloud computing runs on data centers around the world, providing overall resiliency and reliability by allowing your data to be backed up in more than one geographic location. This also allows your IT resources to be delivered from specific geographic locations when required.
What is Azure?
Azure is an ever-expanding set of cloud computing services to help your organization meet its business challenges. Azure gives you the freedom to build, manage, and deploy applications on a massive, global network using your preferred tools and frameworks.
With Azure, you can:
Continuous innovation from Microsoft supports your development today—and your product visions for tomorrow. Build on the latest advancements in the cloud, including more than 1,000 new capabilities released in the last year.
Operate hybrid seamlessly
On-premises, in the cloud, and at the edge—Azure meets you where you are. Integrate and manage your environments with tools and services designed for hybrid cloud.
Build on your terms
You have choices—with Azure’s commitment to open source and support for all languages and frameworks, you’re free to build how you want and deploy where you want.
Trust your cloud
Get security from the ground up—backed by a team of experts and proactive, industry-leading compliance that’s trusted by enterprises, governments, and startups.
Azure is secure
Security is a given in the cloud industry, but Azure’s proactive approach to security, compliance, and privacy is unique. Microsoft leads the industry in establishing and consistently meeting clear security and privacy requirements.
Between its industry-leading compliance and privacy certifications to built-in security controls and unique threat intelligence, Azure has everything you need to identify and protect against rapidly evolving threats.
Start with a secure foundation
Reduce costs and complexities with a highly-secure cloud foundation that takes advantage of multi-layered security provided by Microsoft.
Streamline your compliance and enable business transformation
Use built-in controls, configuration management tools, implementation and guidance resources, and third-party audit reports to simplify your compliance needs.
Detect threats early
Identify new threats and respond quickly with unique services informed by real-time global cybersecurity intelligence delivered at cloud scale.
Azure is global
With datacenters in more regions than any other cloud provider, Azure provides a global reach with local presence that many businesses and organizations need, allowing them to reduce the cost, the time, and the complexity of operating a global infrastructure while meeting local data residency needs.
The advantages of Azure over AWS
Organizations all over the world recognize Microsoft Azure over Amazon Web Services (AWS) as the most trusted cloud for enterprise and hybrid infrastructure for many reasons:
AWS is 5 times more expensive than Azure for Windows Server and SQL Server. Azure matches AWS pricing for comparable services.
Achieve more with open source on Azure
Use any open-source OS, languages, and tools on Azure. Azure made the most contributions to GitHub in 2017 and it’s the only cloud with integrated support for Red Hat.
Enhanced proactive security and compliance
Compare AWS and Azure and you’ll find that Azure’s compliance offerings, including 70+ compliance certifications, are more comprehensive.
Get more value from your existing Microsoft investment
Keep using your organization’s existing tools and knowledge: get a consistent experience across your on-premises and cloud technologies by integrating them with Azure Active Directory.
Azure is the future
The advent of the cloud and smart technologies is revealing new scenarios that were simply not possible until now. Smart sensors and connected Internet of Things (IoT) devices now allow us to capture new data from industrial equipment: from factories to farms, from smart cities to homes. And whether it’s a car or even a refrigerator, new devices are increasingly cloud connected by default.
Simultaneously, hybrid cloud is evolving from being the integration of on-premises datacenters with the public cloud, to becoming units of computing that are accessible even in the world’s most remote destinations.
By bringing these two new realities together—and with artificial intelligence running across all systems—we have entered the era of the intelligent cloud and intelligent edge.
The intelligent edge is a continually expanding set of connected systems and devices that gather and analyze data—close to your users, the data, or both. Users get real-time insights and experiences, delivered by highly responsive and contextually aware apps. The Azure platform is built to provide an agile and secure experience across the intelligent cloud/intelligent edge in alignment with these enduring principles:
01 Consistent application development
02 Holistic security
03 Unified identity
04 Simplified device management
05 Artificial intelligence across all devices
06 A robust intelligent edge portfolio and ecosystem
Your cloud journey starts here
Find everything you need to make a successful move to the cloud using the Microsoft Cloud Adoption Framework for Azure. The framework contains proven guidance designed to help customers and partners create and implement strategies for achieving their cloud goals.
Application container technology – such as Docker and Kubernetes – is revolutionizing application development and bringing previously unimagined flexibility and efficiency to the application software development process.
Application containers like Docker containers are lightweight with rapid provisioning (we’re talking milliseconds) and provide an alternative to virtual machines that can consume a high amount of system resources and have a long boot time.
Containers allow companies to operate at an unprecedented scale and maximize the number of applications running on a minimum number of servers. This results in responses to multiple users in a timely and efficient manner even as demand fluctuates for different parts of an application.
What Does Docker Have to Do With Kubernetes?
Containers are portable and lightweight alternatives to virtual machines, and Docker is a containerization platform. Docker has become the most popular container technology in the world. However, Docker technology alone is not enough for managing containerized applications. Kubernetes, among other platforms, is used in tandem with Docker to address the container management and orchestration challenges.
Kubernetes (or “k8s”) is an open source platform that automates container operations. It is one of the most popular container management and orchestration methods, and for good reason.
Kubernetes eases the burden of configuring, deploying, managing, and monitoring even the largest containerized applications. It helps manage container lifecycles and related application lifecycles and issues, including high availability and load balancing.
Kubernetes helps to manage clusters easily and efficiently with groups of hosts (dedicated servers or virtual machines) that run the Kubernetes ‘master node’ (the control plane) and the Kubernetes worker nodes (the workers that run the containers). Version 1.14 and up of Kubernetes supports Windows-based worker nodes that run Windows containers as well as Linux-based worker nodes that run Linux containers.
A Kubernetes node is typically a host with either a master or worker node functionality. The master node runs things like Kubernetes APIs (i.e., for kubectl, the native command-line interface for Kubernetes). The worker nodes have everything necessary to run the application containers, including the container runtime.
A Kubernetes pod is one or more containers running together. Kubernetes gives pods their own IP addresses and a domain name for a set of pods.
A Kubernetes service is a way to expose an application that is running on a set of pods as a network service. Pods come and go, and therefore sometimes have a short lifespan. Services help the other pods find out and keep track of which pod IP address they should connect to.
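To make pods and services more tangible, here is a short sketch using the official Kubernetes Python client (`pip install kubernetes`); it assumes a reachable cluster and a valid kubeconfig, and simply prints the pod IPs and the stable service IPs that sit in front of them:

```python
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

# Each pod gets its own (ephemeral) IP address...
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"pod {pod.metadata.namespace}/{pod.metadata.name} -> {pod.status.pod_ip}")

# ...while a service exposes a stable name/IP in front of a changing set of pods.
for svc in v1.list_service_for_all_namespaces(watch=False).items:
    print(f"service {svc.metadata.namespace}/{svc.metadata.name} -> {svc.spec.cluster_ip}")
```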
Operators are clients of the Kubernetes API that control custom resources and enable automation of tasks such as deployments, backups, and upgrades by watching events without editing Kubernetes code. The key attribute of an operator is the active, ongoing management of the application. This includes failover, backups, upgrades, and autoscaling. Operators offer self-managing experience with knowledge baked in from the experts.
A Kubernetes secret is a Kubernetes object storing sensitive information, such as an OAuth token or SSH key. This makes it so that the information is only accessible when necessary.
Leaseweb customers are already self-installing and managing their Kubernetes nodes on either bare metal dedicated servers or virtual machines. The installation of a Kubernetes cluster is made easier by using deployment tools like kubeadm. Next to installation support, the Kubernetes.io website also walks through management best practices. For more information click here to visit the Kubernetes setup page.
By allowing digital information to be distributed but not copied, Blockchain is an undeniable technological advancement thanks to its decentralization of data in a trustless environment.
Traditional ledgers are centralized and use 3rd parties and intermediaries to approve and record transactions. On the other hand, Blockchain safely distributes ledgers across the entire network and does not require any middleman. Moreover, the technology maintains multiple replicas of the distributed ledgers using different nodes in the network.
Data generation and analysis is the backbone of IoT but protecting data throughout its lifecycle is critical. Managing information at all levels is complex because data flows across many administrative boundaries with different policies and intents.
Blockchain comes in as a great technology to assure the protection of data in a public infrastructure and the ability of different applications to interact directly with each other.
IoT and Blockchain can work together to leverage the capabilities of the Ethereum Blockchain protocol for building a broad class of decentralized applications.
Here are some examples:
Supply Chain — Track objects as they navigate the import/export supply chain while enforcing shipping and line of credit contracts and expediting incremental payments
Warranty — Parts Tracking/History — Maintain indelible history of parts and end assembly through the supply chain, potentially including critical events that affect life or scheduled maintenance.
Interconnecting Devices — Enable distributed devices to request and pay for services through distributed role management and micropayments
Device as a Service (DaaS) — Companies opting for a DaaS model can lease devices for a length of time, and customers pay a monthly fee for the services used. Payments can be made automatically using a token-based smart contract platform
Equipment or Business Process History — Track equipment or business process history in an immutable record and enable easy sharing of this information with third parties
Below is an example of a Blockchain IoT Platform. Let us call it Nagarro iQChain.
Nagarro iQChain is a platform that provides Blockchain integration for Smart Devices, allowing a user/enterprise to communicate, get information, analyze, and monitor any smart IoT Device.
Nagarro iQChain has 3 main components:
1. A cloud IoT Platform which is used to:
- connect devices
- gather, display and store telemetry data
- view device insights
- control devices
2. A smart contract platform based on Ethereum Blockchain. The platform will have tokens minted which will be used by the end-customers for different operations. The tokens will be bought by the users at exchanges using FIAT currency or cryptos like Bitcoin, Ethereum or Litecoin.
Store all the transactions between the customers and the IoT Platform:
- device registrations
- platform utilization fees
- device/machine interactions
Each machine connected to Nagarro iQChain will have a private wallet where the administrator from the customer side can transfer tokens which can be automatically used by the machine for maintenance tasks (order consumables).
3. A middleware platform that facilities the interaction between the IoT Platform and the Smart Contract Platform. Smart contracts live isolated, and they cannot fetch external data on their own. Tools like Oraclize or ChainLink allow smart contracts to interact with the outside world. These platforms act as a data carrier, a reliable connection between an external service and the Smart Contract Platform.
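To illustrate the machine-wallet idea from component 2, here is a hedged sketch using web3.py (v6 naming); the node URL and wallet addresses are placeholders, and a real platform would call balanceOf()/transfer() on its ERC-20 token contract rather than moving plain ether:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://ethereum-node.example:8545"))  # placeholder URL

MACHINE_WALLET = "0x0000000000000000000000000000000000000001"   # hypothetical
SUPPLIER_WALLET = "0x0000000000000000000000000000000000000002"  # hypothetical

# The machine checks the funds the administrator transferred to its wallet.
balance_wei = w3.eth.get_balance(MACHINE_WALLET)

if balance_wei >= w3.to_wei(0.01, "ether"):
    # Build a payment for a maintenance task (e.g., ordering consumables).
    # Signing with the machine's key and broadcasting the transaction
    # (w3.eth.send_raw_transaction) are deliberately omitted here.
    tx = {
        "from": MACHINE_WALLET,
        "to": SUPPLIER_WALLET,
        "value": w3.to_wei(0.01, "ether"),
        "nonce": w3.eth.get_transaction_count(MACHINE_WALLET),
    }
```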
Use Case 1: Parts and Service Management
Keeping service and maintenance history in a secure and unalterable store, using Blockchain technology.
Let us assume a device is broken:
- A smart contract is initiated in the platform and stored on the Blockchain
- Using the middleware platform, the smart contract creates a ticket in the IoT Platform
- The device is fixed by an external entity
- Information regarding the fixing and the actions that were performed are written back from the IoT Platform to the Smart Contract Platform using the middleware platform
Use Case 2: Advanced analytics
Industrial CNC milling machine purchased by a customer produced in series.
Usually, these machines work in a 24/7 environment, and any interruption can cause production loss. That is why it is very important to detect problems at an early stage and replace the parts that could cause a machine breakdown and production loss.
The IoT platform contains milling machines from the same producer, used by different customers, which upload telemetry data to cloud storage.
Using Machine Learning algorithms based on the historical data, predictions can be made based on the incoming telemetry readings.
Here is a flow:
1. The customer requests an analysis from the IoT Platform
2. The IoT Platform initiates a contract
3. The customer pays using the tokens minted by the platform
4. The Smart Contract Platform creates a new block to store the transaction
5. The required analytics are provided to the customer
The advantage of this method is that the customer will not have to buy complicated analytics tools and can instead leverage the IoT Platform’s computing power and insights.
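As a sketch of what step 5 might look like under the hood, the snippet below trains scikit-learn’s IsolationForest on historical telemetry and flags anomalous incoming readings; the three features (temperature, vibration, current) and the synthetic data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical telemetry: rows = readings; columns = spindle temperature,
# vibration, motor current (synthetic stand-in data).
rng = np.random.default_rng(0)
history = rng.normal(loc=[60.0, 0.2, 8.0], scale=[2.0, 0.05, 0.5], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming = np.array([[61.2, 0.21, 8.1],    # ordinary reading
                     [95.0, 0.90, 14.0]])  # plausible precursor to a breakdown
print(model.predict(incoming))  # 1 = normal, -1 = anomaly
```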
So, although it is still in the early stages, the integration of Blockchain technologies into IoT can become a game changer in future ecosystems.
In a nutshell, email spoofing is the creation of fake emails that seem legitimate. This article analyzes the spoofing of email addresses through changing the From header, which provides information about the sender’s name and address.
SMTP (Simple Mail Transfer Protocol, the main email transmission protocol in TCP/IP networks) offers no protection against spoofing, so it is fairly easy to spoof the sender’s address. In fact, all the would-be attacker needs is a tool for choosing in whose name the message will arrive. That can be another mail client or a special utility or script, of which there is no shortage online.
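On the receiving side, the visible From header is therefore worthless on its own; what can be checked is the Authentication-Results header, where the receiving server records SPF/DKIM/DMARC verdicts. Here is a rough Python sketch (the pass/fail heuristic is deliberately simplified):

```python
from email import message_from_string
from email.utils import parseaddr

def looks_spoofed(raw_message: str) -> bool:
    """Flag mail whose From header is not backed by a passing SPF or DKIM check."""
    msg = message_from_string(raw_message)
    _, visible_sender = parseaddr(msg.get("From", ""))
    results = " ".join(msg.get_all("Authentication-Results") or []).lower()
    spf_pass = "spf=pass" in results
    dkim_pass = "dkim=pass" in results
    # A message failing both checks deserves extra scrutiny before trusting
    # the address shown in `visible_sender`.
    return not (spf_pass or dkim_pass)
```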
Let’s take a crash course in data science terminology.
Machine learning (ML), or the study of computer algorithms that improve automatically through experience, has demonstrated vast potential for applications across a variety of industries and fields of study. Considered to be a subset of artificial intelligence and related to computational statistics, the latter’s focus mainly being prediction via computers, ML algorithms build mathematical models on training or sample data to automatically make future decisions without explicit oversight or human supervision. Algorithms may inherently seem complex, but they’re simply a collection of rules or instructions for a problem-solving process or computation, generally done by a computer. Perhaps it is this interchangeability in rules and outcomes that make them so applicable across a variety of problems and situations.
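A minimal, hedged illustration with scikit-learn shows what “improving through experience” means in practice: the model below derives its decision rules from labeled examples rather than from hand-written instructions.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                     # 150 labeled flower measurements
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from data
print("held-out accuracy:", model.score(X_test, y_test))         # typically ~0.97
```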
Of all the things made possible by machine learning, perhaps the most tangible is self-driving cars. Organizations like Google and Uber are actively pursuing autonomous vehicle research and development, and others like Tesla are already implementing their current technology. Other examples of how machine learning is used in practice include computer vision, or classification, detection, restoration, and segmentation of images by machines. Another is natural language processing or computer understanding of human speech and text. These examples may give the impression of recent advancements via ML as science fiction made a reality, related to technology not necessarily applicable yet in everyday situations. However, most of us encounter ML regularly without knowing it.
Image credit: https://www.nasa.gov/image-feature/active-regions-on-the-sun
Personalized ads on Google and Facebook use ML to predict what content may be most impactful to the individual. Netflix was an early adopter of ML and held a competition as early as 2006, awarding a cash prize for the best recommendation algorithm based on its then publicly released data. Eleven years later, in 2017, an estimated 80% of TV shows on Netflix were discovered through their recommendation system, the one they were looking to optimize years prior. From a business perspective, it is an intelligent thing to do, not only to retain viewers or customers but also to introduce them to new content or products. ML is not only useful technologically but also when it comes to savvy business practices.
One of the most potentially impactful developments of ML is in fusion energy. The sun and other stars are natural fusion reactors; their stellar nucleosynthesis fuses two or more atomic nuclei into a heavier nucleus, releasing energy. The idea of harnessing fusion power as renewable energy has been around for quite some time; however, until now, there has always been more energy expense than energy output in fusion reactor prototypes, rendering them unproductive and not commercially viable. Yet given recent advances in ML, several groups, from the Massachusetts Institute of Technology to TAE, a fusion company in southern California, to a 35-nation project in France, estimate they are only a few years away from commercialization and viable reactors. As a renewable source, fusion produces less radioactive nuclear waste than traditional fission nuclear energy, hence the continued interest in and research into its eventual production in the, hopefully, near future.
These are just a handful of countless examples of how machine learning can be effectively used to bring to life ideas and concepts that until recently were thought to be science fiction. From savvy business practices to harnessing the power of stars, the future applications of machine learning are bright.
Biochar, a carbon-rich, charcoal-like substance made from oxygen-deprived plant or other organic matter, has both delighted and puzzled scientists. As a soil additive, biochar can store carbon and reduce greenhouse gas emissions, and it can release nutrients to act as a non-toxic fertilizer.
But the precise chemistry by which biochar stores nutrients and promotes plant growth has remained a mystery.
Now, Colorado State University experts have illuminated biochar’s seemingly miraculous properties in unprecedented detail and with new mechanistic understanding. Their study demonstrated how composting of biochar creates a very thin organic coating that significantly improves the biochar’s fertilizing capabilities.
A combination of advanced analytical techniques confirmed that the coating strengthens the biochar’s interactions with water and its ability to store soil nitrates and other nutrients.
This improved understanding of biochar’s properties could trigger more widespread commercialization of biochar fertilizers. Such a change could reduce global dependence on inorganic nitrogen fertilizers that have served as modern food-production workhorses for more than a century.
To characterize a super-thin carbon coating on a carbon substrate is nearly impossible, Borch said. Our international team used many different advanced techniques to perform the analyses. Robert Young led our group’s contribution of ultra-high resolution mass spectrometry to investigate the coating and probe its elemental makeup.
The authors set out to investigate biochar before and after composting with mixed manure. Using a combination of microscopic and spectroscopic analyses, the researchers found that dissolved organic substances played a key role in the composting of biochar and created the thin organic coating.
This organic coating makes the difference between fresh and composted biochar. The coating improves the biochar’s ability to store nutrients and to form further organic soil substances. The coating also developed when untreated biochar was introduced into the soil, only much more slowly.
Excessive use of mineral nitrogen fertilizers or liquid manure in agriculture has serious impacts on the environment. Such fertilizers cause the emission of nitrous oxide and result in nitrates leaching into the groundwater. As an eco-friendly alternative, scientists suggested adding biochar as a nutrient carrier into the soil. But the use of biochar on a large scale has not been economically viable because of how it stores and releases nitrates.
Using biochar without adding nutrients or with pure mineral nutrients has proved to be far less successful in many experiments.
More information: [nature communications]
According to LinkedIn, data scientist is the second fastest-growing profession in the U.S. While there’s a 650 percent growth rate in available roles, the pool of qualified professionals is still relatively small. Considering that only 35,000 U.S. workers have data science skills, there’s a tremendous opportunity to carve out an ideal career path.
Data is produced in immense quantities across all sectors, but many people simply don’t know what to do with this wealth of information. To make the most of this mountain of data, data scientists have begun to work together to become stronger and figure out how to move the field forward. As such, data science is built on the premise of collaboration.
Open-source libraries like OpenCV have become relatively commonplace, and some of these self-service options are actually more useful than more expensive pay-to-play services. Entry-level data science jobs can now be done by virtually anyone. For a data-driven executive looking to pick up a competitive advantage, deploying data science is easier than ever before, and it will only continue to get easier.
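A taste of how low the entry barrier has become: with OpenCV installed (`pip install opencv-python`), a classic computer-vision task like edge detection takes only a few lines. The file name below is a placeholder.

```python
import cv2

img = cv2.imread("photo.jpg")                             # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)              # convert to grayscale
edges = cv2.Canny(gray, threshold1=100, threshold2=200)   # classic edge detection
cv2.imwrite("edges.jpg", edges)                           # save the result
```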
The greater accessibility of such advanced tools and techniques means that the value of data scientists is changing as well: Tasks once reserved for trained statisticians can now be handled by ambitious young professionals with a passion for data science and a willingness to learn. Even now, with only a little bit of training, a new employee could build a machine learning model and set it to work on a company’s data.
But for businesses truly seeking an edge, this kind of data analysis only skims the surface. To develop a true data strategy and put it to work for a company, trained data scientists still have a lot to offer.
The importance of combining tools with talent
The digital universe is expanding at an exponential rate, doubling in size roughly every two years. By 2020, analysts predict that there will be 44 trillion gigabytes of data online — nearly as many gigabytes as there are stars in the universe. Much of this data surge stems from the growth of smart devices, which create and transfer data around the clock.
Whether we like it or not, we’re submerged in data from all sides. Almost every function of a business either relies on or produces data, and the companies who can put that data to use will quickly separate themselves from those that cannot.
While newly developed tools can make data science simpler, they still require trained professionals to harness that power. Anyone can now use data science techniques, but businesses benefit the most from data scientists who have deeper understandings of how those techniques work and what their value is.
This problem isn’t unique to data science. Photoshop is an incredibly powerful platform for editing images, for instance, but the software is useless without a skilled person behind it to realize its full potential. Even basic software like Excel and PowerPoint takes on a whole new meaning in the hands of a professional.
Without making the most of your data, you could wind up sinking resources into misguided efforts based on “instinct” rather than real data. It’s the equivalent of not tracking your company’s finances or organizing your inventory. If your data is strewn all over the place, you could miss out on potential clients and revenue. To institute a culture that respects and utilizes data science at your business, you need culture leaders — and here, again, is where trained data scientists can lead the way. Because they understand these tools and techniques deeply, they can better communicate their value to the organization.
Data is constantly evolving, but truly successful data scientists are the ones who are able to keep pace with the industry. While machines and software are able to prepare and analyze data before visualizing insights, a data scientist is necessary to identify pertinent questions and interpret the results to craft the right solutions. Think of data scientists as the translators between data and your company.
Why you need to embrace data science
There’s no way to sugarcoat this: The rules of business have changed, and so must the players. You wouldn’t run your company without at least some internet component, and data science is the next big technological shift. AT&T has seen the writing on the wall, and the company partnered with Georgia Tech (opens in new tab) to launch a program focused on analytics.
Right now, data science is like what iPads were like 10 years ago. Back then, nobody knew what an iPad was or how a business could use one; now, companies often give each employee an iPad for company use on his or her first day.
Data science is already a necessary function of forecasting and business diagnostics. It’s difficult to know where you are or where you’re going without the proper data to back things up. Business intelligence is the engine that will drive your company, but it’s impossible to accomplish without a trained professional available to gather the relevant data and present it in a digestible manner.
Without a means of analyzing data, your company will lack important information. A new software platform can certainly help, but you’ll also want a team member who’s capable of using that software to translate your data into actionable insights and correctly communicate what you intend to do. An in-house data scientist can better prepare your company to take advantage of these changes, but he or she must stay on top of industry trends to remain relevant. If you don’t already have someone on staff, it's perfectly viable to hire someone who has a passion for data science and train him or her up in a relatively quick manner.
Data scientists are in demand for a reason. It’s time to either recruit someone with the skill set your company needs or find a way to train an existing employee to make the most of available data. Right now, because innovation has been moving faster than businesses can keep up with, companies who adopt robust data science teams and strategies have the chance to shape how the industry will develop moving forward. But failure to jump on the data bandwagon could hamstring your company and cause you to lose out to competitors — so don't think twice and get going.
Kirill Eremenko, Founder and CEO of SuperDataScience
Image Credit: Alexskopje / Shutterstock
When we talk about system hardening, we are referring to the analysis done on the systems that will host the application, in search of unnecessary services, default configurations, open logical ports, and anything else the application does not need.
Whenever we deal with web application security with our customers, we make it very clear that there is no web application security if it is not supported by a well-configured and protected system.
Performing hardening means seeking to reduce the attack surface!
The attack surface of a web application is every combination of vulnerabilities and other attack vectors present in the application and the infrastructure supporting the application.
This not only includes outdated systems and firmware but also configurations that have been implemented wrongly and thus can lead to application risks.
In addition to these points, we need to count as attack surface any passwords and users left in the application code by default (hard-coded), as well as failures to properly implement encryption solutions.
Reducing the attack surface minimizes the risk of malware attacks and other security threats, but it also brings a number of other benefits.
Systems where hardening settings have been applied are easier to maintain because the number of active components is smaller.
In addition, the hardening process also improves the performance of the application and of the system itself, as unnecessary functionality that could drain valuable resources has been eliminated.
The system hardening process only brings positive points to your application and that’s why it’s one of the most important points that DevOps teams must observe when building an environment that will host your application.
Some ways to execute hardening
Hardening systems is not only good practice; in some areas it can be a regulatory requirement, always with the aim of minimizing security risks and ensuring information security.
For example, if your system processes medical patient data, it may be subject to data protection requirements based on new general data protection legislation, such as the GDPR.
Another example is a system that processes credit card payments. In this case your system will have to conform to the controls defined in PCI DSS.
As we can see, introducing a new system into our infrastructure should not be understood as simply initializing it; there are several aspects that can strongly impact the product.
DevOps teams should always be aware of cases where there are regulations or even contractual requirements.
Furthermore, there are several organizations that create and publish their own standards and/or procedures that can be adopted by companies that can thus present to their customers and/or partners a demonstration that they are willing to invest in security.
Some of these examples can be seen when we look at the documentation produced by the Internet Security Center (CIS) or the International Organization for Standardization (ISO) or the National Institute of Standards and Technology (NIST).
Some of the leading software vendors also provide their own product-specific protection guides.
Have a Checklist
Whenever we are going to develop an action that can be repetitive, such as the validation and or execution of hardening of some services, it is advisable to have this type of action organized and validated in some way.
Thinking about it, the construction of a list with all the necessary steps to execute the hardening is advisable.
Your checklist will vary depending on the infrastructure, applications, and security configuration.
An application deployed in a cloud-based framework will require very different actions from a complete physical infrastructure, but the objectives are the same.
For the creation of your list we suggest you start by building an inventory of all assets that are relevant, both software and hardware.
To complete this first inventory of yours, look for identifying surfaces of external attacks, which can be achieved through specialized audits.
In addition, it is advisable to perform intrusion tests, vulnerability scans, and other methods that can help identify weaknesses in your external structure.
When lifting your web applications, it may be possible to identify applications that should already be disabled or even present serious failures that would further increase system risks.
Evaluate the system and the users account
Today we still find structures where by deploying the systems that support the web application, there are users brought by the system in a standard way and with their proper passwords and permissions.
This is one of the first points that must be observed to improve the security of an application, remove any user that is not necessary for the execution of the application.
This type of action should be done regardless of the physical or logical structure you are currently evaluating.
What we want at this point is to improve and elevate data access control, which is still one of the biggest security problems in applications and systems.
This concept should apply to all levels of software and hardware, because its main objective is to prevent improper access to systems and data.
At this point the first control that should be put is to use as standard rule the “deny all”, that is, all accesses are denied by default and only the necessary accesses are released so that each user has the possibility to work in the system correctly.
After this validation, define the criteria of the password directives to apply strong passwords and password rotation, as necessary.
Within your framework, always seek to impose data centralization policies that facilitate protection and management, and don’t forget to protect your backup files in an encrypted manner.
Look up to net of servers
Securing the server is the main aspect of protecting a web application.
However this does not mean that only the servers that support the web applications will be protected.
Protection should extend to all servers that support the entire solution and this includes database and file servers, cloud storage systems and interfaces to any external system.
The first step is to start by removing or disabling software and/or services that are not required for application support, and this includes services such as file sharing and or FTP.
Have few ways to access the systems, prefer more secure systems such as connections made through protocols such as SSH and when possible disable your web-based administrative interface.
Ensuring network security is critical in system hardening.
Ensuring that a failure is fixed as soon as possible can be the difference between ensuring the security of your system or having your system compromised.
Therefore, apply the latest security patches after testing them outside the production environment.
To ensure maximum efficiency is delivered to the upgrade tasks, evaluate using it whenever you can to automate the upgrade process and generate alerts for outdated products.
How to maintain your area secure
If everything was done, now the question arises “But how do we always keep everything up-to-date and safe?”
The process of hardening systems is not static and should not be executed in a single time, on the contrary it is a dynamic and continuous process.
The first time you run your system hardening should generate the procedure that will be used as a model, a base guide for the other runs.
After that, any and all changes should be evaluated and again go through the hardening process, thus ensuring that all evaluations were made and that the necessary procedures were followed.
What we have to remember is that the security scenario is constantly changing and new threats appear every day and we must always be aware of these changes.
If we understand that the threats that appear daily are just some of the threats our system faces, we will always seek to improve our protection capacity.
To ensure the security of the system, it is a good idea to plan the use of tools to check vulnerabilities always aligned with the execution of Web intrusion tests using experienced and qualified professionals and not bet all its chips on tools that only deliver reports without the least refined analysis of the results. | <urn:uuid:6a5bb846-f3de-4217-9328-2e0de3562118> | CC-MAIN-2022-40 | https://blog.convisoappsec.com/en/system-hardening-what-it-is-and-how-to-execute-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00585.warc.gz | en | 0.943882 | 1,572 | 2.609375 | 3 |
Graph data has become ubiquitous in the last decade, because graphs connect relationships based on context as its foremost feature. Add to that an organizing principle like semantics and what you get is a knowledge graph – ie. a graph with more context.
- What is a knowledge graph
- How to use knowledge graphs for data management and data analytics
- Where knowledge graphs can help improve data fabrics, contextual AI and digital twins | <urn:uuid:52b43403-69e9-4dd9-a074-8e5ebcddda26> | CC-MAIN-2022-40 | https://hyperight.com/accessing-new-knowledge-with-graphs-stefan-wendin-neo4j/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00585.warc.gz | en | 0.933399 | 87 | 2.53125 | 3 |
Corporate smartphones and tablets store a significant amount of valuable data. Combine that with their mobile nature and they’re particularly vulnerable to being compromised or stolen. Everyone, including the National Security Agency (NSA), is looking for the next big thing in mobile security, and it might just be virtualization.
US government approved
The NSA maintains a program named Commercial Solutions for Classified (CSFC) that tests and approves hardware to assist government entities that are optimizing security. For example, if a public sector network administrator is deciding which mobile devices to purchase for office staff, CSFC has information about which devices are approved for various government roles.
Offices in the intelligence community usually require virtualization hardware and software as a minimum for laptops and tablets. But until now, no smartphones that included the technology have passed the tests. However, a recently released model of the HTC A9 phone includes mobile virtualization functionality that got the green light.
What is mobile virtualization?
Virtualization is an immensely complicated field of technology, but when it comes to mobile devices the process is a little simpler. Like any mobile device management plan, the goal of mobile virtualization is to separate personal data from business data entirely. Current solutions are forced to organize and secure data that is stored in a single drive.
Essentially, current phones have one operating system, which contains a number of folders that can be locked down for business and personal access. But the underlying software running the whole phone still connects everything. So if an employee downloaded malware hidden in a mobile game, it would be possible to spread through the entire system, regardless of how secure individual folders are.
With mobile virtualization however, administrators can separate the operating system from the hardware. This would allow you to partition a phone’s storage into two drives for two operating system installations. Within the business partition, you could forbid users from downloading any apps other than those approved by your business. If employees install something malicious on their personal partition, it has no way of affecting your business data because the two virtualized operating systems have no way of interacting with each other.
Although it’s still in its infancy, the prospect of technology that can essentially combine the software from two devices onto a single smartphone’s hardware is very exciting for the security community. To start preparing your organization for the switch to mobile virtualization, call us today. | <urn:uuid:bf06504e-008c-497e-bf7d-792d3b20dd89> | CC-MAIN-2022-40 | https://www.datatel360.com/2017/05/31/nsa-to-secure-phones-with-virtualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00585.warc.gz | en | 0.942881 | 483 | 2.53125 | 3 |
The relaxing atmosphere of surfing at the beach makes it easy to forget about the sharks. Connecting to an unsecured network poses serious risks to your laptop and data. In a recent study, Bitdefender labs revealed 85% of people choose to connect to a free Wi-Fi, despite clear warnings that their data can be viewed and accessed by a third party.
Surfers can lose sensitive information to hackers in a bewildering variety of ways – especially if they access the Wi-Fi networks available in public locations:
- Around you, others connect to the same network, and one of them might happen to have the proper tool to scan your laptop for vulnerable software and use it to plant backdoors or access login credentials if, for instance, they are sent unencrypted.
- A mid-level techie can set up a network, give it a generic name such as “free Wi-Fi” or “Secure public Wi-Fi,” and monitor the traffic of all users that connect to his network in sniffing or man-in-the-middle attacks. They can read all data sent in that network.
- Someone sniffing data packets can snatch session cookies to access your resources, including social networking, online banking and online shopping accounts during that open session. Imagine someone changing your status or uploading a photo on your behalf.
- Accessing online banking and online payment websites or making e-shopping transactions through public Wi-Fi hotspots might be convenient, but cyber-criminals can still use a fake SSL certificate to circumvent a secure connection, have the user approve it and use it to sniff login data and such.
Best practices to protect your data while using a Wi-Fi connection:
1. Access only encrypted websites while on public hotspots. Make sure you type “https://’ before the URL of the website or look for the locked padlock that shows you are using a secure connection, meaning you are using encryption over a public Wi-Fi.
2. Ask an employee (bartender, hotel receptionist) for the exact name of the hotspot you intend to use so you don’t accidentally access a network set up by someone with a secret agenda. You can also ask the hotel receptionist if they use AES with their wireless network. But if you access over a wireless connection websites that are not using encryption, someone in the same network can still sniff data packets and see what you send in the network.
3. Make sure the Wi-Fi, or the automatic sharing options are switched off when you are not using them. With Wi-Fi automatically enabled, you risk having your laptop trying to connect to an unsecure network without you even realizing it.
4. Don’t check your account balance sheet or shop online on a public Wi-Fi. If you do, use a dedicated payment solution that helps you securely connect to your bank account or e-payment website from an unencrypted hotspot.
5. Password-protect and encrypt your device. In case someone steals or finds your device, make it harder to access information stored there. Also encrypt your data with dedicated software, or – if your device supports it – with the default encryption option. Use anti-theft programs to help track your device and lock or wipe your data from afar.
6. Install anti-virus software and keep it up-to-date. Installing an antivirus and a privacy security solution on your laptop is imperative. A good security solution with anti-malware, anti-spyware and anti-spam modules offers an effective shield against all kinds of threats. This will help you steer clear of fake security apps, worms, Trojans and viruses.
By keeping your OS and apps up-to-date, you give your system the most recent patches for all known vulnerabilities to protect you against the latest threats. Many pieces of malware target unpatched vulnerabilities. Once patched, they cannot harm your device or your data.
7. Turn off the laptop when you are not using it. You want to keep your laptop always on so you can access it the instant you need or want to, but this can be a bad practice. In case your system is infected with a botnet, the malware may continue to use your resources even when you are not using it.
8. Your firewall must be on at all times. The firewall is crucial for joining this kind of network. When surfing without a firewall, your PC is visible to others, along with your network shares you might have left open for friends at the office or for your family at home.
Author: Loredana Botezatu, Bitdefender | <urn:uuid:7be550ec-47d6-4bf3-ae5d-0573882a8ed8> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2013/08/01/keep-your-laptop-safe-while-using-wi-fi-hotspots/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00585.warc.gz | en | 0.912541 | 961 | 2.703125 | 3 |
Does sparing the rod spoil the child?
In schools, at least, a growing consensus over the last 50 years is that it does not. Corporal punishment declined dramatically over that span, in large part due to findings that hitting or spanking students does more harm than good to their mental health. Many parents view it as a form of child abuse.
This summer, however, a school district in Missouri chose to disagree. On June 16, the school board of the Cassville R-IV School District added a new policy, “Corporal Punishment,” to its manual. Starting with the new school year, teachers may now use “physical force as a method of correcting student behavior.” It applies for elementary school and high school students. | <urn:uuid:ca41071a-9749-4205-b83f-47aaed001099> | CC-MAIN-2022-40 | https://legalcurated.com/know-your-rights/is-corporal-punishment-legal-in-schools/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00585.warc.gz | en | 0.977048 | 155 | 2.71875 | 3 |
Red Teaming and Penetration Testing are useful practices for organizations looking to improve their cyber security. Learn more about the key differences between the two.
What is red teaming and how is it different from penetration testing?
Red teaming is the practice of simulating the tactics, techniques and approaches of another (typically malicious) actor. Red teaming engagements are run by a "Red Team" that plays the role of the threat actor. The term 'Red Team' originates from a military setting but is used in many other fields including that of cyber security.
Red teaming is different from penetration testing in that it directly simulates an adversary, such as a national state or Advanced Persistent Threat (APT), whereas Penetration Testing (typically run by an ethical hacker) is often limited by time and scope and the key objective to identify as many vulnerabilities within a given period, rather than to ultimately breach an organisation without detection over a longer period of time.
Why is it called a red team and what is its purpose?
A red team is known as such because they play the role of the 'dangerous enemy' or malicious actor. Whilst their engagement is deliberately benign their objective once a breach has been demonstrated is to then provide security feedback to the client to help them better protect and secure their defences. By contrast, a "blue team" will play the role of the 'defenders' against the red team threat. The blue team's sole objective is to detect, stop and defend against the red team's activity.
The purpose of a red team is to act as realistically as possible by mimicking a genuine cyberattack - albeit within controlled parameters that lower the risk of actual impact to the client. The red team will typically use the same tools, techniques and approaches as the malicious actor they seek to emulate would do. The overall objective and key purpose of red team engagements are to improve the overall security posture of the target (client) estate and infrastructure.
By simulating cyber-attacks via a red team, your organisation will be better prepared to defend against a real attack, having gained the additional insights as to what your blue team, or Security Operations Centre (SOC), should be looking out for and monitoring as part of their BAU routines.
Who should use red teaming?
All organisations should consider red teaming as part of their offensive security program. Malicious hackers continually seek ways to breach organisations and a red teaming engagement - especially one from CovertSwarm, that provides Constant Cyber Attack - can help all organisations to outpace their genuine cyber threats.
Why is red teaming important?
Red teaming is important because it is one of the only ways you can simulate a real cyber attack against your organisation: you likely already regularly run fire drills within your organisation to simulate a response to a fire in the building. Think of red teaming like running a drill for a cyber attack.
Is red team testing more effective than penetration testing?
Red team testing and penetration testing are two different disciplines. Each has a different focus and target outcome. They are closely linked and the right choice for your organisation really depends on the value you are looking to obtain.
If you have a compliance obligation to perform regular penetration testing, or penetration testing is mandated by a third party that you supply - then it is likely the best first option to explore. Additionally, if you do not yet feel that your organisation is ready for a fully simulated cyber attack then penetration testing against certain assets only may provide the initial insights and results you are looking for.
However, if you wish to truly identify how your organisation would respond to a real cyber-attack or how an attacker might breach your organisation then red teaming is the best answer for your organisation.
The challenge for both penetration testing and red teaming is that they are both "point in time" and creates a cyber risk gap. This is where CovertSwarm and its Constant Cyber Attack service modernise and challenge the cyber service market.
The benefits of using a red team
The benefits of using a red team are:
You will be working with ethical hackers who will think like a malicious hacker, using the same techniques and approaches they do but their focus is to help you raise your security bar and strengthen your organisation's defences to cyber-attack;
Only through working with a red team can you truly understand the threats and risks your organisations face from cyber-attack;
Board rooms will normally listen if you can demonstrate a genuine point of compromise or breach. It is challenging to distil this from the noise created from other offensive security service reports, such as those created by traditional snapshot Penetration Testing engagements.
How penetration testing & red team operations are executed
Penetration Testing is typically executed by ethical hackers and red team engagements by 'red teamers', however, it is common for ethical hackers to also be involved in red team engagements and for 'red teamers' to occasionally perform penetration testing - despite their differing objectives.
The two terms and disciplines, despite having differences from a delivery and engagement perspective have similarities in terms of the underlying knowledge and skillsets required.
Both Penetration Testing and Red Team Operations are executed against a set methodology, often created by the offensive security services provider with direct input from their employed pen-testers or red teamers.
The methodologies and approach between a penetration test & a red team operation do differ, however: Pen testing tends to have a limited, set scope that requires only a set number of ethical hacker days to deliver and identify as many cyber vulnerabilities as possible in that time. Conversely red teaming is typically a longer, slower engagement whose objective is to breach the organisation whilst remaining undetected. Often red team operations will run over an extended period of time - sometimes many weeks or months. For most red team engagements the scope is a whole organisation, rather than a set technical scope as with most penetration testing engagements.
The red team approach and methodology explained
CovertSwarm's red team approach and methodology form the foundations of our Constant Cyber Attack offering.
Reconnaissance and Information Gathering
Upon the commencement of CovertSwarm's Constant Cyber Attack service to our clients, our Swarm performs an initial discovery phase to enumerate all assets whilst gathering additional, detailed information relating to the organisation.
This phase of our attack process includes the use of passive discovery techniques before we move to employ more overt, active techniques such as port scanning and manual probing. Depending upon the organisation’s public presence a significant amount of information relating to staff members, infrastructure deployments, and application data is normally obtained through open-source intelligence (OSINT) gathering. An example of this would be the discovery of corporate IP addresses; hostnames; and ranges, along with information that may be beneficial during social engineering attack vectors which may include phone numbers; email; and username details of staff members as well as public-facing APIs; applications; and the operating hours of offices etc.
CovertSwarm often returns to this reconnaissance stage during an engagement as each step within an attack chain may require additional information discovery in preparation for one of our bespoke attacks.
The purpose for us is to always and accurately replicate an Advanced Persistent Threat.
Research & Exploit Development
Following our initial reconnaissance and information gathering phase CovertSwarm then increases the size of the client-focused pool of ethical hackers during the delivery of our research and exploit development phase. This step is normally disclosed to stakeholders at the time of the planning and scoping process.
The purpose here is to utilise specific, advanced skillsets within the team to further benefit CovertSwarm’s delivery and the resulting output for customers.
Utilising the information gathered in the previous phase a number of areas are included in the research and development elements (note: the list below is non-exhaustive):
Performing dedicated research for new exploitation of previously disclosed vulnerabilities (for example, expanding upon proof-of-concept exploit tools to action into a working exploitable vulnerability);
Developing new tools and techniques to evade anti-malware, EDR/XDR, SIEM and similar security controls, and to create bespoke attack infrastructures such as C2 platforms;
Constructing attack scenarios for social engineering vectors, such as phishing, vishing, etc.
This phase of the engagement is crucial to the success of our operations against our clients and takes time to plan, craft and deliver.
CovertSwarm then moves to actively engage in delivering an attack(s) during this phase. All of which is made possible using the information gathered within the research and development phase.
The goals of this phase include the successful compromise of at least a single system; application; individual person(s); or physical locations - without triggering a detection by the customer’s blue team or SOC.
The purpose of this is to evade any possible detection from blue team members who monitor firewall or Intrusion Detection logs - any detection of our activities is a positive sign for our clients that highlights them possessing a strong security posture and that the defences they have in place are effective.
All our activities during the Attack Execution stage are logged in order to support the blue team in terms of ad-hoc and ongoing remediation of possible attack vectors, which includes times, dates, IP addresses, targets, etc. Our audit trail also helps the client blue team to 'fine tune' their monitoring to ensure detection in the future - effectively helping to upskill and tighten their security posture.
At any point during this phase of an engagement, CovertSwarm may return to previous stages (reconnaissance and research/development) to further explore and strengthen possible attack vectors as additional information is uncovered organically: an example of this would be where specific information relating to the anti-malware software, internal software and operating systems, or similar systems information are disclosed to our Swarm during phishing engagements or other social engineering attacks.
The output from our engagement is delivered on a frequent, dynamic basis as general updates throughout our cyclical attack process and is also summarised at both a high level (C-Level audience) and in granular technical detail (Technology team audiences) within formal reports that are produced. All reporting is delivered exclusively via our unique Offensive Operations Centre portal - never via insecure channels such as email.
CovertSwarm operates an open communication policy to ensure that clients are kept abreast of our Swarm's discoveries in real-time. As noted above, general updates are regularly produced (at least weekly) and critical notifications are actioned at the time of discovery to the key stakeholders - with a clear escalation path always being maintained and followed where necessary.
Upon completion of testing the respective Hive Members involved in that cycle of attacks provide a formal debrief to the client by presenting their findings, and making themselves available to answer any questions that may arise from the results.
Client-side stakeholders and senior CovertSwarm leadership members normally take part in the debrief meeting that concludes each significant round of testing. This meeting also often includes the customer’s blue team, SOC and more broadly involved Hive Members within CovertSwarm; the purpose of this is for us to provide as many insights and educational touch-points as possible areas regarding identified areas of cyber risk and weakness. Debriefs can go into great technical and audit-level detail whether the attack exposed an operational process run by the client's Security Operations Centre (SOC), or uncovered a more technical vulnerability. Typically we share details of our exploitations or bypasses, involving timestamps and metadata details of our attacks so that the blue team are able to review logs and technical controls to explore the root cause of compromise and to tighten their monitors and alarms. | <urn:uuid:6ae8eb86-30f9-40e4-ada7-f82561650bd8> | CC-MAIN-2022-40 | https://www.covertswarm.com/post/red-teaming-vs-penetration-testing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00585.warc.gz | en | 0.944745 | 2,399 | 2.59375 | 3 |
Machine learning models, referred to as EHRs (electronic health records) or EMRs (electronic medical records), help clinicians better manage patient health records through population management, diagnostics, and smart documentation. Clinicians also use these EHR data-based models to perform other clinical tasks.
EHR management places a heavy burden on clinicians: studies have shown that over half of a clinician’s workday can be spent on their use! The massive amount of data that EHRs require cause disruption of workflows and have limited interoperability. They also suffer from data overload when the models are not optimal. However, machine learning-based models continuously improve, helping clinicians spend less time maintaining records.
Data utilized in EHR systems includes all kind of clinical data: administrative and billing data, patient demographics, progress notes, vital signs, medical histories, diagnoses, medications, immunization dates, etc. However, artificial intelligence models go further, using the data above to analyze clinician preferences and patient feedback to improve the system.
A good machine learning model should be trained with large amounts of data from many different EHR systems. These include reports involving patient treatments, the outcomes of those treatments, and the equipment used during a patient’s visit. Additional data may include demographic information on patients.
External tools like predictive analytics and language processing also help system management and improvement.
Further external data to use in a good EHR model includes feedback forms from the Consumer Assessment of Healthcare Providers and Systems (CAHPS). Similar data from accredited health organizations are also useful.
As mentioned, EHR management imposes challenges in data input and overload as well as interoperability difficulties.
In addition to the vast amounts of EHR data, clinicians question which goals the AI-powered EHR management system should aim for. In the same vein, clinicians question which tools are most likely to help the systems reach their goals.
Flatiron Health, a data and analytics-driven cancer care service recently acquired by Roche, bought a company with a web-based EHR and tailored it to fit its OncoCloud EHR for community-based oncology.
H1 Insights Medical Affairs collects information on patients to understand their procedures and behaviors which contribute to diagnosis
Definitive Healthcare’s Hospital & IDNs Database provides benchmark data for hospitals and IDNs to compare against competitors and identify growth opportunities.
Zeta-Tools Health Research conducts research among physicians, general population, and patients for marketing needs.
Eixos Economic Data comes from analysts who collect data themselves from out in the field. Sources include news, location, and industry data
Delta Projects In-Home Support helps those with developmental and intellectual disabilities live in their own home with necessary support | <urn:uuid:eaa43f8a-68de-4f7e-96ec-7fcdb430876e> | CC-MAIN-2022-40 | https://www.data-hunters.com/use_case/emr-ehr-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00585.warc.gz | en | 0.936974 | 563 | 2.71875 | 3 |
The Oak Ridge National Laboratory (opens in new tab), a major energy, research and high-performance computing centre in Knoxville, US, has reported a concerted and elaborate plot to get into their network.
The ORNL houses the second-fastest supercomputer in the world, an open-research, 101.7-teraflop Cray XT3/XT4 known as "Jaguar,". It has plans to build another supercomputer.
Around 12,000 potential victims have been identified, all of whom were visitors to the lab between 1990 and 2004.
The culprits managed to get access by sending Trojan laden e-mails to ORNL employees, out of the 1100 emails that were sent, 11 employees opened the infected attachment but did apparently not report the infection.
Although the identity of the hackers is not known, Arstechnica speculates (opens in new tab) that the nature of the attack as well as the target can only point to one possibility: a foreign government.
According to a statement (opens in new tab) released on the ORNL website, "The original e-mail and first potential corruption occurred on October 29, 2007. We have reason to believe that data was stolen from a database used for visitors to the Laboratory."
Parts of the US Government infrastructure like military networks and telecommunications systems are under constant attacks from rogue elements and fortunately, until now, only a small fraction have succeeded in breaching the security barriers. | <urn:uuid:11124c35-ae22-44c2-ab7d-07cee184c37f> | CC-MAIN-2022-40 | https://www.itproportal.com/2007/12/10/hackers-attack-us-supercomputer-labs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00585.warc.gz | en | 0.964481 | 299 | 2.65625 | 3 |
There is a growing demand for big data professionals across the world. According to a recent Forbes article, the top jobs in demand on LinkedIn today are data scientists, machine learning engineers, and big data engineers. People are turning to a career in big data with an eye on building high-salary and exciting careers.
The role of a big data engineer is taking center stage in many organizations across diverse industries. Data engineers are at the core of major business activities in all leading industries. Organizations need data engineers to streamline, categorize and decode the vast amounts of data flowing into their systems every day. Without data engineers, data would remain an indecipherable mass of valueless information. They will lose out on huge business opportunities.
If you are planning a career in data, it is a smart move, and the time is right too. Big data is the buzzword in career pathways. It can set you up for an exciting future.
Before we delve into how to become a data engineer, let’s understand their role and responsibilities.
What Does a Data Engineer Do?
A big data engineer is entrusted with the responsibility of developing, managing, and maintaining an organization’s data infrastructure. They collect and convert data for storage in the company’s databases in an appropriate and accessible format. They collaborate with data scientists and analysts to achieve this critical business objective.
How to Become a Big Data Engineer
You now have a broad idea of what role a data engineer is expected to perform in an organization. Are you convinced that data is where you want to build your career? Here are the steps to follow to become a data engineer.
Get the Relevant Degree
It would help if you began with a degree in computer science or allied fields such as software engineering, math, or physics. The degree will help you develop the basic skills needed to build a launch pad for the future. It will also provide the foundation to become a big data engineer.
Many colleges are offering degree apprenticeships. It works in the same way as a traditional degree but is better in many ways. The course combines academic learning with real work experience, as you can work with a company. It gives you exposure to real-world experience. Many people use this option because it costs less than a full-time degree. The prospects of landing a job are also higher when you graduate with a degree apprenticeship.
Enroll in a Specialized Course
If you have a degree in a different field and the big data inspiration has come later, you can still become a big data engineer. You can take an online course in relevant areas such as data analytics. This option can fast-track your efforts as this is a targeted course. It will also cost you less than what you would pay for a full degree course.
Online courses are available to develop your skills in key areas like big data architecture and data analytics. A comprehensive course will arm you with the basic tools needed to become a data engineer.
Gain Hands-On Experience
The fact is that data engineering is not the first job title you will get, even when you have the relevant degree. You will have to develop into a discipline specialist. To get there, you must start with entry-level jobs and gain some experience in data-specific fields.
In data engineering, experience is a relevant term. It would help if you have worked previously as an analyst or dealt in data science as an intern. You can build on that experience.
Sharpen Your Knowledge of Databases
Brush up your general knowledge of databases. Research the trending tools in the industry to understand them better. Databases form the core of data engineering. They work as the foundation on which larger infrastructures are built. Also, familiarize yourself with Structured Query Language or SQL and NoSQL. Dabble with systems like MySQL or PostgreSQL. These are open-source systems and can help you further sharpen your skills.
Develop Expertise in Handling Diverse Toolset
Develop your skills and sharpen your knowledge by using a diverse set of web-based data engineering tools. It can improve your job prospects vastly. There are hundreds of options to choose from. The most popular ones include cloud architecture (AWS), Cloudstack (Apache), and SQL Server Management Studio (Microsoft).
It is not practically possible to gain expertise in all these tools. However, it is important to know them at a basic level. You need to understand the underlying principles. There are numerous data engineering tools out there, so it’s common to find them all integrated.
Consider Alternative Jobs
To take the right step into big data engineering, you might have to transverse a long road. Don’t be disappointed if you don’t find your dream job immediately. Most people don’t get their deserving jobs straightaway. Consider big data engineering as a long-term objective. This approach will not leave you dejected at initial failures. Keep trying your hand at other related jobs. They can offer you the learning curve you need to climb ahead.
For example, you can start working as a developer. You can get a developer’s job more easily than in other roles. You can also consider working as a data analyst in the early stages of your career. It has proven to be an excellent stepping stone toward data engineering.
Likewise, any computer-specific or data-related job will provide you with hands-on experience to develop important skills. It is seen that the most effective data engineers are those who have persevered. They have gained a breadth of expertise from different jobs and at different levels.
Data engineering is a vast field. There is a growing demand for skilled professionals as companies face the challenge of managing colossal data volumes. This field has enormous potential for those willing to persevere and develop their skills.
Many institutions are offering postgraduate diploma courses in software development. Specialized programs in big data are also available for working professionals. Most of these courses are online. They cover major programming languages and tools. You can also sharpen your skills at practical hands-on workshops. They offer rigorous learning opportunities and also come with job placement assistance with leading IT firms. | <urn:uuid:6a5554c1-51c8-4c63-9380-8c98ac0d5b81> | CC-MAIN-2022-40 | https://www.baselinemag.com/careers/how-to-become-a-data-engineer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00585.warc.gz | en | 0.956199 | 1,268 | 2.8125 | 3 |
Digital inclusion and the European Accessibility Act: both a necessity and an opportunity for TMT players
The transition to a more-digital world provides many new opportunities that can affect our everyday lives. However, the digital world is not accessible to all. Indeed, 8% of 16–74 year olds in the EU did not use the internet at all in 2021.1 The digital divide is even greater on a global level: more than a third of the world’s population (that is, 2.9 billion people) did not use the internet in 2021.2
The European Accessibility Act aims to improve digital inclusion
People with disabilities and the elderly are the most likely to be excluded from the digital world. Here, we take Sweden as an example, even though the country had one of the highest levels of internet usage in the EU in 2021.1 Internet usage among various user groups in Sweden differs significantly. Only 6% of Swedish people aged 16 and above did not use the internet in 2021.3 This figure grew to 20% among people with disabilities (of all ages) and to 33% among people aged 76 and above (Figure 1).
Figure 1: Internet usage among various user groups, Sweden, 20213
Source: Internetstiftelsen, 2021
The European Accessibility Act (EAA) addresses the digital divide by aiming to improve the internal market in the EU for accessible products and services.4 Improved accessibility will increase digital inclusion by helping people with disabilities to participate in our ever more digitalised society. The EAA is therefore a step forward in reducing the barriers for people with disabilities in the EU. The act will come into force in 2025 and will cover a range of products and services in the TMT sector, including computers and operating systems, smartphones, telephony services, TV equipment related to digital television services and e-commerce. The affected products and services must be designed and produced to fulfil a range of requirements with regards to accessibility, including the use of text-to-speech technology, the availability of instructions via more than one sensory channel and the provision of software and hardware for interfacing with the assistive technologies.
TMT players must start preparing now in order to be able to comply with the new act.
TMT players currently have a relatively limited understanding of the consequences of the European Accessibility Act
TMT players will face several challenges when integrating digital accessibility and inclusion into their businesses. The level of readiness varies, but in many cases, it needs to improve. Analysys Mason performed a study in 2021 on behalf of the Swedish Post and Telecom Authority covering the biggest telecoms operators in Sweden. The results show that operators’ knowledge of the specific requirements of the EAA is still relatively limited. The results further highlight the importance of co-ordination and collaboration between different departments because complying with the EAA will involve legal/compliance expertise as well as technical and commercial knowledge. The costs and benefits of adhering to the EAA from a business perspective have proved to be difficult to estimate, thereby further complicating the planning and preparation process. However, our interview-based study that examined the consequences of EAA in Sweden concluded that there will be material costs.
Nonetheless, complying with the act is also expected to benefit TMT players. Improving the accessibility of products and services may bring additional revenue because more people will be able to make use of them. Expanding the customer base and improving the customer experience across the board provides TMT actors with further incentives to embrace digital inclusion.
Analysys Mason supports TMT players in improving their digital accessibility
Analysys Mason has performed a number of projects in the area of digital inclusion and accessibility and can provide TMT players with guidance about how to improve the digital accessibility of their products and services in order to adhere to the EAA. We have experience in supporting clients to integrate a user perspective throughout the development process, including methods such as usability testing. We have also developed a framework to perform cost/benefit calculations based upon our study of the costs and benefits of the EAA in Sweden.
For further information, please contact Maria Tunberg, Principal, Analysys Mason.
1 Eurostat (2021), Digital economy and society statistics – households and individuals. Available at: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Digital_economy_and_society_statistics_-_households_and_individuals#Internet_usage.
2 International Telecommunication Union (2021), Measuring digital development. Facts and figures. Available at: https://www.itu.int/en/ITU-D/Statistics/Documents/facts/FactsFigures2021.pdf.
3Internetstiftelsen (2021), Svenskarna och internet. Available at: https://svenskarnaochinternet.se/rapporter/svenskarna-och-internet-2021/.
4 European Commission, European Accessibility Act. Available at: https://ec.europa.eu/social/main.jsp?catId=1202#navItem-1. European Commission, European Accessibility Act – Improving the accessibility of products and services in the single market. Available at: https://ec.europa.eu/social/BlobServlet?docId=14869&langId=en.
Rural broadband presents new opportunities and unique challenges for the telecoms industry
Artemis 1 is the first public step in a bold venture to return to the Moon
Now is the time to invest in the new cloud-based AR/VR market | <urn:uuid:71f7f278-f8d6-4147-b31c-9cd8ba6b2ebd> | CC-MAIN-2022-40 | https://www.analysysmason.com/about-us/news/newsletter/digital-inclusion-eaa-rma08/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00585.warc.gz | en | 0.920294 | 1,166 | 2.734375 | 3 |
No matter how small your business is, there should be an awareness and basic understanding of the threats posed in the cyber-world. It will protect your digital assets, intellectual property, business reputations and the business itself. Any information stored on your systems might be interesting to cyber criminals to steal. It could be an individual or a small company. The current top five key cyber threats are given below with brief explanation.
It is a form of malware that attempts to encrypt the victim’s data and then extort a ransom to release an unlock code. The common vector that ransomware can take to access is through phishing emails attached with some files. Once these files are downloaded in the victim’s computer, it will take over the computer by stealing data. There are many different danger ransomware such as Sodinokibu, Maze , Mac, Crypto ransomware and many more. It is very important to take key steps to protect your company.
- Cybersecurity awareness and training: All the staff and third party vendors should be given some knowledge and awareness regarding suspicious emails and texts.
- Malware protection: There are good antivirus and malware protection software in the market to buy and to prevent from the ransomware attacks.
- Software updates and patch: Every day there is a new attack, new changes in software and technologies. It is very important to update your software and applications up to date.
- Data backups: In the worst case scenario, data backups will play an important role to protect a company’s sensitive data. However the Maze ransomware does not allow the victims to reset and restore their data from backups. Thus, taking preventive measures is always better than responding after the ransomware attacks.
The FBI’s Internet Crime Complaint Center reported that people lost $57 millions to phishing schemes in one year. The main reason why people fall under such scam is because they do not have any cybersecurity awareness and they tend to share their sensitive data to these scam emails and phones. People are not aware how cyber criminals misuse their personal data in Dark Web. Here are a few steps you can use to protect yourself.
- Make use of anti-virus software
- Make sure to have spam filters turned on and check them regularly in case they have accidentally trapped innocent emails.
- Do not click on any links listed in the email message, and do not open any attachments contained in a suspicious email.
- For US-CERT Security Tip: Avoiding Social Engineering and Phishing Attacks.
3. Data leakage
The unauthorized transmission of data from within an organization to an external destination or recipient is known as a Data leakage. It can be transferred electronically or physically. Data leakage threats usually occur through the web and email and mobile data storage devices such as optical media, USB keys, and laptops. It is a huge problem for data security, and it damages any organizations regardless of size of the company and as well as an individual. The following steps are some preventive measures to protect from data leakage.
- Ensure that you have strong mobile passcodes.
- If your device is lost, make sure to have data backup restored and also to wipe out the data in the loose device remotely.
- Be aware of any phishing emails or text messages.
- Make sure to keep an update with your bank and credit card statement.
Hacking is an unauthorized access to or control over computer network security systems for some illegal purposes. The one who is intelligent and highly skilled in computers. The main target of hackers are financial institutions, attempting to gain access over bank accounts, steal data to make fake credit cards and to sell it on the dark web. The use of phishing emails and social engineering, tricking staff and users into revealing usernames and passwords, remains a threat.
- Make sure to turn on Firewall in your devices.
- Never give your sensitive data in phone calls.
- User awareness and training programs should be provided.
5. Inside Threat
Someone close to an organization, with authorized access to some data and operators, misuses the authority for personal gain including for fun, or financial gain. Such a type of person is called an Inside threat. This person does not have to be an employee, this person could be a third party vendor, contractor, and a partner could pose a threat as well. According to Accenture, 69% says their organizations have experienced an attempted or successful threat or corruption of data in the last 12 months. To mitigate the size of any data leak, these below steps can be taken.
- Limit how much data staff has access to. The principle of ‘least privilege access’ should apply to all IT systems.
- Control the use of portable storage devices, such as USB memory keys, portable hard drives and media players.
- Consider using applications in certain situations to monitor staff behavior − who copies what. | <urn:uuid:d3727ad2-cd17-43cc-893a-995c9715ea65> | CC-MAIN-2022-40 | https://www.lifars.com/2020/06/key-cyber-risks-and-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00585.warc.gz | en | 0.933383 | 1,009 | 2.546875 | 3 |
Basic Operation of SQLMAP & enumeration of Server through automatic SQL Injection.
SQLMAP is a database pentesting tool used to automate SQL Injection. Practically using sqlmap, we can dump a whole database from a vulnerable server. SQLMap is written in python and has got dynamic testing features. It can conduct tests for various database backends very efficiently. Sqlmap offers a highly flexible & modular operation for a web pentester. It can act as a basic fingerprinting tool and till upto a full database exploitation tool.Simply we can say that there will be no web application testing without sqlmap. All in all, fully loaded..!
Features of SQLMAP
- Microsoft SQL Server, Microsoft Access,
- IBM DB2,
- SQLite, Firebird,
- SAP MaxDB
- Supports 6 types of Injection Techniques
- boolean-based blind,
- time-based blind,
- UNION query-based,
- stacked queries
- Ability to perform operations on specific DBs,tables,columns or even dump whole database. Offers multiple database capabilities also.
- Supports execution of arbitary queries and system commands
- Ability to inject backdoors.
- Specific attacker functions on databases.
- Multicolored output indicating different messages.(Green=Info; Yellow=Warn; Red=Critical; BOLD Green=Interesting etc.)
Bit About SQLi
SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution – Wikipedia
Sql injection is basically making the backend database server to execute unintended queries to gain information or to bypass authentication or to execute a command in the remote host and various other malicious purposes. These unintended queries are usually executed by inputting special operational characters(dependent on the backend DBMS) through input forms in web pages like login forms. By performing SQLi an attacker can perform various types of tasks on the remote machine. SQLi is the most widely found vulnerability among websites. Click here to view some statistics.
Attacker Machine: Kali Linux 2.0 (VM)
Target: OWASPBWA (VM), IP Addr: 192.168.0.104, Application: Mutillidae
Target URL(Scope) : http://192.168.0.104/mutillidae/
Lab 1 : Banner Grabbing
In this lab, we are simply grabbing the banners from the remote machine. Details like backend DBMS, Web application technology, Server OS, Web server type & version etc are retrieved from this operation. For this we need to specify in the exact url or a file which contains the request to the url. In this tutorial, we are performing the operation with a file containing the request. We can take this request with the help of burpsuite. We can turn ON the intercept & forward the request from our browser to burpsuite. Seeing the request we can copy the request & paste it in a file. Refer to tutorial on burpsuite here to learn how to start with burpsuite.
Step 1 : Take Request
Open the login page of the Mutillidae(or which ever target you have).
Open Burpsuite & turn ON intercepting proxy. Also configure browser to send connections to burpsuite as a proxy. Refer here to see how to do this.
Come back to browser & give some data in the text boxes & submit.
See request intercepted at burpsuite. Copy the entire request to a new file. Here I am using “mut-sqlmap-bypassauth-post.req”. Then save the file.
Note: After turning ON Intercepting in Burp, select the POST request only. The request should be the one which you would do when performing a browser based manual SQL Injection.
Edit the file in any text editor to make the username & password blank. Give 2 single quotes.
Step 2 : Run SQLMAP with the file
Command: sqlmap -r mut-sqlmap-bypassauth-post.req<replace with yours> --threads=10<optional> -b
Sqlmap asks couple of questions during the execution. You can answer yes (‘y’) for all of them but do read them carefully.
You can get to see various messages & the actual operation done by sqlmap and finally the results are shown.
Here the webserver, backend database web technology & the system OS are displayed. All this information is stored in a local directory also. You can try reading them also.
Mutillidae Download Link: http://sourceforge.net/projects/mutillidae/
OWASP BWA Download Link: http://sourceforge.net/projects/owaspbwa/?source=directory | <urn:uuid:ef98df18-fbf0-4d2f-bd73-d14c7d021a08> | CC-MAIN-2022-40 | https://kalilinuxtutorials.com/sqlmap/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00585.warc.gz | en | 0.827248 | 1,050 | 2.65625 | 3 |
When charging lead-acid batteries during a boost in battery charging, or overcharging, or when a cell has achieved approximately 95% of its charge a chemical reaction occurs between the water/sulfuric acid solution and the lead plates which produces hydrogen gas. This is generated from the battery due to the action of the electrolysis of water contained in the electrolyte solution.
Common Battery Charging Applications
Golf Cart Charging
Both hydrogen and oxygen are separated through this process and can leak through the battery vents and disperse into the surrounding environment. The combination of Hydrogen and oxygen/air can be explosive if there is enough Hydrogen build-up. The amount of Hydrogen produced depends on the size and number of batteries and the size of the room plays a factor in the safety risk.
The best way to mitigate these risks is with a fixed gas detection system within the charging area. The Macurco GD-6 combustible gas detector is designed to detect dangerous hydrogen levels, alert personnel, and control connected devices to ensure those levels are reduced.
This past year, over 5,000 new phishing sites were created each day. These numbers are projected to grow throughout 2017.
Why do attackers leverage phishing as their primary method of attack? Simply put: it’s easy. It’s easy because they exploit the one thing that will always be constant: human error. Despite all the advances in security, attackers know that they can rely on human error to gain access to your critical data. With phishing attacks on the rise and human error as a constant, it’s important for you to explore the inner workings of the most common forms of phishing attacks, and learn the best methods to stop them.
Jordan Wright is an R&D Engineer at Duo Security as a part of the Duo Labs team. He is the R&D Lead for Duo Insight, Duo’s free phishing assessment tool, and the creator of GoPhish, an open source phishing toolkit.
The CWDM network, as an easy-to-deploy and cost-effective solution, has been applied in many areas. Although CWDM cannot match DWDM networks in data capacity, it can still satisfy a wide range of optical applications. And CWDM is a passive network, allowing any protocol to be transported over the link as long as it runs at the specified wavelength. This article describes several application cases of 10G CWDM networks in different areas.
Although 40G and 100G networks are developing rapidly, many of them still need to grow on the basis of 10G networks. And due to the high cost of 40G and 100G, 10G networks are still the most commonly deployed. Here are the main benefits of 10G CWDM networks.
- CWDM Mux/Demux is a passive component and requires no extra power, offering a cost-effective choice for network designers.
- Increased network connections and easy evolution from 10G to 40G and 100G networks. For example, a 10G CWDM network can combine DWDM wavelengths using the 1550nm channel on a CWDM Mux/Demux. And if an operator wants to upgrade its 10G network to 40G or 100G, there are various fiber components on the market that can help make this transition.
- Lower cost. 10G hardware has become cheaper, which makes 10G CWDM networks more economical. For example, an 8 channels CWDM Mux/Demux, the model most often used in CWDM networks, costs less than 330 dollars in some stores. And 10G CWDM optical transceivers are also very cheap now.
As mentioned above, 10G CWDM networks have been widely deployed in different areas. Here are the common CWDM network infrastructures.
A point-to-point CWDM network is the simplest CWDM network structure, but it is the basis of other, more complex infrastructures. By adding other components like a CWDM OADM, a point-to-point CWDM network can easily be changed into a more complicated one. The following figure shows a point-to-point CWDM network using an 8 channels CWDM Mux/Demux.
CWDM ring links are suitable for interconnecting geographically dispersed LANs and storage area networks. Businesses can benefit from CWDM by using multiple Gigabit Ethernet links. As shown in the picture below, the four buildings are connected by several 8 channels CWDM Mux/Demuxes.
CWDM uses different wavelengths to carry different signals over a single optical fiber, which offers many benefits to service providers that need to better utilize their existing fiber infrastructure. In this application, two Cisco switches are connected through four 8 channels CWDM Mux/Demuxes. Signals are multiplexed and then transmitted over two strands of fiber cable.
As many campuses expand in scale, the bandwidth needed for new applications keeps increasing. New campuses, classroom teaching, and student life all require considerable Internet bandwidth, so building a new network would undoubtedly require a large investment. How to make full use of the existing fibers is therefore a problem that needs to be solved.
In this case, four 8 channels CWDM Mux/Demuxes with expansion port are used to double the capacity on the existing fiber without the need to install or lease additional fibers, which reduces cost and labor.
As WDM technology and the market develop, the cost of deploying CWDM networks will fall even lower. FS.COM provides affordable CWDM network components. The following is a list of our products.
|Item ID|Product Description|
|42945|8 channels 1290-1430nm dual fiber CWDM Mux Demux|
|43099|8 channels 1470-1610nm dual fiber CWDM Mux Demux with expansion port|
|19367|Cisco Compatible 10G CWDM SFP+ 1470nm 80km DOM Transceiver|
|31290|Cisco Compatible 10G CWDM SFP+ 1290nm 40km DOM Transceiver|
Potatoes are packed with vitamins and minerals, though the variety and preparation method can affect the nutritional content.
A good source of antioxidants, which may reduce the risk of chronic diseases like heart disease, diabetes and certain cancers.
Contains resistant starch, which may help reduce insulin resistance. In turn, this can help improve blood sugar control.
Resistant starch in potatoes is a source of nutrition for beneficial gut bacteria.
They convert it to the short-chain fatty acid butyrate, linked to reduced inflammation in the colon, improved colon defenses and a lower risk of colorectal cancer.
Naturally gluten-free, which makes them an excellent food choice for people with celiac disease or a non-celiac gluten sensitivity.
Try boiling, baking or steaming them and consuming them with the skin intact.
For more tips, follow our Today’s Health Tip listing.
Cybercrimes are on the rise, affecting more of the population each year. Resembling an online battle, government agencies like the FBI are working to combat these crimes, finding new ways to take on the savvy online criminals who are destroying people’s credit, as well as their lives.
Some states, however, appear to be more susceptible to cybercrimes than others, although all are affected to some degree. While you can most likely guess a few of the states topping the list, others might surprise you. Knowing whether your state sits in the top ten states most affected by cybercrimes can prompt you to take notice and act now to protect yourself, especially if you own a small business.
What are Cybercrimes?
A cybercrime is a computer and network-oriented action which damages another, usually financially. These crimes affect both individuals and companies.
Common Types of Cybercrimes
The most common types of cybercrimes include:
- Phishing: Scams in which a hacker or cybercriminal attempts to lure personal or sensitive information from an individual or company computer user
- Malware: Cybercrime involving malicious software creating viruses, worms, and spyware on a computer
- Identity theft: Accessing personally identifiable information, such as Social Security numbers, and using it fraudulently (for both credit and debit fraud)
- Online harassment
- Invasion of privacy
Top Ten States Affected by Cybercrimes
According to the 2019 FBI Internet Crime Report, ten states are at higher risk of cybercrimes. Their report compiles statistics, including:
- Number of cybercrime victims (based on the number of complaints)
- Total monetary losses in the state
- Number of cybercriminals
- Total earnings by the identified cybercriminals
This list of top ten states mostly affected by cybercrimes is ranked according to total monetary losses resulting from cybercrimes alone, not on the number of victims reporting these crimes.
Total Losses: $573,624,151
Residents of California report more cybercrimes than any other state, likely due to its large population and number of businesses. The total number of cybercriminals identified in the state in 2019 was 17,517. There were 50,132 victims, with average losses of $11,442 each.
Total Losses: $293,445,963
Florida is known for its high number of senior citizens, which may partially explain why it ranks so high on this list. The total number of identified cybercriminals is 11,047, with 27,178 victims suffering losses of around $10,797 each.
Total Losses: $264,663,456
A surprise at #3 is Ohio, a state with only average internet access and a lower overall median household income. Yet, with just under 10,000 victims, losses soar to roughly $28,000 on average for each one. Reportedly, 2,508 cybercriminals milked almost $15,000,000 out of victims here in 2019.
Total Losses: $221,535,479
As the second-largest state in the country, Texas has a huge resident population, so there are more opportunities for cybercrime based on the sheer number of people. The total number of identified cybercriminals in the state comes in at just over 10,000, with earnings over $126 million.
Total Losses: $198,765,769
While New York is full of companies and has a dense population, its total losses from cybercrimes, just under $200 million, put it in fifth place.
Total Losses: $107,152,415
Home to several major companies, including industrial complexes and large state universities, Illinois has one of the country’s best broadband access levels. Because of this, it has become a target of cybercriminals. Total losses to cybercrime in the state in 2019 equaled $107,152,415, with over 10,000 victims involved.
Total Losses: $106,474,464
With its close proximity to New York, the state of New Jersey holds its own when it comes to median resident income, one of the highest in the country. Company headquarters and various businesses are found in the state, creating high temptation for cybercriminals. Total losses to cybercrime in the state in 2019 topped out at $106,474,464.
Total Losses: $94,281,611
Thanks to its high-speed technology, top universities, and company headquarters, Pennsylvania finds itself in the top ten states affected by cybercrime. Victims here lost on average $8,639 each in 2019.
Total Losses: $92,467,791
Located near Washington, DC, Virginia is densely populated in areas with government workers and contractors. It makes sense, then, that cybercriminals are attracted to this state. Just under 5,000 cybercriminals earned almost $25 million in 2019 alone here.
Total Losses: $84,173,754
With one of the highest median household incomes and full internet coverage across the state, Massachusetts attracts a wide variety of cybercriminals.
So, in what state will you least likely experience a cybercrime? According to the FBI report, Vermont is your best bet.
Ways to Prevent Cybercrimes from Happening to You
Whichever state you live in, there are steps you can take to protect yourself from being a victim of cybercrime.
- Use anti-virus software: it allows for regularly scanning, detecting, and removing cybercrime threats from your computer.
- Update your operating system and other software: keeping everything updated provides the latest security patches for protection.
- Create stronger passwords and change them regularly. Consider keeping track of passwords with a password manager program.
- Avoid opening spam emails and any attachments.
- Decline providing personal information or clicking on suspicious links asking you to confirm or update personal information, including passwords.
- Beware of non-legitimate websites and odd-looking URLs.
State laws differ when it comes to cybercrimes, so you may want to gain knowledge on your rights within your particular state. For example, the states of Florida and New Jersey have in-depth laws with detailed factors and classifications for felonies and misdemeanors. When cybercrimes cross state lines, the FBI can be involved as well.
Cybercrime led to $3.5 billion in overall damages throughout the states in 2019. Even with the passing of more legislation and detailed state laws, the number of crimes is expected to continue to rise. The best thing all state residents can do to help is to be aware of cybercriminals’ motives and do what they can to protect themselves and their businesses.
We know that conventional datacom links use single-mode fiber (SMF) for long-distance, high-speed links and multimode fiber (MMF) for shorter links. Early datacom applications, including ESCON, Token Ring, FDDI, Ethernet, and ATM, operated at relatively slow data rates (4-155 Mbit/s), using low-cost infrared light-emitting diode transmitters (LEDs). And this article will focus on OM3 multimode fiber.
The earliest fibers, called Optical Multimode 1 (OM1), featured a large core than is used today and a bigger numercial aperture. As the technology matured, smaller core MMF was typically rated for a minimum bandwidth-distance product around 160 MHz*km for 62.5/125 micron fiber at 850 nm wavelength; 500 MHz*km for 50/125 micron fiber at this wavelength; and 500 MHz*km for both fiber types at 1300 nm wavelength. This fiber was compatible with various industry standards, including CCTIT recommendation G.652, and was defined by the ISO standards as “optical multimode 2” (OM2) fiber; it is also commonly known as “FDDI grade” fiber, The fiber bandwidth was measured using an overfilled launch (OFL) test procedure, which replicated the large spot size and uniform power profile of a LED. Since a LED consistently fills the entire fiber core, the fiber bandwidth is determined by the aggregate performance of all the excited modes. However, LED sources typically have a maximum modulation rate of a few hundred Mbit/s; with the growing demand for higher data rates, laser sources operating over SMF were required.
Single-mode links using Fabry-Perot or distributed feedback lasers operating at long wavelength (1300nm) tend to be higher cost due to their tighter alignment tolerances and higher performance characteristics. There is lower cost alternative; the recent deployment of short-wave (780-850 nm) vertical cavity surface emitting lasers (VCSELs) has made it possible to use MMF at higher data rates over longer distance. Compared with LEDs, VCSELs offer higher optical power, narrower width, smaller spot size, less uniform power profiles, and higher modulation data rates. This means that a VCSELs will not excite all of the modes in a MMF; the fiber bandwidth is determined by a restricted set of modes, typically concentrated near the center of the core. Older MMFs experienced significant, often unpredictable variations in bandwidth when used with VCSEL sources due to defects or refractive index variations in the fiber core and variations in the number and power of excited modes due to fluctuations in the VCSEL output or between different VCSEL transmitters.
In response to these problem, the datacom industry developed a new type of laser-optimized or laser-enhanced MMF specifically designed to achieve improved, more reliable performance with VCESLs. Precise control of the refractive index profile minimizes modal dispersion and differential mode delay (DMD) with laser sources, while remaining backward compatible with LED sources (the dimensions, attenuation, and termination methods for laser-optimized and conventional fiber are the same). The first laser-optimized fibers, introduced in the mid-1990s, were available in both 50-microns and 62.5-micron varieties and designed for 1-Gbit/s operation up to a few hundred meters. These fibers were not always capable of scaling to higher data rates; with the increased attention on 10-Gbit/s links, never types of reaching about 35 meters at 10-Gbit/s, it became apparent that the smaller core diameter and reduced number of modes in 50 micron fiber made it the preferred choice for these data rates. Today, laser-optimized fiber is commonly available only in 50-micron versions, with an effective bandwidth-distance product around 2000 MHz*km for 850 nm laser sources. The bandwidth must be measured using a restricted mode launch (RML) test, instead of the conventional OFL method. This fiber was defined in the TLA-568 standard as “laser-optimized multimode fiber, ” and in the ISO 11801 (2nd edition) by its more common name, “optical multimode3” (OM3) fiber. Click to buy OM3 fiber patch cables.
An early example of laser-optimized fiber is the Systimax Lazer SPEED fiber introduced by Lucent, which uses a green jacket to distinguish it from existing multimode (orange) , single-mode (yellow) , and dispersion-managed (purple) fiber cables. Attenuation is about 3.5 dB/km at 850 nm and 1.5 dB/km at 1300 nm; bandwidth is 2200 MHz*km at 850 nm (500 MHz*km overfilled) and 500 MHz*km at 1300 nm (no change when overfilled) . Another example is the Corning Infini-Core fiber, which typically uses an aqua-colored cable; the CL 1000 line consists of 62.5-micron fiber made with an outside vapor deposition process that achieves 500-m distances at 850 nm and 1 km at 1300 nm. Similarly, the CL 2000 line of 50-micron fiber supports 600-m distances at 850 nm and 2 km at 1300 nm. Here is a figure of OM3 multimode fiber for you.
Most recent installations of Ethernet, Fibre Channel, InifiniBand, and other systems use the preferred OM3 multimode fiber (for example, the OM3 SC to LC), and many legacy systems including ESCON are compatible with this fiber. In order to avoid the associated with installing new fiber, most standards attempt to accommodate various types of MMF. While the idea of backward compatibility works reasonably well up to 1 Gbit/s (distances of a few hundred meters can be achieved) , it begins to break down at higher data rates when the achievable distance is reduced even further. Designing a future-proof cable infrastructure under these conditions becomes increasingly difficult; at some point, new fiber needs to replace the legacy MMF. Although SMF should be a good long-term investment, the short-term cost premium for SMF installation and ports on many switches, servers, and storage devices remains a concern. Since the cost of short-wave transceivers is presently lower than long-wave transceivers, there is still some question as to the preferred fiber to install and the best mixture of 62.5-micron and 50-micron MMF. In general, 50-micron fiber has been widely deployed in Europe and Japen, while North America has primarily used 62.5-micron MMF until recently. The IEEE has recommended using 62.5-micron MMF in building backbones for distances up to 100m, and 50-micron fiber for distances between 100 and 300 m.
Mixing OM2 and OM3 fibers in the same link results in an aggregate bandwidth proportional to the weighted average of the two cable types. Care must be taken not to mix 50-and 62.5-micron fibers in the same cable plant, as the resulting mismatch in core size and numerical aperture creates high losses. This can make it difficult to administer a mixed cable plant, as there is no industry standard connector keying to prevent misplugging different types of MMF into the wrong location.
About the Author:
I am working in Fiberstore to share the fiber optic networking knowledge and products’ information with people. Fiberstore is a largest supplier of optical network solutions worldwide. You can get the cheapest fiber optic patch cords here. | <urn:uuid:781769b7-64da-4689-b039-983c6f3f2b96> | CC-MAIN-2022-40 | https://www.fiber-optic-cable-sale.com/tag/om3-multimode-fiber | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00785.warc.gz | en | 0.928055 | 1,593 | 2.75 | 3 |
SSL Beyond the Basics Part 1: Protocol Selection
Here at Delinea we have a wide range of recommended security best practices for our customers, and one of the first things we recommend is setting up SSL, or Secure Sockets Layer, for Secret Server.
Setting up SSL is fairly trivial once an SSL certificate is obtained. Once it’s set up, SSL provides a few different security layers. The first is that web traffic is encrypted between the client (such as a browser) and the server, which prevents eavesdroppers from seeing data communicated between the two. The second is confidence that the client is communicating with the server it believes it is communicating with, which mitigates Man in the Middle attacks.
There is a lot that can be done to enhance the protection provided by SSL, and the first step is understanding the SSL protocol versions and features.
SSL and the Version Negotiation
SSL is commonly used as an umbrella term for both SSL and TLS, or Transport Layer Security. TLS is SSL’s successor. There are different versions of TLS and SSL, and each version supports different features.
When a client establishes a connection with a server, the first step is a negotiation, where the client and server have to agree on what protocol version to use. The negotiation between client and server usually starts with the most secure option and progresses to least secure. Both the client and server have the right to refuse a protocol version, and they go down the list until an agreement can be made.
A key reason for this negotiation is it provides the ability to remove old and untrustworthy versions of SSL or TLS. If a vulnerability were ever to be discovered in one of the versions, servers and clients can be updated to refuse communication through that version. Likewise, new protocols can be added to a client’s or server’s list of supported options. Because a server could support a newer protocol version than the client browser supports, the version negotiation is needed to settle on a version that is supported by both sides.
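On the client side, modern TLS libraries let you pin the acceptable range of versions explicitly. As a minimal sketch, here is how Python’s ssl module (version 3.7+) can refuse anything older than TLS 1.2; the host name is just an example:

```python
# Sketch: a client that refuses SSL 2.0/3.0 and TLS 1.0/1.1 outright.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # floor of the negotiation

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```

If the server cannot meet that floor, the handshake simply fails instead of silently downgrading.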
Disable Insecure Versions: SSL 2.0 and 3.0
SSL 2.0 is widely considered broken and unsafe for secure communication, regardless of the strength of the digital certificate. Most clients will refuse to use SSL 2.0 for this very reason. Just in case, it’s a good idea to disable SSL 2.0 on the server side where Secret Server is installed to proactively stop clients from using an old version of SSL.
Fortunately, Windows Server 2012 and 2012 R2 already disable SSL 2.0, so there is no action to take there, but if you use other versions of Windows Server, such as 2008 and 2008 R2, you will have to disable SSL 2.0 manually.
Disabling SSL 2.0 is easy enough for those older platforms. Microsoft has a support article available on their site under KB 187498 that walks through the process and provides a tool to automate the change.
On October 14th, 2014, Google announced a vulnerability for SSL 3.0 called POODLE, which outlines a design flaw of SSL 3.0. It is strongly recommended that SSL 3.0 be disabled by following the same instructions for disabling SSL 2.0. SSL 3.0 by default is enabled for all versions of Windows today.
Enable Secure Versions: TLS 1.1 and 1.2
As we mentioned above, TLS offers better security than SSL, with TLS 1.2 offering the best. TLS 1.2 offers support for authenticated ciphers, such as AES-GCM, and an improved pseudorandom function based on SHA-256.
Windows Server 2012 and 2012 R2 offer both of these protocols by default, so there is nothing that needs to be done to enable them.
Windows Server 2008 R2 supports these protocols, but they must be turned on manually. Microsoft’s KB 235030 explains the process in more detail. The steps are as follows (a scripted version is sketched after the list):
- Create a registry key named “TLS 1.1” under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols.
- Under the new TLS 1.1 key, create a Client subkey and Server subkey.
- Create a 32-bit DWORD in both the Client and Server subkey called “Enabled” and set its value to 1.
- Create a 32-bit DWORD in both the Client and Server subkey called “DisabledByDefault” and set its value to 0.
- Repeat this process for TLS 1.2 by substituting “TLS 1.1” with “TLS 1.2” in the first step.
- Reboot the server.
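For admins who prefer to script the change, below is a minimal sketch using Python’s built-in winreg module. It assumes it is run with administrative privileges on the server and encodes the same registry values listed above:

```python
# Sketch: enable TLS 1.1 and 1.2 in Schannel (run as Administrator).
import winreg

BASE = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

def enable_protocol(protocol):
    """Create Client/Server subkeys with Enabled=1, DisabledByDefault=0."""
    for side in ("Client", "Server"):
        path = "{}\\{}\\{}".format(BASE, protocol, side)
        with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
            winreg.SetValueEx(key, "DisabledByDefault", 0, winreg.REG_DWORD, 0)

for protocol in ("TLS 1.1", "TLS 1.2"):
    enable_protocol(protocol)

print("Registry updated; reboot the server to apply the change.")
```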
Unfortunately, for Windows Server 2008 and older, TLS 1.1 and 1.2 are not available at all. In this case, we recommend upgrading the operating system. Another option is to use a reverse proxy that does support TLS 1.1 and 1.2, either hardware or software, to offload the SSL responsibility from the server to the reverse proxy.
Our takeaways are:
- There is no practical reason for SSL 2.0 to be enabled anymore. Disabling it ensures unsecure clients don’t attempt to use it.
- SSL 3.0 should not be used anymore due to the POODLE attack.
- Enabling TLS 1.1 and 1.2 offers more robust security than SSL.
- Windows Server 2012 and 2012 R2 already come configured with SSL 2.0 disabled and TLS 1.1 and 1.2 enabled out of the box.
A key component of SSL is the set of cryptographic algorithms underneath the covers. These algorithms make up the cipher suite. The cipher suites accepted during the negotiation step above have a big impact on security, which can be improved by disabling weaker cipher suites and setting a preference for the more secure ones. Check back next week for more details!
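As a small preview of what that looks like in practice, Python’s ssl module exposes the OpenSSL cipher-string syntax for expressing such a preference. The exact string below is illustrative, not a recommendation:

```python
# Sketch: prefer ECDHE key exchange with AES-GCM; drop anonymous and MD5 suites.
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")
for suite in ctx.get_ciphers()[:5]:   # show the first few accepted suites
    print(suite["name"])
```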
This post was updated on October 14th, 2014 to recommend disabling SSL 3.0 after the POODLE flaw was announced.
To better understand how Americans think about hacker motivations, consumer versus business security responsibilities, ransomware and the political climate’s impact on the threat landscape, Kaspersky Lab and HackerOne surveyed over 5,000 U.S. consumers at least 16 years old.
The results of the study, revealed at RSA Conference 2017, are as follows:
State of cybersecurity and politics in the U.S.
The research shows that Americans remain divided regarding what impact the new president will have on the nation’s cybersecurity protection.
- Nearly half of U.S. adults surveyed (44%) believe that North America will be more vulnerable to cyber-espionage or nation-sponsored cyberattacks with Donald Trump as president of the United States.
- Of the U.S. millennials surveyed, more than half (56%) thought that North America would be more vulnerable to cyber-espionage or nation-sponsored cyberattacks with Donald Trump as president.
Cyber security practices affect consumers purchasing decisions
The results revealed that consumers are beginning to make purchasing decisions based on the cyber security practices of businesses; and younger generations, who are considered digital natives, see value in companies hiring hackers to help protect consumer data.
- More than one in five (22%) U.S. adults are more likely to make a purchase if they know a company hired hackers to help boost security.
- Only 36 percent of U.S. adults said that they would choose to be a customer of their own employer knowing what they know about their company’s cybersecurity practices.
- 29 percent of Americans aged 35-44 claim they are more likely to make a purchase if a company works with hackers for data protection, while most Americans 55 years or older (55%) claim it would not impact their purchasing decision.
Responsibility of cyber security
The survey found that the majority of Americans are looking to others to take responsibility for their security; however, the younger generations believe that they should take ownership for protecting their own data when making purchases online.
- 73 percent of U.S. adults believe retailers should be responsible for protecting consumer data, followed by credit payment companies at 64 percent.
- More than half (63%) of adults ranging in age from 25 to 43 admit they should take responsibility for protecting their own data when purchasing online, while 74 percent of adults 55 years old and older say retailers should be responsible for protecting data when purchasing online.
Should companies to give-in to ransomware demands?
Ransomware attacks on businesses are on the rise – from an attack every two minutes in January 2016, to every 40 seconds by October 2016, according to a Kaspersky Lab report; however, fewer Americans believe companies should pay a ransom to get data back.
- Nearly two in five U.S. adults do not expect companies to pay a ransom if they were hacked.
- When asked what types of data they would expect a business to pay a ransom for in an attempt to get the information back, 43 percent expect companies to pay for employee social security numbers, followed by customer banking details (40%) and employee banking details (39%).
- Women are more likely than men to expect a company to pay a ransom if the organization falls victim to this type of cyber-attack (63% of women vs. 58% of men).
“This study helps to highlight the ongoing confusion among Americans, both at home and while at work, regarding cybersecurity,” said Ryan Naraine, head of the U.S. Global Research and Analysis Team, Kaspersky Lab. “Cybersecurity is everyone’s responsibility, and it’s imperative that the security community, businesses and governments routinely work together to educate Americans on cyber threats. We need to ensure that consumers and organizations are not only educated on the risks, but also know the best solutions for safeguarding sensitive data from cybercriminals.”
“Every business online today is vulnerable to new risks that are inevitably being passed down to their customers,” said Alex Rice, CTO and founder of HackerOne. “The data from this report highlights a growing trend that consumers cast votes of confidence in the businesses that proactively work with hackers to keep their data safe from breaches.”
Almost all mobile apps connect to a mobile backend. In fact, without the connection to the mobile application servers, most would not function. In order to connect with its backend servers, a mobile app needs to store important network information such as API keys, server URLs and passwords, SSL and client certificates, and more. Without mobile data encryption, this data is vulnerable and exposed to a wide range of exploits originating from or targeted at the mobile application, which is itself one of the weakest links in any data security and protection model. Protecting mobile data shields users from harvesting, data theft, interception, deception and trickery, and abuse of normal system and application functions that weaponize mobile applications against the very people who need, use and build them.
Mobile Data Encryption Is Hard
Mobile developers are mostly focused on adding new feature functionality to their apps. They want to get their apps out for user acceptance testing (UAT) so that they can incorporate customer feedback. In addition, most app developers leave encryption and other security elements to the end of the development cycle, because encrypting mobile app data is hard to do. It takes time and specific skill, and doing so during the software development life cycle (SDLC) complicates the coding process.
For every type of data inside the app, developers must choose the right combinations among the many options for each component of the encryption model (encryption algorithms/protocols, cipher suites, key strength, key derivation technique, key protection, safeguarding, etc.). This model must be suited to fit the security and performance requirements of a highly variable set of data format characteristics, and a small error or miscue in the implementation can have drastic effects on the app’s performance or usability.
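As a concrete illustration, here is a minimal sketch of one such model for data at rest: PBKDF2 key derivation feeding AES-256-GCM (using the third-party `cryptography` package). The parameter choices, such as iteration count, key size, and nonce layout, are illustrative assumptions, not a recommendation for any particular app:

```python
# Sketch of one data-at-rest encryption model: PBKDF2 + AES-256-GCM.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # 200k iterations and a 32-byte key are example parameters.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000, dklen=32)

def encrypt(plaintext: bytes, passphrase: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(passphrase, salt)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext        # store salt/nonce alongside the data

def decrypt(blob: bytes, passphrase: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = derive_key(passphrase, salt)
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

Even in this tiny example there are half a dozen tunable decisions, which is exactly the permutation problem described next.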
The real complexity lies in the large number of possible permutations developers can choose from, coupled with the need for precision. Getting it wrong will impact both the performance of the app and the strength of the protection. And this is the struggle with manual encryption or with using 3rd-party libraries or SDKs.
Shift Left for Mobile Data Encryption
Developers don’t want to do this work twice, so they tend to wait until after the app is built to add encryption. The danger with waiting until the end of the development cycle is that all the important network information is stored in the app unencrypted. If a cybercriminal gets their hands on UAT versions of the app, the so-called “keys to the kingdom” are in the clear, ready to be stolen. And once these bad actors get their hands on that data, their work is halfway done, regardless of whether the app gets encrypted before the production release.
Mobile data encryption the DevSecOps way means shifting security left and including mobile app security and encryption as part of the SDLC. DevSecOps is all about automation, dropping solutions into the existing development flow and automating workflows. It answers the question: “How do you ensure that all data generated and stored in the app is automatically encrypted at each step of the development cycle, fully integrated into the existing build process, without a developer having to do any additional work?”
Plugging Encryption into Existing DevSecOps Processes
The term DevSecOps is used to describe a security focused, continuous delivery, software development life cycle. For mobile apps, DevSecOps means releasing an app securely, fast, and easy with the least amount of work.
Appdome plugs into existing DevSecOps processes to provide a fully integrated, automated, validation that the required security features are built inside each app. This security release management process includes several steps. It starts with BUILD. This is where you select all the security features that comprise your multi-layered security model, and build those protections into mobile apps, without coding. Appdome’s automation builds the security model directly into the iOS or Android app with the full context of how the application was built – bridging framework limitations, incompatibilities, or mismatches between the application, OS or any frameworks or libraries to deliver a cohesive outcome build-by-build.
This sets up the solid foundation going into the COMPLIANCE step, where passing a pen test to ensure compliance with the organization’s security requirements or external regulations might come into play depending on your industry specific requirements. Then, as part of the CERTIFICATION step, Appdome Certified Secure provides the documented proof that your app fulfills all the necessary compliance requirements.
Lastly, there’s the RELEASE of your app to production. Appdome automates each of the above steps. Without Appdome, most organizations lack a well-defined security release management process that captures and codifies the security release process for mobile applications and ensures that all of the many moving parts are in sync and constantly moving the process through. With Appdome, DevSecOps teams can add mobile data encryption at different moments of their existing processes.
Mobile Data Encryption, the DevSecOps Way
At any point in the build process developers can encrypt all data-at-rest, data-in-memory and data-in-transit – in just a few clicks, with no coding or SDK required. Developers can also encrypt all data stored in the code itself, such as strings.xml and shared preferences in Android apps, as well as their equivalent locations in iOS apps (like CFString, NSString, application resources, and app preferences such as NSUserDefaults). Appdome customers can set and lock down their approved encryption model as pre-defined mobile app security templates called Fusion Sets.
Appdome’s technology automates the process of implementing the encryption model that’s best suited for your app’s specific data types. This eliminates an enormous amount of painstaking trial-and-error work if you tried to do this with 3rd party libraries or SDKs, where the burden of the work falls squarely on the laps of mobile developers. With Appdome, you get a guaranteed and instant security outcome in a fraction of the time and cost, along with a certified and secure audit trail which documents each and every security component implemented in the actual builds.
Guaranteed Secure Outcome Without Tradeoffs
Not only do you get a guaranteed secure outcome, but Appdome gives you the flexibility to optimize the encryption model and fine tune your data protection model to achieve the optimal mix of performance and security, without the usual painful tradeoffs of SDK based or DIY encryption solutions. With Appdome, you don’t need to sacrifice the user experience or app performance in order to deliver the highest levels of protection for all mobile data and meet the demanding expectations of all key stakeholders.
Step Up to DevSecOps
Most DevOps teams today don’t have a workflow to do security release management. Appdome helps them step up to DevSecOps and offers a new security release management workflow that they can plug into their existing processes. The key benefit is that developers can encrypt their apps at every step of the build process, versus waiting until the very end. Appdome offers a guaranteed way to build, verify, certify and release mobile apps at scale in the fastest and most efficient manner possible. This allows developers to get their secured mobile apps into the hands of their end users at a rapid pace, while aligning all stakeholders to a common shared set of objectives and outcomes and eliminating the sources of friction commonly found between DevOps and Security teams.
Conversational AI is a type of artificial intelligence that facilitates human-like conversation between a human and a software system in real time.
SwissCognitive Guest Blogger: Utpal Chakraborty, Chief Digital Officer, Allied Digital Services Ltd., AI Researcher
Introduction to Conversational AI
Conversational AI is software that a person can talk to in real time, whether a chatbot, social messaging app, interactive agent, or smart device. These applications enable users to ask questions, get opinions, find support, or complete tasks remotely. Conversational systems are powered by an NLP engine (Natural Language Processing, a branch of AI that deals with linguistic and conversational cognitive science). They make use of large volumes of data processed with machine learning and natural language processing to imitate human interactions, recognizing speech and text inputs and translating their meanings across different languages. Businesses can set up automated chatbots or virtual assistants that communicate with humans via voice or text in the languages users prefer.
Conversational AI in Banking
Providing banking & financial products and services to customers through a variety of digital channels like chatbots, social media, and voice-based virtual assistants has become a necessity for banks and financial institutions today. It has also been forecast that the conversational mode of banking will take over all other forms, with customers preferring to access banking and financial services over the digital channels of their choice.
The Rise of Conversational AI in Banking Domain
An aspect of AI that is redefining customer engagement in banking is conversational banking. This has been fuelled by a rise in conversational AI solutions and natural language processing (NLP) technology that allows us to interact, transact, and collaborate using natural chat. Technically, the field of NLP has two sub-fields, NLU (Natural Language Understanding) and NLG (Natural Language Generation), which make NLP engines capable of understanding user queries and generating answers accordingly. Conversational AI is now also powered by machine learning and deep learning algorithms.
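To make the NLU/NLG split concrete, here is a toy sketch of the flow, with simple keyword matching standing in for the trained models a real platform would use. All intent names and replies here are made up for illustration:

```python
# Toy illustration of the NLU -> NLG split in a banking bot.
INTENTS = {
    "balance": ["balance", "how much", "funds"],
    "transfer": ["transfer", "send money", "pay"],
    "card_block": ["lost card", "block", "stolen"],
}

RESPONSES = {  # NLG step: template-based answer generation
    "balance": "Your current balance is {balance}.",
    "transfer": "Sure - who would you like to pay?",
    "card_block": "I've blocked your card. A replacement is on its way.",
    "fallback": "Sorry, I didn't get that. Could you rephrase?",
}

def classify(utterance: str) -> str:  # NLU step: map text to an intent
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "fallback"

def reply(utterance: str, balance: str = "$1,234.56") -> str:
    return RESPONSES[classify(utterance)].format(balance=balance)

print(reply("How much money do I have?"))  # -> balance intent
```

A production engine replaces the keyword matcher with trained intent classifiers and entity extractors, but the pipeline shape (understand, then generate) is the same.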
As we move from using visual interfaces to using conversational AI, a whole new model of engagement is made possible. Today, conversational interfaces represent one of the biggest shifts in banking user interfaces to date and are transforming how they acquire and retain customers and build their brand identity. The popularity of messaging apps, like Facebook Messenger, WhatsApp, Slack, Microsoft Teams or SMS, and the adoption of voice- activated assistants such as Amazon Alexa, Google Home, or Apple’s Siri are bringing conversations back into our digital banking experiences.
Conversational AI has huge potential for improving customer service and the overall customer experience, especially in the retail banking sector.
AI can help banks and financial institutions in many ways: helping customers decide where to spend their money based on their spending patterns and manage their finances better. Predictive AI helped the global financial industry transform an operationally intensive service delivery model into one that is smart and built around self-service solutions. A higher share of AI-based, self-service solutions helps customers get tasks done without the rigors of navigating a manual process, which improves customer satisfaction. It also helps banks and financial institutions save on human capital and reduce cost.
Banks in India and across the globe have the opportunity to go beyond nascent chatbots and shift to AI-enabled conversational banking platform to simultaneously improve customer experience, improve maintainability and reduce cost to serve.
Advantages of Conversational Banking
- Faster operation, lower waiting times
- More customer satisfaction, as customers are able to constantly keep track of their account balance, perform transactions with just a voice command or a few taps, and manage the information in their personal profiles and portfolios.
- Automation of repetitive tasks. The same kinds of repetitive questions can be answered even better by a piece of software in practically no time. Multilingual: customers can interact with conversational systems in any language.
- Round-the-clock consistent service. Conversational banking services are available 24/7.
- There is no need to stand in lines or wait for banking hours in order to carry out certain operations.
- For a bank, rendering products and services through these lightweight conversational channels is cost-effective from both an infrastructure and an implementation perspective: quicker to develop and deploy, low in maintenance cost, and cheaper when it comes to adding a new product or service to the conversational channels.
- Highly secure and compliant with the regulatory requirements of central banks. Although NLP engines still have limits in interpreting every type of conversation from the end customer in every vernacular, overall the advantages far outweigh the drawbacks: conversational AI in banking makes daily operations much easier, reduces cost, saves customers’ time, is available anytime-anywhere, and provides services efficiently to the customer.
Future of Conversational Banking
Banking that centers on conversational interfaces is going to lead in the near future. Apart from answering common questions, conversational AI will handle complex questions, perform complex transactions, understand customer sentiments, and facilitate highly personalized conversations.
Conversational banking can eventually be transformed into a proactive platform that notifies customers about low balances or premium dues and gives recommendations based on previous purchases and financial background. It will be more of a personal financial concierge that guides customers, considering not only an individual’s current financial status but also their liabilities and future plans, and provides suitable recommendations.
How Conversational Banking upholds Sustainability
Conversational banking is not only transforming traditional financial services; it is already playing a major role in revolutionizing sustainable finance and sustainability at large.
Firstly, using the power of cutting-edge technologies like NLP and machine learning, it has been able to extend the digital reach of financial services to segments of the population that did not have access through traditional channels. For example, financial inclusion is one area where conversational banking has been playing a major role, bringing a large section of the unbanked population under the financial umbrella using disruptive digital technologies. Also, rendering financial services wherever customers already are (social banking, WhatsApp banking, mobile banking, etc.) is one of the biggest moves towards sustainability and green finance. The second major impact of conversational banking is that it has made financial services affordable for all by bringing down costs with transformative technologies.
FinoAllied Conversational Banking Platform
FinoAllied is a conversational AI platform developed by Allied Digital Services Ltd., specifically designed for banking & finance. It is a digital fabric that wraps over a bank’s existing Enterprise Service Bus connected to the bank’s IT systems, so it is a plug-and-play, configurable architecture. The product is built on a cloud-based AI NLP engine; it is cloud-native and on a DevOps backbone.
It can be deployed in a hybrid/multi-cloud architecture. Currently 50+ banking & financial services are built in and can be rendered quickly via a bank’s different digital channels. The product leverages hyper-automation, wherein the bot can be rapidly customized and administered to deliver all standard and custom banking services over multiple digital & social channels: open-channel Digital Banking as a Service (DBaaS).
The pandemic has propelled the adoption of conversational banking in a big way, and banks have already been rendering more and more services through digital channels. Faster implementation timelines, low cost, easy maintainability, and high customer adoption rates are some of the major benefits of conversational AI in the banking arena.
Many small and mid-size banks have now been able to make these new channels a fundamental part of their wider customer engagement strategy. Banks have been able to maximize ROI and fast-track digital adoption of their products and services by leveraging the power of cutting-edge conversational AI technologies.
It is also forecast that conversational banking will grow rapidly in the coming years and take over all other conventional methods of banking. A platform like FinoAllied provides a perfect solution for banks looking for a ready-to-use, plug-and-play, subscription-based flexible model wherein the platform can be integrated with the banking ecosystem within three to four weeks.
Every morning we sit down at our computers and provide our credentials to the network: user name and password. Because it has become such a ubiquitous part of modern life, we have a user name and password for everything; we even have password management applications. This system of challenge and response is designed to prove to the system who you are, or authenticate you as a valid user. As discussed in a previous blog post, who you are and what you do may also determine your permissions within the system if Role Based Access Controls are in place.
Multi-factor authentication (MFA) is a method of more securely verifying the identity of a user of any given system. The multi-factor comes from requiring more than one piece of identifying information. In the challenge response example above, you know your user name and password. MFA requires two or more pieces of information from the following categories:
- Knowledge: something you know (user names, passwords, PIN)
- Possession: something you have (secure token, bank card, cell phone)
- Inherence: something you are (fingerprint, retina, biometric)
A subset of MFA is two-factor authentication (2FA), which is a widely implemented version. Originally patented in the early 1980s for use with automated teller machines, it works because customers need their bank card and they need to know the PIN (something they have and something they know). Two-factor authentication has become extremely common, especially in the Internet and ‘app’ space. A common method of 2FA is when providers text a code to your mobile phone after a successful challenge and response. Something you know is your user name and password; something you have is your mobile phone.
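Authenticator apps are another common possession factor; most implement time-based one-time passwords (TOTP, RFC 6238). Below is a minimal sketch of how such a code is computed — the Base32 secret is a made-up example, and real deployments should use a vetted library rather than hand-rolled crypto:

```python
# Sketch of RFC 6238 TOTP: the 6-digit code an authenticator app shows.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```

Because both the server and your phone derive the same code from a shared secret and the current time, no code ever travels over SMS, which makes this factor harder to intercept.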
Most service providers support 2FA but you may need to request that it be enabled for your account. You can check if your provider supports 2FA by checking https://twofactorauth.org/.
Traditionally, protecting valuable IT resources like user data and intellectual property was heavily based on perimeter security strategies. The IT industry dominated the operational business world with firewalls and other network-based security elements and tools, mainly used to inspect, authenticate, and validate users passing in and out of the network perimeter.
Fast forward to the modern day: businesses and organizations have been forced to shut down their brick-and-mortar operations and fully embrace a forced digitalization. As this increased remote access to work, creating a requirement for a smoother IT structure, it turns out Zero Trust has been around all along. Given the amount of data processing involved, the move to hybrid cloud infrastructure was inevitable.
According to a Deloitte survey poll, almost 40% of IT security professionals say that the lockdown allowed their organizations to speed up their efforts to adopt Zero Trust. After all, it was quite evident that depending on network perimeter security structures is no longer sufficient. As organizations continued to experience security threats, disruptions in business operations like supply chain and disaster recovery, and recurrent cyberattacks, it was high time we found a structural hybrid between the traditional open and closed IT ecosystems.
The Pre-Pandemic IT Ecosystem
The pre-pandemic IT environment was a relatively closed security structure designed by the IT department to serve multiple devices in a single working environment. Its zones-of-trust model let users authenticate strongly at the perimeter through VPNs. On the individual level, however, internal networking for both users and services required only minimal authentication, for instance usernames and passwords.
Relatively Open IT Ecosystem
In comparison to the closed security structure traditionally in place before the Covid-19 pandemic, the sudden general lockdown forced changes to the ecosystem. The circumstances required a quick fix, so a relatively open IT environment was created without relying on VPNs and with minimal perimeter layers in place. Access to applications was handled mainly through services like Google Authenticator or other cryptographic mechanisms.
The Hybrid of the Two – Zero Trust
Soon enough, when IT technicians and experts worked out a way to merge the relatively open and the closed structural IT ecosystems, they arrived at Zero Trust. Zero Trust has been around for quite some time but has recently gained recognition as the new IT buzzword among professionals and organizations. Basically, Zero Trust is a hybrid security framework containing elements from both relatively open and closed environments. It requires all users, whether inside or outside the organization’s network, to be authorized, validated, and continuously re-authenticated for security configuration purposes. These checks are a mandated part of every step taken before being granted, and allowed to keep, access to the company’s data and applications.
Why Is Zero Trust The Answer?
The following are some of the main reasons why Zero Trust has proven to be the choice for organizations that were forcefully digitalized by the Covid-19 lockdown and have now found ways to run business operations smoothly.
1. Modern Technology
Zero Trust is based on the assumption that no traditional network edge limits security access. Under Zero Trust, networks can be in the cloud, localized, or a hybrid of these, with users present in any location. That is what makes Zero Trust the modern technology framework that can easily secure infrastructure and data against the challenges presented by doing business remotely.
2. Much Needed Protection
Zero Trust is basically a strategic, preemptive security initiative that found its moment with the Covid-19 lockdown. The forced digitalization of businesses had put security access controls into overdrive. Had Zero Trust not successfully mitigated data breaches by discarding the idea of implicitly trusting the organization’s network edge, we would not have come around to working normally while everything else went haywire.
3. Architectural Framework
In a Zero Trust architecture, you identify a protect surface made up of the network’s most critical data, applications, assets, and services (DAAS). The protect surface is unique to each organization and contains some of its most critical and valuable operations. This means that with almost any platform besides Zero Trust, we could be putting the company at considerable risk.
4. Expansion of Remote Access
Zero Trust has allowed businesses to expand their remote working potential so much that many organizations don’t want to look back to their traditional ways of operating. Because Zero Trust can run many business applications while opening up access at exceptional speed, expanding remote access to business networks and data has become easier than ever.
5. Minimum Requirements
One of the best parts about adopting Zero Trust is that it works with the generic minimum requirements of an IT ecosystem: data, devices, user identity, network identity, analytics, automation, visibility, workloads, orchestration, network, and endpoint. Basically, the broader spectrum of general IT security capabilities comes into account, but much more advanced and tailored to each organization’s user and data size.
Summary of Zero Trust Better Than Lock Down
Implementing Zero Trust might have been one of the better things to come out of the Covid-19 lockdown. If executed rightfully, Zero Trust allows organizations to speed up their business processes while almost guaranteeing their connection is highly authorized through one of the best security systems out there. Based upon these points, we answer is zero trust better than lock down?
Further blogs within this Chrome And VS Extensions category. | <urn:uuid:6f56e481-10ac-40a5-86b3-58968eb1805b> | CC-MAIN-2022-40 | https://cloudcomputingtechnologies.com/why-is-zero-trust-better-than-lock-down/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00185.warc.gz | en | 0.947629 | 1,150 | 2.703125 | 3 |
A Comprehensive Guide to LAN Cable
A LAN cable is used almost everywhere on this planet. It plugs into ethernet ports on various network devices to help create home or business networks to share files, information, and data between each other and establish internet connections. This blog will give you a comprehensive guide to help you understand more about a LAN cable.
What Is LAN and LAN Cable?
LAN stands for Local Area Network. A LAN is a private network consisting of routers, cables, access points, switches, and other units that let devices link to web servers, internal servers, and other LANs using Wide Area Networks (WAN). The most straightforward technique you can use to set up a LAN is by using an ethernet cable.
So, what is a LAN cable? It is a networking cable that connects different devices. For example, if you have a printer that connects to your router with a cable, that cable is a LAN cable. A LAN cable helps in the connection of computers and hardware to form a LAN and is best for usage in small distances.
Is LAN cable the same as Ethernet cable?
No, it is not. An Ethernet cable is basically another type of LAN cable. Explained below are the types of LAN cables.
Types of LAN cable
There are three main types of LAN cables:
copper twisted pair
fiber optic cable
Each of the above three LAN cables has different designs and functions, which power your network connections. Get a thorough description and illustration of all these three LAN cables by reading this great article: Fiber Optic Cable vs Twisted Pair Cable vs Coaxial Cable.
You will most likely be using coaxial cables for connection from your Internet Service Provider to your modems. The coaxial cable powers that connection, which provides your network with internet access. Many people use coaxial cables because their shielded design lets the center conductor quickly transmit data while being protected from damage and interruptions.
Copper twisted pair is the ordinary networking cable used to connect computers, routers, switches, and IP cameras. Two insulated copper wires are twisted around one another to minimize crosstalk or electromagnetic induction between pairs of wires. For specific business locations, the networking cable is enclosed inside a shield that works as ground. This is referred to as Shielded Twisted Pair (STP). The ordinary wire to your home is unshielded Twisted Pair (UTP).
The twisted-pair Ethernet cable comes in different categories including:
Fiber optic cables are faster and more efficient at transmitting network signals. Think about the speed of light, and you will understand how they operate. These cables are faster, more reliable, and carry larger bandwidth than the copper wire can handle. Fiber optic cables enable end-to-end connections, where you don't share signals with other users on the network. This limits internet speed slowdowns during peak usage hours. Feel free to check out the Fiber Optic Cable Types: Single Mode vs Multimode Fiber Cable.
Fiber optic cables usually have different connector types, some of which range from:
SC to LC
SC to SC
ST to LC
ST to SC
How to Choose a LAN Cable?
Always choose a LAN cable with the performance and range you need. What will you need to consider before choosing?
Start by checking the speed of your home/office network. If your internet runs at 1Gbps, an old LAN cable is a pitfall. If you have a slow internet connection lying between 10 to 20 megabits per second, you can use a Cat 5 cable or a newer model. Cat means "Category." The following number after Cat represents a specification version supported by the cable.
Establish your desirable transmission speeds. Longer LAN cables have slower transmission speeds than short ones. The 100-meter rating only works for large professional projects.
Opt for robust LAN cables. Today, most routers are faster, more capable, and facilitate faster network speeds. This is why you need to choose robust LAN cables, which can promote faster network speeds and future-proof your network setup.
Is LAN cable faster than Wi-Fi?
Yes, a plugged-in LAN cable is faster than Wi-Fi. Even though Wi-Fi speeds today have significantly increased, courtesy of standards like 802.11ac and 802.11n, which provide maximum speeds of 866.7Mb/s and 150 Mb/s, respectively, the use of Ethernet through a LAN cable to access the internet is still better and faster than Wi-Fi.
Alternatively, a LAN cable can provide up to 10Gb/s if you have a Cat 6 cable. The exact maximum speed of your LAN cable depends on the type of LAN cable you are using. But still, even the commonly used Cat 5e cable supports up to 1 Gb/s. Unlike Wi-Fi, this LAN cable speed is consistent.
The primary determinant of your internet activities should be based on the internet speeds offered by your ISP. HOWEVER, a LAN cable is unique as it will affect the speed between devices on your network. For instance, if you want to transfer files faster between two computers in your house, a LAN cable is faster than Wi-Fi. Since your internet connection type is not involved in transferring files, it is upon the maximum speeds that your local network hardware can provide.
All in all, a LAN cable is used to connect your networking devices via wireless routers or other network switches. So, choose the best LAN cable type that can meet your needs and provide value for your hard-earned money. | <urn:uuid:91e63e53-1d7e-44dc-9fdb-c9c1c384a0e9> | CC-MAIN-2022-40 | https://community.fs.com/blog/a-comprehensive-guide-to-lan-cable.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00185.warc.gz | en | 0.907762 | 1,196 | 3.328125 | 3 |
GPT-3's ability to write code, letters even novels, often with the input of one or a few words, is downright spooky. But there are two things you should know:
1, It has no idea of the meaning of what you say (type).
2. It has no idea of the meaning of what it outputs.
It's all math, trained neural networks.
Before we dig into GPT-3, a quick review of NLP is in order:
By now, everyone is familiar with conversational NLP like Siri, Alexa, or Cortana. This is Natural Language Processing.
There are a few sub-disciplines in NLP, such as:
- Optical Character Recognition - Converting written or printed text into data.
- Speech Recognition - Converting spoken words into data or commands to be followed.
- Machine Translation - Converting your spoken or written language into another person's language and vice versa.
- Natural Language Generation - The machine producing meaningful speech in your language.
- Sentiment Analysis - Determining the emotions expressed by language.
For NLP-enhanced business analytics, the conversation may be, "Download the latest pricing analysis to my phone." The critical thing to remember is that the computer does not understand what you are saying. It can process and answer but make no mistake -- it's all done with math.
Organizations that offer NLP capabilities don't have to start from scratch. There are open-source Python libraries that software can integrate with, such as spaCy, textacy, or neuralcrret, and a few in other languages such as CoreNLP in Java. John Snow Labs developed and maintained an open-source NLP library, Spark NLP.
The steps a natural language processor goes through to satisfy your question:
- Sentence segmentation, break the words apart.
- Word Tokenization: words = tokens.
- Predict the part of speech for each token. Feed the token with some surrounding tokens for context into a trained part of speech classifier.
- Text Lemmatization: know the base form of every word and its inflections; finding the most basic form of every word.
- Identify "stop" words (such as a, an, the, …) and filter them out.
- Dependency parsing.
- Find noun phrases: groups of words that talk about the same things.
- NER (Named Entity Recognition): Detect and label nouns to real-world concepts. Names of people, companies, geolocation, dates and times, amounts of money, names of events, etc.
- Coreference resolution: attach meaning to words like pronouns, or it.
The above steps are employed to understand your written, typed, spoken, or even machine-generated request. The underlying implementation of the technology is machine learning, typically various kinds of neural networks.
A little background on NLP models
Google's developed the BERT model that was trained with 340 million parameters from millions of books and Wikipedia. It was designed to handle straightforward Question and Answer queries. The accuracy of the model was pretty good. Microsoft and Facebook together developed BERT-based models, RoBERTa and codeBERT. The industry concluded that larger natural language models improved accuracy. Microsoft, on their own developed Project Turing, releasing a product called the Turing Natural Language Generation (T-NLG), at 17 billion parameters, which was the most massive model ever trained in early 2020.
About six months later, OpenAI debuted their beta version of GPT-3, based on an unparalleled 175 billion parameters. To put that in perspective, 175 billion is almost all of the internet.
For any of you, like me, who have always found SQL to be tedious and nuanced (I never get it right on the first try), GPT-3 can accept as input "Display the total number of employees in the HR department," and instantly construct a well-formed SQL statement, "Select count(*) from Emp where dept_id = ‘HR'.
Don't have time to write an email? GPT-3 can help:
"Compose an email from Neil to Mark at abchotels to make a reservation for a suite at Menlo Park location for three nights with my usual preferences"
To Mark at abc hotels, from Neil Raden, subject: request for reservation at Menlo Park location.
Greetings Mark. I am writing to request a reservation for a suite at your Menlo Park location for three nights. I am currently in the Bay Area on business and would like to make the reservation for April 1, 2 and 3. I would also like to request a room with a view of the Golden Gate Bridge (OK, I made this up. You can't see the GGB from Menlo Park, I don't think). Please let me know if you have availability, and I will be happy to provide my credit card information over the phone. Thank you.
The training data for GPT-3 are controversial. A great deal of it is available openly, such as Twitter "tweets," a repository notorious for its violent, racist and misogynist language. Though the model performs to a greater extent, researchers fear it can heavily threaten disinformation, where bad actors can use it to create an endless amount of fake news, spread misinformation etc.
Here is the tweet by the Sam Altman, the CEO of OpenAI, creators of GPT-3:
The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.
— Sam Altman (@sama) July 19, 2020
OpenAI is currently training a GPT-4, rumored to have ONE HUNDRED TRILLION parameters.
This approach has its critics. Stuart Russell, a computer science professor at Berkeley and AI pioneer, argues that "focusing on raw computing power misses the point entirely […] We don't know how to make a machine really intelligent - even if it were the size of the universe."
There is another element: GPT-3 costs around $4.6 million in computing. That would put a price of $8.6 billion for the computer to train GPT-4. There is some pushback that these monstrous models are out of control.
There is another issue, too. Sam Altmman believes that each iteration of GPT will get closer to the inevitable AGI (Artificial General Intelligence), but there is as credible fallacy. "Why AI is harder than we think" - that's the title of a recent paper by Melanie Mitchell at the Santa Fe Institute. Her contention is that the prevailing attitude, and definitely one at OpenAI, is that narrow intelligence is on a continuum with general intelligence. Mitchell, however, argues that advances in narrow AI aren't "first steps" toward AGI (Artificial General Intelligence) because they still lack common-sense knowledge.
The implication is that the path to truly thinking machines is not through ever-more-enormous computers, but better theories leading to better, more economical algorithms. That fits perfectly with my training in topology, where I had a professor who would not accept a proof longer than two pages. | <urn:uuid:53b7753c-9098-40c5-8337-6afaae440164> | CC-MAIN-2022-40 | https://diginomica.com/gpt-3-demystified-spooky-good-ai-or-overrated-text-generator | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00185.warc.gz | en | 0.94741 | 1,541 | 3.203125 | 3 |
in this example,R1 received 220.127.116.11/24 from R1 and the result in R1 bgp table
18.104.22.168/24 next hop 22.214.171.124 but when we`re looking at 126.96.36.199/24 ,that is not the same r> 188.8.131.52/24 next hop 184.108.40.206 why both routes into bgp table are not the same if both routes are the same case ?
Why does 220.127.116.11/24 have ‘r’ but not 18.104.22.168/24?
why doesn’t 22.214.171.124/24 have a ‘r’ on the line that has the ‘>’? The line where the source is 0.0.0.0 .
How come it doesn’t have a ‘r’? Instead it has a ‘*’
why 126.96.36.199/24 does not has r ?
when i configure the C route for 188.8.131.52/24 and it receive a bgp route that means C route preferred than the bgp route and the bgp update should have r
but when i configured the static route it has ad=1 and when it get a bgp update the static route for 184.108.40.206/24 with ad=1 will better than the bgp update and it has r
what is the difference between c and s ?
that is the question
I think the difference here has to do with the BGP rib and how the best path is chosen. In the case of 220.127.116.11 you aren’t injecting that prefix into BGP on R1 so it’s essentially telling you that your best path is via a different source i.e., static route which is not in
the BGP RIB. Where as for 18.104.22.168 you injected that prefix into BGP and you are receiving that prefix from R2 via BGP. The best path is the local BGP route you injected into BGP in that case.
ok sir,but why 22.214.171.124/24 has r and 126.96.36.199/24 don`t ?
Because in the case of 188.8.131.52 you don’t have a valid route in the BGP RIB, whereas, for 184.108.40.206 you do, i.e. 0.0.0.0. If you where to inject the static route for 220.127.116.11 into BGP you should see the same, however, it wouldn’t make practical sense to do that in the design you have.
Because in the case of 18.104.22.168 you don’t have a valid route in the BGP RIB, whereas, for 22.214.171.124 you do
what is the difference between having a route in the global RIB vs having a route in the BGP RIB?
in my case, i have a valid static route in the global RIB on R1 for 126.96.36.199/24,but i have a C route for 188.8.131.52/24
what is the difference here ?
for 184.108.40.206/24 , R1 received this route via B route and should inject it into the global RIB which is something that doesn`t happened
maybe you want to say the route in the BGP RIB sohuld be in the global RIB ? | <urn:uuid:8750adb1-2b13-4cf9-b92b-9fcf8c36174b> | CC-MAIN-2022-40 | https://community.ine.com/t/bgp-rib-failure/2828 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00385.warc.gz | en | 0.92291 | 782 | 2.703125 | 3 |
Alert! "Invisible" Doors OpenedFocus on what is being more exploited
You have one or several digital services that can be reached from anywhere over the Internet. You might have as well one or more wireless devices allowing employees to access corporate services and visitors to access resources on the Internet. These are just two examples of how information technology enables organizations to run their operations broadly. In either case, there is something essential to do daily: checking whether the software components allowing users to do their work are updated and free of vulnerabilities.
Recently, the alert AA21-209A was published on the Cybersecurity & Infrastructure Security Agency (CISA) website, coauthored by this organization and other American, Australian, and British agencies. The message is simple: there are a bunch of known vulnerabilities that are being routinely exploited.
In that document, you can find the details of these vulnerabilities affecting the software of many world-renowned vendors like Microsoft, VMware, and Fortinet, to name a few. Want to prevent a hack or data breach? Have a look at the list, and make sure you have addressed these CVEs. But, don’t stop there: make sure your organization has a process to continuously check whether your software, especially the components that can be reached from the Internet or by visitors or intruders in your corporate network, is free from known vulnerabilities.
Types of these often-exploited vulnerabilities
In our work with many organizations, we routinely find software components that are vulnerable to known exploits. We always provide our customers with the information to address these weaknesses over the systems they entrust us for hacking. This is the main reason we wrote this piece: aligned with the alert, we have evidence that organizations may be more exposed than they think with the outdated software they might have, but this is something they fail to address quickly.
Figure 1. Top Routinely Exploited CVEs in 2020 (Source: Alert AA21-209A - Top Routinely Exploited Vulnerabilities).
What are these exposures? Let’s make a summary of what is in the alert document.
Path traversal (see
Fluid AttacksDocumentation: Path Traversal). In short, a software component can be hacked if it allows accessing files that are not supposed to be accessed. By using strings like
../(a string used as a command to navigate across folders in an operative system), attackers can bypass the boundaries of the software and gain access to sensitive information or functionalities.
Remote code execution (see
Fluid AttacksDocumentation: RCE). This weakness allows, literally, the execution of code remotely. If a software component is vulnerable to this sort of flaw, unexpected actions can be triggered by an internal or external attacker.
Elevation of privileges (see
Fluid AttacksDocumentation: privilege escalation). Usually, a wrong configuration enables users to assign themselves somehow rights they shouldn’t have. For instance, a bank sales representative might leverage this flaw to give more resources or authorizations than they should in their role.
Think of any of these weaknesses; they could be present in your IT assets at some point. It might take just one of these to gain access to a corporate network and, for instance, silently leak confidential data. Also, it is not very difficult to think about a ransomware attack.
Closing these "invisible" doors could make a significant contribution in managing operational and organizational risk. The steps that can be taken to prevent the abuse of IT assets from these vulnerabilities could save a lot of effort and money for organizations. Furthermore, these steps would preserve the goodwill of their brands.
Cybersecurity is a process and should be layered
Why is it essential to have continuous checks? Because cybersecurity is not an end, and threats are evolving so fast that everything is becoming more digital and software-mediated. Have a look at the MIT Technology Review article "2021 has broken the record for zero-day hacking attacks." These numbers are worrisome, and we should ask ourselves how many vulnerabilities are out there silently harming. Organizations need to focus on what they can have control of and do it quickly.
Also, organizations must bear in mind that cybersecurity is not concentrated in one or two places. Quite the opposite: cybersecurity is distributed or layered. Although we have emphasized outdated software here, other IT and business environment components should also be addressed as attack surfaces. For example, there are cases in which one information resource is, by omission, published on the Internet, and only that allows an attacker to gain access to a supposedly protected network. Layered cybersecurity is critical to ensure availability is preserved, as well as integrity and confidentiality. Companies must check whether different layers of protection are fully working.
Companies can have comprehensive support in this endeavor from other expert organizations across all of their IT assets or cover at least the most critical ones. This is usually more efficient and desirable, as the independence of the third party ensures the disclosure of all flaws for the betterment of the organization.
What can Fluid Attacks do for you?
Fluid Attacks focuses on attacking systems
continuously for proactive
defense. Our tests are performed constantly, considering the changes
made in the source code, the deployed applications, and the
We aim to find all vulnerabilities that exist across the software development lifecycle. Yes, we can start checking for vulnerabilities right away when you have just begun developing your software. We employ several techniques like static code review, looking for coding practices that inject vulnerabilities, and dynamic penetration testing over deployed applications and infrastructure. In this last scenario, the interaction between infrastructure and application might lead to other vulnerabilities not visible in the source code. Thus, it is a comprehensive approach.
Organizations of all sizes can benefit from our approach by precisely doing what we suggest in this article: closing the "invisible doors." Our mission is to point out to our customers where these doors are and provide them with the information to close them effectively. We also run tests to check whether fixes have been successful.
We hope you have enjoyed this post. Let us know what you think, and reach out to us if you want to know more about our solutions.
Ready to try Continuous Hacking?
Discover the benefits of our comprehensive Continuous Hacking solution, which hundreds of organizations are already enjoying. | <urn:uuid:8f3574b2-b47f-4fc3-87b0-96ea1d85c70b> | CC-MAIN-2022-40 | https://fluidattacks.com/blog/close-invisible-doors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00385.warc.gz | en | 0.94619 | 1,299 | 2.515625 | 3 |
6 Incident Response Steps: What to Do When Under Attack
What is the Incident Response (IR) Process?
When a security incident occurs, every second matters. Malware infections rapidly spread, ransomware can cause catastrophic damage, and compromised accounts can be used for privilege escalation, giving attackers access to more sensitive assets.
Incident response (IR) is a structured methodology for handling security incidents, breaches, and cyber threats. A well-defined incident response plan (IRP) allows you to effectively identify, minimize the damage from, and reduce the cost of a cyberattack, while finding and fixing the cause, so that you can prevent future attacks.
During a cybersecurity incident, security teams face many unknowns and must immediately focus on the critical tasks at hand. Having pre-planned incident response steps during a security incident can prevent many unnecessary business impacts and reputational damage.
Read on to learn a six-step process that can help your incident responders take action faster and more effectively when the alarm goes off.
In this article:
- What do compliance standards require in case of a cybersecurity incident?
- What are the NIST and SANS incident response methodologies?
- What are the 6 steps of incident response?
What do compliance standards require in case of a cybersecurity incident?
When a cybersecurity incident is confirmed by security analysts, it is important to inform relevant parties as soon as possible. There are specific requirements in common compliance standards:
- Privacy laws such as GDPR and California’s CCPA require public notification, and in some cases, personal notification to data subjects, in the event of a data breach.
- PCI DSS compliance specifies the steps an incident response plan should contain. In particular, Requirement 12 specifies your incident response plan should assign certain employees to be available 24/7, properly train incident response staff, and set up alerts.
Depending on the severity of the breach, legal, press and executive management should be involved. In many cases, other departments such as customer service, finance, or IT need to take immediate action. Your incident response plan should clearly state who should be informed, depending on the type and severity of the breach. The plan should include full contact details and how to communicate with each relevant party, to save time in the aftermath of an attack.
What are the NIST and SANS incident response methodologies?
The two most commonly used incident response frameworks are the National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide (SP 800-61) and the SANS institute Incident Handler’s Handbook.
The table below shows incident response steps according to each of these methodologies.
|NIST Incident Response Steps||SANS Incident Response Steps|
|1. Preparation||Step 1. Preparation|
|2. Detection and Analysis||Step 2. Identification|
|3. Containment, Eradication, and Recovery||Step 3. Containment|
|Step 4. Eradication|
|Step 5. Recovery|
|4. Post-Incident Activity||Step 6. Lessons Learned|
The incident response steps in each of these methodologies are similar, but there are subtle differences. The key difference is in step 3 of the NIST process, which groups together containment, eradication, and recovery into one step — meaning that these activities should be performed together. By contrast, in the SANS process these are distinct steps which should be followed one after the other.
The image below illustrates the NIST process and the flow between the four process steps.
What are the 6 steps of incident response?
The first priority when implementing incident response cybersecurity is to prepare in advance by putting a concrete IR plan in place. Your incident response methodology should be battle-tested before a significant attack or data breach occurs.
Building on the NIST incident response phases, here are specific incident response steps to take once a critical security event has been detected:
1. Assemble your team
It’s critical to have the right people with the right skills, along with associated tribal knowledge. Appoint a team leader who will have overall responsibility for responding to the incident. This person should have a direct line of communication with management so that important decisions—such as taking key systems offline if necessary—can be made quickly.
In smaller organizations, or where a threat isn’t severe, your SOC team or managed security consultants may be sufficient to handle an incident. But for the more serious incidents, you should include other relevant areas of the company such as corporate communications and human resources.
If you have built a Security Incident Response Team (CSIRT), now is the time to activate your team, bringing in the entire range of pre-designated technical and non-technical specialists.
If a breach could result in litigation, or requires public notification and remediation, you should notify your legal department immediately.
2. Detect and ascertain the source.
The IR team you’ve assembled should first work to identify the cause of the breach, and then ensure that it’s contained.
Security teams will become aware that an incident is occurring or has occurred from a very wide variety of indicators, including:
- Users, system administrators, network administrators, security staff, and others from within your organization reporting signs of a security incident
- SIEMs or other security products generating alerts based on analysis of log data
- File integrity checking software, using hashing algorithms to detect when important files have been altered
- Anti-malware programs
- Logs (including audit-related data), which should be systematically reviewed to look at anomalous and suspicious activity with:
- External storage
- Real-time memory
- Network devices
- Operating systems
- Cloud services
3. Contain and recover
A security incident is analogous to a forest fire. Once you’ve detected an incident and its source, you need to contain the damage. This may involve disabling network access for computers known to be infected by viruses or other malware (so they can be quarantined) and installing security patches to resolve malware issues or network vulnerabilities. You may also need to reset passwords for users with accounts that were breached, or block accounts of insiders that may have caused the incident. Additionally, your team should back up all affected systems to preserve their current state for later forensics.
Next, move to any needed service restoration, which includes two critical steps:
- Perform system/network validation and testing to certify all systems as operational.
- Recertify any component that was compromised as both operational and secure.
Ensure your long-term containment strategy includes not only returning all systems to production to allow for standard business operation, but also locking down or purging user accounts and backdoors that enabled the intrusion.
4. Assess damage and severity
Until the smoke clears it can be difficult to grasp the severity of an incident and the extent of damage it has caused. For example, did it result from an external attack on servers that could shut down critical business components such as e-commerce or reservation systems? Or, for example, did a web application layer intrusion perform a SQL Injection attack to execute malicious SQL statements on a web application’s database or potentially use a web server as a pathway to steal data from or control critical backend systems? If critical systems are involved, escalate the incident and activate your CSIRT or response team immediately.
In general, look at the cause of the incident. In cases where there was a successful external attacker or malicious insider, consider the event as more severe and respond accordingly. At the right time, review the pros and cons of launching a full-fledged cyber attribution investigation.
5. Begin the notification process
A data breach is a security incident in which sensitive, protected or confidential data is copied, transmitted, viewed, stolen or used by an individual unauthorized person. Privacy laws such as GDPR and California’s CCPA require public notification in the event of such a data breach. Notify affected parties so they can protect themselves from identity theft or other fallout from the disclosure of confidential personal or financial data. See Exabeam’s blog on how to create a breach notification letter in advance of a security incident.
6. Take actions to prevent the same type of incident in the future
Once a security incident has been stabilized, examine lessons learned to prevent recurrences of similar incidents. This might include patching server vulnerabilities, training employees on how to avoid phishing scams, or rolling out technologies to better monitor insider threats. Fixing security flaws or vulnerabilities found during your post-incident activities is a given.
Also, review lessons learned from the incident and implement appropriate changes to your security policies with training for staff and employees. For example, if the attack resulted from an unwitting employee opening an Excel file as an email attachment, implement a company-wide policy and training on how to recognize and respond to a phishing email.
Lastly, update your security incident response plan to reflect all of these preventative measures.
An incident response methodology enables organizations to define response countermeasures in advance. There is a wide range of approaches to IR. The majority of security professionals agree with the six incident response steps recommended by NIST, including preparation, detection and analysis, containment, eradication, recovery, and post-incident audits.
When it comes to preparation, many organizations leverage a combination of assessment checklists, detailed incident response plans, summarized and actionable incident response playbooks, as well as policies that can automate some of the processes. While well-planned, an incident response methodology should remain flexible, allowing for continuous improvement.
Want to learn more about Incident Response?
Have a look at these articles:
- The Three Elements of Incident Response: Plan, Team, and Tools
- The Complete Guide to CSIRT Organization: How to Build an Incident Response Team
- 10 Best Practices for Creating an Effective Computer Security Incident Response Team (CSIRT)
- How to Quickly Deploy an Effective Incident Response Policy
- Incident Response Plan 101: How to Build One, Templates and Examples
- IT Security: What You Should Know
- Beat Cyber Threats with Security Automation
- IPS Security: How Active Security Saves Time and Stops Attacks in their Tracks
Log4j by Another Name. It’s Coming; How Can You Keep Pace?
What Can We Learn From the Lapsus$ Attacks?
Exabeam News Wrap-up – Week of September 12, 2022
Exabeam News Wrap-up – Week of September 5, 2022
Subscribe today and we'll send our latest blog posts right to your inbox, so you can stay ahead of the cybercriminals and defend your organization.
See a world-class SIEM solution in action
Most reported breaches involved lost or stolen credentials. How can you keep pace?
Exabeam delivers SOC teams industry-leading analytics, patented anomaly detection, and Smart Timelines to help teams pinpoint the actions that lead to exploits.
Whether you need a SIEM replacement, a legacy SIEM modernization with XDR, Exabeam offers advanced, modular, and cloud-delivered TDIR.
Get a demo today! | <urn:uuid:50981a71-8b05-4c8c-91ab-0cac04b400cc> | CC-MAIN-2022-40 | https://www.exabeam.com/incident-response/steps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00385.warc.gz | en | 0.916069 | 2,318 | 2.96875 | 3 |
In a previous post I talked about a library that makes it super easy to communicate with web services. In this post, I’ll step back a little bit, explain what exactly a web service is, and go through the process of consuming data from them.
Understanding web services
By its name, a Web service can be any service that is accessed through a web browser, such as Dropbox, Evernote or Gmail. However in software terms, a web service is a program that you can interact with through a network call. In modern web applications, developers have implemented groups of web services called APIs (Application Programming Interface). The concept of an API is not new and it’s not exclusive to web or mobile programming, but I’ll only talk about its application to web and mobile technology.
Ten years ago the idea of having data available for consumption by third-parties was not common, in fact it was avoided at all costs. Companies adopted the approach of “my data is my data, and it’s my competitive advantage”. Today, we have seen that providing access to data is an essential part of companies like Facebook and Twitter. For them, having an API is a way of expanding the reach of their services, increasing engagement, innovation and adoption. Even governments have embraced the use of APIs in order to speed up the development of web and mobile apps for their citizens.
Why is JSON so important
HTTP GET and POST
The two most important methods for HTTP communications are GET and POST, GET being the default. You perform a GET every time you call a URL. Although you can send data through a GET call, it’s usually data that will be used as contextual information to retrieve more data. You perform a POST every time you send data to a web service, such as filling out a form. The POST method encodes the data in a different way, providing more security and allowing you to send more data in each request.
With the XHR library described earlier, you perform a HTTP GET call with a single line.
xhr.get(url, onSuccessCallback, onErrorCallback, options);
The callbacks are simply functions you send to the library to be used later on, in this case when the process was successful or when there was an error. Note that you need to define these functions before calling the GET or POST method.
Let’s examine the onSuccessCallback.
The POST server website will confirm that it indeed received the data and will give you a URL to view the data received. Remember, this is a public website so only use it to send test data.
Once you know that your data is being properly received by the “post server”, then you know that your app is working properly, so not switch the post call to the actual one and you’re set.
Now that you can communicate with web services, the next step is to do something useful with the data. By combining Appcelerator Alloy with this very basic example, you can have a native interface to read tweets. You can see all the required files here: https://gist.github.com/ricardoalcocer/5190529. Here’s what it looks like.
In a final post of this series, I’ll explain how to create your own web services using Appcelerator Cloud Services. Stay tuned. | <urn:uuid:08cb4e7a-1716-4a26-af67-601c4dc88bc1> | CC-MAIN-2022-40 | https://blog.axway.com/learning-center/software-development/api-development/consuming-data-from-web-services | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00385.warc.gz | en | 0.907789 | 1,531 | 3.21875 | 3 |
Is ignorance bliss? Or would better understanding of technology alleviate our privacy concerns?
Are we over-sharing our data?
Facebook’s study about emotional contagion in June of this year raised questions about privacy, consent and how much data we’re sharing online without even knowing.
The data scientists behind the study deliberately manipulated the news feeds of almost 700,000 Facebook users for a week in January 2012 to see whether happy and sad content on people’s feeds would affect their own posts. Some people were shown more positive content whilst others were shown sad posts. When the week was over, results were conclusive. Facebook newsfeed does influence our moods.
What part do algorithms play?
The Facebook study’s co-author, Cornell professor Jeff Hancock, asks his new students to complete an experiment at the beginning of the semester. The experiment starts by asking the students to Google exactly the same search term, then turn to the person next to them and compare the results on their screens. Inevitably, the students find that instead of each getting the same search results as their neighbour, they can be wildly different. And then comes the realisation that Google searches – much like everything else online – are governed by algorithms; the invisible systems built around our personal information.
This sounds a little bit like Big Brother in reality doesn’t it? But isn’t it better to have an online experience tailored to you? Algorithms are all around us and they aren’t new. They’re just getting smarter.
Who does Google think you are?
At this point you might be wondering what Google knows about you already. Dave Thier, a contributor writing for Forbes, found an easy way to figure this out. The link in Dave’s article takes you to Google’s Ad Settings, which were pretty accurate for him.
However, it’s plain to see that Google doesn’t always get it right. I took the test and based on the websites I’ve visited, Google thinks I’m aged between 35-44 (I’m 24!) and male. And whilst some of the interests listed did make sense (like computers and electronics, smartphones, social networks, SEO and marketing, technology news and travel), some were a bit weird. For example, soccer, cooking and astronomy all appeared on the list. Just for the record, I can’t stand football, I am an abysmal cook and I definitely won’t be identifying constellations any time soon.
What about giving Google our consent?
Hancock suggests that because algorithms are such a big part of the way the internet works ‘we may have passed the point where it’s possible for people to reasonably expect they’d have to give consent before a corporation messes with the algorithmic filters that affect the information they see online.’
The prevalence of algorithms and the frequency at which they are changed makes opt-out impractical at best. As Hancock puts it, what would obtaining consent even look like? For example, the algorithm behind Google search is tested and changed all the time, but we don’t see opt-out notifications every time it’s updated.
What about giants like Facebook, Amazon and Apple?
A marketing stunt for Watch Dogs, an Ubisoft game released earlier this year, requests permission to access a Facebook user’s account. From there, Digital Shadow pulls personal information to build a comprehensive profile of you as if you were an assassin’s target, just like in the game. Whilst viewed by some as just a bit of fun, it’s actually quite scary.
And yes, in the interest of writing this blog I decided to be a willing guinea pig for the campaign, which is still running. You can get your profile here if you want to. I thought I was quite savvy with my privacy settings on Facebook. Oh how wrong I was! Just using the information publicly available on my Facebook account, Digital Shadow could see:
- Some of my photos
- Which of my friends I interact with most and those who I rarely speak to
- Words commonly used by me and my friends
- When I’m most active on Facebook (between 7am and 8am on a Monday)
- Where I’ve been … complete with a photo of McDonald’s in Radcliffe. Guilty.
- An estimated annual salary based on my location, age, work and education
- Password possibilities generated by my interests and close friends
- And an estimate on the value of my accessible, private data generated online. It’s $49,269, although I’m not sure whether that’s good or bad.
It’s very much the same story with other big tech giants like Amazon and Apple. Just think about the recommendations you see when you log in to your Amazon account or your iTunes. They’re based on products you might have bought before, something you looked at but didn’t buy and items related your purchase history. Of course, this whole personalised experience is governed by algorithms, which work in the background collecting information as we shop. The same is true when you browse your newsfeed on Facebook, or search for something on Google.
Algorithms are set to change our lives in more ways than just streamlined shopping too. The tech giants have recognised niches in other markets and pushed the boundaries of innovation. A great example of this might be Facebook’s acquisition of Oculus VR, the virtual reality headset that’s been making some serious waves in the gaming industry. Meanwhile, Google made the headlines with their investment in driverless car technology and drones, and Apple’s plans seem to moving towards the healthcare sector with wearables.
Trust and technology
Hancock acknowledged there is often a state of mistrust surrounding new technologies, saying ‘it goes back to Socrates and his distrust of the alphabet, [the idea that] writing would lead to us to become mindless … It’s the same fear, I think. Because I can’t see you, you’re going to manipulate me, you’re going to deceive me.’
Maybe it’s just a case of needing clearer communication and open discussion. If more people knew what algorithms are and how they work, it might dispel that fear of what we don’t understand.
To some people all of this really does sound like Big Brother is watching you. We believe that raising awareness of the digital footprint you create is an important lesson for us all to learn. It’s about making informed choices about what you do and don’t share online.
More from the Digital Privacy series:
How much personal information would you share? | <urn:uuid:406cd8e2-c0c5-4388-a340-414f844d0940> | CC-MAIN-2022-40 | https://purple.ai/blogs/what-do-google-and-the-tech-giants-know-about-us/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00385.warc.gz | en | 0.948301 | 1,405 | 2.609375 | 3 |
Quantum Sensing the Brain Without Need for Cryogenics
(PhysicsWorld) A collaborative research team at the University of Nottingham’s Sir Peter Mansfield Imaging Centre and the Wellcome Centre for Human Neuroimaging at University College London, funded by the UK National Quantum Technologies programme and Wellcome is developing quantum enabled magnetic field sensors that offer sensitivity without the need for cyrogenics.
For many years the only viable option for imaging brain function on the superconducting quantum interference device (SQUID) – a cryogenic sensor that relies on quantum tunnelling through an insulating gap between two superconductors (the Josephson effect). The tunnelling current is a function of magnetic flux through the SQUID.
To maintain their superconductivity, SQUIDs must be cooled to –269 °C, which limits the design and deployment of MEG scanners. First, because they operate at such low temperatures, a thermally insulating gap must be maintained between the sensor and the patient’s head to prevent injury. Because magnetic field decays with distance squared, this gap limits sensitivity to the brain’s magnetic field. Second, cryogenics mean that sensors must be fixed in position above the head inside a cryogenic dewar, which means that if a patient moves their head relative to the scanner, the quality of the data goes down drastically. Just a 5 mm shift can render data useless, and many people cannot tolerate this environment. The fixed nature of the sensors also results in a one-size-fits-all helmet. This is a significant barrier to scanning young children and babies, since the helmet is much too large. Finally, the complex combination of SQUID sensors, control electronics and cryogenics makes MEG expensive.
Recent commercialization by the US company QuSpin has made optically pumped magnotometers (MEG( robust, easy to use and readily available, while miniaturization has made the most recent generation small and lightweight (similar to a Lego brick in both size and weight). Based on this new design, our team has integrated OPMs into a working prototype MEG device. Because they are so small and don’t need cryogenics, OPMs can be mounted directly on the surface of a human head, increasing sensitivity by removing the thermally insulating gap and getting the sensor closer to the brain.
Based on this new design, the team has integrated optically pumped magnetometers into a working prototype MEG device. This also allows the sensor array to move with the head, making the MEG measurement resilient to subject motion. Flexibility of OPM placement means an array can adapt to any head size, enabling babies and children as well as adults to be scanned with the same system. The lack of complex cryogenics also means that OPM-based MEG systems are ostensibly cheaper to produce and run. This technology therefore allows MEG to evolve, making systems more practical, more powerful, significantly cheaper and consequently much more suitable for clinical use. | <urn:uuid:ec00f060-9777-4256-9928-ace18442490c> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-sensing-the-brain-without-need-for-cryogenics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00385.warc.gz | en | 0.919078 | 616 | 3.21875 | 3 |
Artificial intelligence (AI) in agriculture has transformed the way the world’s farming operations work by giving food producers significantly improved access to data about their operations.
AI provides farmers with real-time insights about crop conditions, livestock activity, and the locations of their farm machinery. Looking ahead, many scientists believe AI in agriculture will play a pivotal role in increasing food production globally, particularly in areas where food insecurity is the norm.
By 2050, the world’s population will grow by 2 billion people, according to United Nations (UN) data on population and hunger. The world will require a 60% increase in food production to keep the global population fed.
Advances in AI and machine learning (ML) in agriculture are powering innovations that have the potential to improve food production supply chains in more affordable, sustainable ways.
AI and agriculture today
Spending on AI technology will grow from $1 billion in 2020 to $4 billion in 2026, a compound annual growth rate (CAGR) of 25.5%, according to Markets & Markets.
AI applications in agriculture tend to focus on one or more of four primary goals, according to the ITRex Group.
- Yield improvement
- Cost reduction
- Profit increase
- Alignment with sustainable farming practices
Here are several examples of exactly how AI is being used throughout the food production supply chain:
See more: Artificial Intelligence Market
5 examples of AI in agriculture
In his Forbes article, “10 Ways AI has the Potential to Improve Agriculture in 2021,” Louis Columbus addresses a range of successful AI applications in the industry:
1. Drone data is helping producers optimize the use of pesticides
Intelligent sensors, combined with visual data streams from drones, use AI to detect areas most infected with pests. This data helps farmers optimize the right mix of pesticides and allows them to zero in on only the field areas that need treatment. The result, Columbus says, is a reduction in overall costs and an increase in yields, two key drivers fueling AI in agriculture adoption.
2. Linear AI programming is enabling farmers to conserve more water
AI can help farmers locate irrigation leaks, optimize irrigation systems, and measure the effectiveness of crop irrigation approaches. Conserving water is becoming increasingly vital as the world’s population grows and drought conditions become more widespread and impactful. Using water efficiently can significantly impact a farm’s profit and contribute to the global effort to conserve water. Columbus says linear AI programming is being used to calculate the optimal amount of water a specific field or crop needs to reach the desired yield level.
3. IoT sensors are providing real-time insights into previously untraceable data sets
Farmers today have access to IoT sensors that can keep track of virtually every aspect of food production — a huge technological leap over agriculture methods from even a few years ago. It’s now possible for farmers to track data about soil moisture and nutrient levels to analyze crop growth patterns over time. Columbus points to a specific branch of AI — machine learning — as the key to using IoT sensor data to arrive at data-driven predictions about potential crop yields.
4. AI-powered yield mapping is improving crop-planning accuracy
Yield mapping is an agricultural technology that uses supervised machine learning algorithms to uncover patterns hidden within large-scale data sets that can be used for crop planning. Columbus notes this technique involves the collection of drone flight data, combined with IoT sensor data, to make predictions about potential crop yields before the vegetation cycle has begun.
5. AI-enhanced livestock monitoring is improving animal health and increasing profits
Being able to monitor livestock at a high level gives producers an edge over competitors who have yet to invest in AI-enhanced agriculture technology. Farmers, Columbus says, can monitor food intake, activity levels, and vital signs to develop a better understanding of the optimal conditions for better milk or meat production. Real-time health insights also allow farmers to quickly separate livestock infected with contagions from healthy animals as well as promptly address injuries and unexpected livestock behaviors. | <urn:uuid:5e6ee4ff-9247-4e80-9779-1b7ed9dfe698> | CC-MAIN-2022-40 | https://www.datamation.com/artificial-intelligence/ai-in-agriculture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00385.warc.gz | en | 0.92598 | 824 | 3.390625 | 3 |
Don’t be a Ransomware Victim!
Imagine coming into the office and booting up your computer, but instead of your usual desktop screen you’re greeted by a red page with the message: “Your personal files have been encrypted!” The page explains that you need to pay the specifed ransom in bitcoin to get your files decrypted. If you don’t respond within 3 days, the price doubles; if you don’t respond within 7 days, your data is lost forever. Worse yet, it’s not only your computer that’s been affected: it’s every computer on the company’s network. Ransom demands in the tens of thousands of dollars are common; demands in the millions are not unheard of.
Are you prepared for such a scenario? Do you know how to prevent it?
What is Ransomware
“Ransomware” is a modern version of the old “protection” racket. In the pre-technology version, you paid money to thugs and in exchange they didn’t throw rocks through your windows. In the modern version, you pay money to cyber-thugs so that they let you make use of your own data.
Ransomware is a very big problem: one cybersecurity research firm estimated the costs of ransomware in 2018 as exceeding $8 billion. Europol, the EU’s law enforcement agency, said in a report on organized crime that, “Ransomware remains the key malware threat in both law enforcement and industry reporting.”
And to add insult to injury, paying the ransom doesn’t guarantee you’ll get your data released: according to one survey, in 30% of cases where victims paid the ransom, the bad guy still didn’t release their data.
Enterprise Anti-Ransomware Solutions
How do you protect your organization from ransomware?
The traditional tools for protecting against ransomware are based on a “find and block or destroy” technique: firewalls, antivirus/anti-malware software, and secure web gateways are based on identifying malware and either blocking it from getting through to the endpoint device, or destroying or disabling it.
These tools rely on a combination of regularly updated databases of known threats and heuristic analysis, which applies various algorithms to detect threats that aren’t in the database.
There are two problems with databases: 1) if your database isn’t absolutely up-to-date, new malware won’t be caught; 2) they afford no protection against “zero-day” threats, brand new attacks that haven’t been seen before. Thousands of computers and networks can be infected on the first day a new type of malware is released, before it can be identified and antivirus databases can be updated.
That’s why antivirus software often includes heuristic analysis as well, to try and identify malware that’s not in the database. Unfortunately, cyberthieves are often able to mask their activities and slip past these defenses.
Bottom line: conventional approaches to protecting an organization against ransomware and other types of malware aren’t good enough. Some attacks can still get through.
What About Backups?
You follow good IT practices and make frequent backups of your data. Can’t you just ignore the ransomware and simply restore everything from a backup?
Unfortunately restoring from a backup doesn’t always go smoothly. One survey found that even though most companies do regular backups, only 42% were able to successfully restore 100% of their lost data from backups after a ransomware attack. And what if your backup is infected too? Unless you have offline backups, your backups are likely also encrypted. And offline backups generally aren’t “real time” so you’ll inevitably lose some data.
What’s the Solution?
So, what’s the best enterprise ransomware protection?
The only way to have full ransomware endpoint protection is to use a method that doesn’t rely on identifying malware as the first step, such as Remote Browser Isolation (RBI). An RBI solution isolates ALL web browsing in a separate “safe” server, away from the organization’s network. When a user opens a browser or clicks a link in an email, the browser is opened in a one-time-use remote container. The user sees a dynamic image of the website – the actual code on the website never reaches the endpoint device. If a site is infected, the malware or ransomware can’t spread outside of the one-time-use container, which is destroyed when the browsing session is over.
Ransomware can also be hidden in files that a user downloads from the web. Some RBI solutions such as Ericom Shield come with built-in file cleansing technology to protect against those threats as well. When a user downloads a file, it’s scanned and sanitized remotely before being downloaded to the user’s computer or other device.
In a world where malevolent hackers are growing increasingly sophisticated, the conventional approaches to protection against ransomware and other forms of malware are no longer enough. Standard best practices such as firewalls, antivirus software, and regular backups don’t guarantee protection. A remote browser isolation solution, that doesn’t rely on detection, offers a much greater degree of security from malicious ransomware.
Computer security training isn’t just a matter of giving employees information. Knowing best practices is important, but it helps only if employees understand that they make a difference.
Talking about “viruses” which “infect” computers gives the wrong message. It suggests that attacks are just something that happens to computers, like catching a cold. The truth is that user errors make the majority of malware attacks possible, and that employees who think about security can avoid most of them.
Let’s start by going over best practices that encourage the proper mindset and promote secure action.
Email is where users make the most security mistakes, so it is the first place to focus training. The key point for employees to recognize: a malicious attachment is not a “virus” that strikes on its own; it can’t do anything unless they open it. If they report suspicious mail to an administrator instead, their computers will be much safer.
Clicking on dubious links is another way employees invite attacks. In an ideal, bug-free world, users could access any website without risk. However, browsers do have bugs, so employees need to recognize that following a link is an action with consequences and be cautious about which links they follow.
Weak passwords are a third big area for user error. Certain passwords are at the top of attackers’ lists for guessing, because they’re the most widely used ones. These include ones like “password” and “123456.” Criminals who guess them can get into accounts and grab confidential information or manipulate company data. Employees need to know that these common passwords are the first ones an attacker will try.
Employees who use easily-guessed passwords are effectively leaving the door unlocked. Anyone with malicious intentions will have an easy job of getting into their accounts and doing damage.
Smartphones and tablets are the newest targets for attack. They’re subject to the same kinds of attacks as desktop devices, but people don’t think about them as carefully. In addition to the other risks, they’re easy to lose, so employees need to treat a phone as seriously as a workstation.
Encrypting a device and requiring a strong password to unlock it is the best protection. Even so, employees should minimize the amount of sensitive information they store on mobile devices.
For each risk, the language needs to be about attacks and intrusions, not “infections.” Employees are responsible for keeping their devices and accounts safe, and what they do makes a huge difference.
Fuelled Networks is the trusted choice when it comes to staying ahead of the latest information technology and security tips, tricks, and news. Contact us at (613) 828-1280 or send us an email at firstname.lastname@example.org for more information.
Published On: 28th June 2016 by Ernie Sherman.
It is no secret that, historically, the science, technology, engineering, and mathematics (STEM) fields have struggled with the integration and representation of minority groups. Many argue that such challenges to inclusivity are stifling the field's potential and, as a result, preventing many people from engaging in what can be rewarding professional or personal pursuits. But we are moving in the right direction. In 2020, for example, the number of women in STEM board positions increased by 18% globally. However, for this to continue, young girls need STEM role models to inspire them.
An important part of supporting this movement is changing the conversation and raising awareness of inherent biases, stereotypes, or discrimination, which can often discourage people from pursuing their passions. As a result, the theme of International Women’s Day 2022 is #BreakTheBias. This movement, which focuses specifically on gender inequality, advocates for a gender-equal world that values and celebrates difference.
To participate in this year's conversation, we wanted to highlight one of Cognitive's amazing women: Catherine Cha, a Graphic Design Co-op Student on the UX & UI team. Catherine's work at Cognitive involved refreshing our graphics by curating a more diverse stock photo selection as well as bringing in a more approachable and relatable feel with new character and UX/UI illustrations. Catherine has always preferred digital to other types of art. It offers techniques, colours, movement, lighting, and other features that are not possible on physical paper. She hopes to use her creativity to help highlight women and other minority groups in STEM and, ultimately, break the bias.
What is your experience working in STEM?
Cognitive was my first significant experience in STEM. While my high school's arts and culture specialist program was both unique and rewarding, my exposure to other fields was quite limited. Overall, working in STEM has taught me that graphic design serves as a bridge between the consumer and the engineering/product team; it is the visible element that everyone is familiar with. It is critical, especially in technology, to help explain things in simple terms and engage consumers of all backgrounds.
Why is proper representation (especially in STEM) so important to you?
Proper representation in STEM is important to me because the most well-known faces in the industry are not very diverse and are often perceived as the sole representatives of the field. It can be extremely isolating then for those who do not see themselves in the field. I want the representatives of STEM to reflect the people we are trying to help and design for. We need to strive towards being more accepting and inclusive of others to break free of the biases that discriminatory structures in our society impose on us. Everyone is interested in STEM, and all must be represented by role models to whom they can relate. It is extremely discouraging for young girls to hear about women who are struggling for visibility and equality in the field. I want to contribute to the creation of a welcoming environment so that others will feel comfortable participating in STEM. I want to challenge more traditional ways of thinking so that we can respect everyone, not just those we know.
How have you been trying to bring your views on breaking bias into your role at Cognitive?
My role at Cognitive has been to reimagine our imagery and to participate in critical discussions about Cognitive's current and future brand direction. It's great to be part of a team where I can raise important concerns about bias, representation, and inclusion. I believe that having an open forum where new ideas can be freely discussed without fear is what is most important to me on any team. I want to help reinforce the idea that inclusion should be at the top of a company's priority list, not for brownie points, but because diversity reflects reality. A good way to ensure this is to engage in constructive internal dialogue and be honest about your design intentions.
What have you learned through your role?
From a practical standpoint, I've discovered that there are vast differences between industries, even within STEM. When creating visuals for any field, you must be aware of current standards and conversations and learn how to contribute to them in novel and exciting ways. I've learned, in particular, how to present technical knowledge in a more easily understood medium. This concept is especially important for Cognitive's graphics team because WiFi Sensing is not well understood by those outside of the telecommunications or wireless industries. Art is unique in its ability to visualize technology that would otherwise be difficult to understand if only read or heard about. Finally, I've been fortunate to work with several strong female role models. I've had the opportunity to learn from and listen to their experiences, which I hope will shape my own academic and professional achievements. They have taught me that if they can succeed in STEM, so can I. And perhaps I can serve as a role model for others.
What would be your advice for others looking to break the bias?
Don't be afraid to have those hard conversations. Speaking up for what you believe in is a big part of breaking the bias. Just because no one is saying it, does not mean that no one is thinking about it. The more people who participate in these conversations, the closer we will be to normalizing them. You may be shut down, but that does not mean you will always be shut down. Try again! You never know if the next time will be the one. And if it is, own it.
What does International Women's Day mean to you?
It’s been a long journey for society to be much more inclusive and to challenge traditional ways of thinking about gender. I do think things like International Women’s Day help facilitate conversations about representation and diversity. After all, we are more likely to find solutions if we keep these issues at the forefront of our minds. As with initiatives like this, it's important to remember that celebrating women should be about all women, not just an idealized version. That means we should be talking about and uplifting women of colour, women with disabilities, women of all ages, women from the LGBTQ+ community, and extending that towards feminine non-binary people.
Why did you choose Cognitive?
During my co-op, I realized how important company culture was to me. When I met with Cognitive people during the interview process, I was struck by their encouraging and friendly demeanor, particularly when it came to bringing in more representation and diversity. Unfortunately, in previous roles I was shut down when I attempted to express my views on inclusion. But, with Cognitive, I felt confident in putting forward my ideas in team meetings because I knew they would be considered. In addition, as a newer and smaller company, I get to work with ambitious, talented minds on collaborative and diverse teams. What really mattered to me was an environment in which I could push the envelope a little, develop my skills, and begin to see the diversity I desired in STEM.
6 Network Security Protocols You Should Know
What Are Network Security Protocols?
Network security protocols are network protocols that ensure the integrity and security of data transmitted across network connections. The specific network security protocol used depends on the type of protected data and network connection. Each protocol defines the techniques and procedures required to protect the network data from unauthorized or malicious attempts to read or exfiltrate information.
Related content: Learn more about the network security threats that security protocols can protect against.
The OSI Network Model
Open Systems Interconnection (OSI) is a reference model for how applications communicate over networks. It shows how each layer of communication is built on top of the other, from the physical wiring to the applications that attempt to communicate with other devices over the network.
The OSI model guides technology vendors on the design of interoperable software and hardware, providing a clear framework that describes the capabilities of a network or communications system. For security teams, it helps clarify which layers of the network they need to defend, where specific security threats could strike, and how to prevent and mitigate them.
The OSI Model contains the following layers:
- Layer 1—Physical Layer—the physical cable or wireless connection between network nodes.
- Layer 2—Data Link Layer—creates and terminates connections, breaks up packets into frames and transmits them from source to destination.
- Layer 3—Network Layer—breaks up segments into network packets, reassembles them upon receipt, and routes packets using an optimal path on the physical network.
- Layer 4—Transport Layer—breaks data into segments on the sending end and reassembles them on the receiving end, turning them into data that can be used by the session layer.
- Layer 5—Session Layer—creates communication channels, called sessions, between devices, keeping sessions open during data transfer and closing them when the transfer ends.
- Layer 6—Presentation Layer—prepares data for the application layer, defining how two devices should encode, encrypt, and compress data to ensure it is received correctly.
- Layer 7—Application Layer—used by end-user software like web browsers and email clients. Sends and receives information that is meaningful for end-users using protocols like HTTP, FTP, and DNS.
6 Types of Network Security Protocols
Following are some of the most common network security protocols. They are arranged by the network layer at which they operate, from bottom to top.
Internet Protocol Security (IPsec) Protocol—OSI Layer 3
IPsec is a protocol and algorithm suite that secures data transferred over public networks like the Internet. The Internet Engineering Task Force (IETF) released the IPsec protocols in the 1990s. They encrypt and authenticate network packets to provide IP layer security.
IPsec originally contained the ESP and AH protocols. Encapsulating Security Payload (ESP) encrypts data and provides authentication, while Authentication Header (AH) offers anti-replay capabilities and protects data integrity. The suite has since expanded to include the Internet Key Exchange (IKE) protocol, which provides shared keys establishing security associations (SAs). These enable encryption and decryption via a firewall or router.
IPsec can protect sensitive data and VPNs, providing tunneling to encrypt data transfers. It can encrypt data at the application layer and enables authentication without encryption.
SSL and TLS—OSI Layer 5
The Secure Sockets Layer (SSL) protocol encrypts data, authenticates data origins, and ensures message integrity. It uses X.509 certificates for client and server authentication. SSL authenticates the server with a handshake, negotiating security session parameters and generating session keys. It can then securely transmit the data by authenticating its origin.
SSL sessions use the cryptographic algorithms negotiated by the client and server during the handshake. Servers may support encryption with algorithms like AES and Triple DES.
X.509 server certificates are a requirement for SSL, enabling the client to validate the server. SSL can also use X.509 client certificates for authentication. These certificates must be signed by a trusted certificate authority in the server’s keyring.
Transport Layer Security (TLS) is an SSL-based protocol defined by the IETF (SSL is not).
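As a concrete illustration, the short Python sketch below opens a TLS-protected connection using the standard library’s ssl module. The host name is just a placeholder, and the sketch assumes outbound network access.

```python
import socket
import ssl

HOST = "example.com"  # placeholder; any TLS-enabled server works

# Build a client context with sane defaults: certificate verification
# against the system's trusted CA store and host-name checking enabled.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake: the server's X.509
    # certificate is validated and session keys are negotiated.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
```

Everything sent through tls_sock after the handshake is encrypted and integrity-protected, which is exactly the guarantee described above.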
Datagram Transport Layer Security (DTLS)—OSI Layer 5
DTLS is a datagram communication security protocol based on TLS. It does not guarantee message delivery or that messages arrive in order. DTLS introduces the advantages of datagram protocols, including lower latency and reduced overhead.
Kerberos Protocol—OSI Layer 7
Kerberos is a service request authentication protocol for untrusted networks like the public Internet. It authenticates requests between trusted hosts, offering built-in Windows, Mac, and Linux operating system support.
Windows uses Kerberos as its default authentication protocol and a key component of services like Active Directory (AD). Broadband service providers use it to authenticate set-top boxes and cable modems accessing their networks.
Systems, services, and users only need to trust the key distribution center (KDC) when using Kerberos. The KDC offers authentication and grants tickets that enable nodes to authenticate each other. Kerberos uses shared-secret cryptography to authenticate packets and protect them during transmission.
Simple Network Management Protocol (SNMP)—OSI Layer 7
SNMP is a network device management and monitoring protocol that works at the application layer. It can secure devices on LANs or WANs. SNMP provides a shared language to allow devices like servers and routers to communicate via a network management system. SNMP is an original part of the Internet protocol suite defined by the IETF.
Components of the SNMP architecture include a manager, an agent, and a management information base (MIB). The manager is the client, the agent is the server, and the MIB is the database. The SNMP agent responds to the manager’s requests using the MIB. While SNMP is widely available, administrators must adjust the default settings to enable communication between the agents and the network management system to implement the protocol.
With the introduction of SNMPv3 in 2004, the SNMP protocol gained three important security features: encryption of packets to prevent eavesdropping, integrity checks to ensure packets have not been tampered with in transit, and authentication to verify that communications come from a known source.
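For illustration, here is a sketch of an SNMPv3 query using the third-party pysnmp library, with both authentication and encryption enabled. The device address, user name, and passphrases are placeholders, and the exact import surface can vary between pysnmp versions.

```python
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

# SNMPv3 credentials: SHA for authentication, AES-128 for privacy.
user = UsmUserData(
    "monitor-user", "auth-passphrase", "priv-passphrase",  # placeholders
    authProtocol=usmHMACSHAAuthProtocol,
    privProtocol=usmAesCfb128Protocol,
)

# Query sysDescr.0 (the device description) from a managed device.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    user,
    UdpTransportTarget(("192.0.2.10", 161)),  # placeholder device address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```

The same query over SNMPv1/v2c would use only a plaintext community string, which is why SNMPv3’s user-based security model is preferred.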
HTTP and HTTPS—OSI Layer 7
HTTP is an application protocol that specifies rules for web file transfers. Users indirectly use HTTP when they open their web browser. It runs on top of the Internet protocol suite.
HTTPS is the secure version of HTTP, securing the communication between browsers and websites. It helps prevent DNS spoofing and man-in-the-middle attacks, which is important for websites that transmit or receive sensitive information. All websites requiring user logins or handling financial transactions are attractive data theft targets and should be using HTTPS.
HTTPS runs over the SSL or TLS protocol using public keys to enable shared data encryption. HTTP uses port 80 by default, while HTTPS uses port 443 for secure transfers. With HTTPS, the server and browser must establish the communication parameters before initiating data transfers.
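A minimal sketch of that difference in practice, using Python’s standard library; example.org is a placeholder host and the sketch assumes outbound network access.

```python
import http.client

# HTTP: plain text on port 80; anyone on the network path can read it.
plain = http.client.HTTPConnection("example.org", 80)
plain.request("GET", "/")
print("HTTP status:", plain.getresponse().status)

# HTTPS: the same request, tunneled through TLS on port 443.
# Certificate verification happens before any data is exchanged.
secure = http.client.HTTPSConnection("example.org", 443)
secure.request("GET", "/")
print("HTTPS status:", secure.getresponse().status)
```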
Social Engineering Defined
Social engineering refers to psychological manipulation of people into performing actions or divulging confidential information. A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional “con” in that it is often one of many steps in a more complex fraud scheme. Social engineering is often associated with phishing, social website scraping and watering hole attacks.
Step 1: Reconnaissance: Harvest information for targeted attacks
- In the case of harvested information, social engineering is frequently the first step of sophisticated multi-step attacks. Complex social engineering attacks like advanced persistent threat attacks (APTs), CEO fraud, crypto currency attacks, and any targeted cyber-attack will use social engineering as a first step.
- Frequent reconnaissance methods include:
  - Harvesting information from social websites: Facebook, Twitter, LinkedIn, public posts online by your company, partner companies, vendors, news organizations, member associations, etc.
  - Communicating with employees by email, text message, or phone to gather targeted information
  - Stealing personal information from your PC, smartphone, or company servers
- Once this information is in hand, a targeted attack on individuals can be initiated that causes the victim to perform the desired behavior.
Step 2: Cause individuals to initiate a particular behavior leveraging what you know from harvested information.
This behavior will result in an infection that becomes the next step in the cyber-attack process. Frequent attack methods leveraging harvested information include spear phishing, CEO fraud, and water hole attacks. Actions elicited from individuals that drive infections include:
- Clicking a link in an email that leads to an infected website (a simple check for such deceptive links is sketched after this list)
- Opening an email attachment that leverages software flaws to infect your computer
- Visiting a particular infected website, frequently followed by a request to enter login credentials or private information
- Unauthorized, but familiar, requests to send money or confidential information to a 3rd party or bank (CEO fraud), leveraging what you know about ordinary internal transactions
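One of the simplest technical red flags behind the link-clicking attack above is a mismatch between the domain a link displays and the domain it actually targets. Below is a minimal, illustrative Python sketch of that check; the email body and domain names are invented, and real mail filters do far more than this.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag links whose visible text names a different domain
    than the URL they actually point to."""
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        # Only inspect text that sits inside a link and looks like a domain.
        if self.current_href and "." in data:
            shown = data.strip().lower()
            real = urlparse(self.current_href).hostname or ""
            if shown and shown not in real:
                self.findings.append((shown, real))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

# Invented phishing-style email body.
body = '<p>Please verify at <a href="http://evil.example.net/login">mybank.com</a></p>'

auditor = LinkAuditor()
auditor.feed(body)
for shown, real in auditor.findings:
    print(f"WARNING: link text says '{shown}' but points to '{real}'")
```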
Every company has both information and knowledge. What’s the difference? Knowledge is something that’s actionable and that generates value for the receiver; it’s dynamic and causes action. Information, on the other hand, is static and is not actionable. It’s simply compiled data.
For example, if you received a list of all the sandwich shops in your state, you’re now informed, but you don’t have any real knowledge of how to make a sandwich. However, if someone showed you how to create a great sandwich that tastes better than any other sandwich you’ve ever had, you now have something actionable. The person shared with you some knowledge based on his or her experience. You can go home and recreate that sandwich again. That’s knowledge.
The good news is that knowledge increases in value when it’s shared. Think about it … have you ever learned from a co-worker? A customer? A stranger in the coffee shop? Of course you have. But you didn’t learn by giving the other person basic information. More than likely, you were in a dialog. You shared your knowledge and they shared theirs. In the process, you both learned something new and valuable.
Unfortunately, human nature is to hoard and protect knowledge because we think we only have a certain amount of knowledge in our head. If this were true, then we’d certainly want to guard it closely. In reality, each person is a fountain of knowledge and new ideas based on personal experiences, meaning the knowledge well is deep and won’t run dry. Additionally, when you share your knowledge, you don’t lose it.
Think of it like this: Suppose you’re in a large, dark room with hundreds of other people. Everyone in the room is holding an unlit candle, except for you — you have the only lit candle. Your lone candle provides the only glimmer of light. What happens when you walk over to someone holding one of the unlit candles and light it with your flame? The room is brighter, yet you don’t lose your initial flame. What happens when you use your flame to light another candle, and then another, and then another, and so on? Do you still have your original flame? Yes. Only now the room is more brightly lit because you shared your flame.
The same thing happens when you share knowledge. You still have your ideas, but by sharing them with someone else, you have the potential to improve those ideas well beyond what you previously thought possible. You make the ideas glow a little brighter. That’s why knowledge sharing is so powerful.
Knowing that knowledge sharing is valuable, it makes sense to capture and leverage knowledge in your organization. But how do you do that? Putting knowledge in a file cabinet or on a hard drive isn’t enough. You need to have your company’s knowledge on your network so it can be accessed anytime, anywhere, and via any device. People need to be able to not only view the knowledge, but also add to it and utilize it. That’s how knowledge evolves, stays relevant, and gets more valuable.
Additionally, realize that organizations today are facing a major knowledge drain crisis. Approximately 78 million Baby Boomers in the United States are headed for retirement. When they leave, they’re going to take all the knowledge and wisdom that’s in their head with them. Unfortunately, no one can change the fact that so many people will be retiring. But you can decide whether the retirees in your company are going to take their knowledge with them or if you’re going to capture it.
Knowing all this, it makes sense to develop knowledge sharing networks within your organization. The question is, how do you create one?
It all starts with the CIO
Harnessing the knowledge within your organization can’t happen without the CIO’s involvement. Why? Because one of the key tasks for the CIO and the IT department is to create new added value and competitive advantages for the organization. That’s precisely what a knowledge-sharing network delivers. The only way knowledge sharing can work organizationally is by putting it on a network.
In order to create a knowledge base containing the collective knowledge within the organization, you need networks that allow for instant access to the knowledge that’s gathered. A working, dynamic, real-time knowledge base is more powerful than any database or information base.
For centuries, knowledge sharing has been done on a small scale without technology, such as when people hang out around the water cooler, have casual lunches together, or go to a meeting. It’s during these times that people share lessons learned and best practices.
Unfortunately, such meetings are sporadic or only last a short time. Additionally, only a limited number of people receive the knowledge. And since it’s not captured, there’s no way for people to continually access the knowledge that was shared. But the CIO can put the company’s knowledge into a dynamic state by using simple networking technology. Capturing and leveraging intellectual assets is an area where the CIO can be highly strategic and can add value to the organization in ways no other executive can.
Knowing how vital your role as the CIO is to creating a knowledge base, it’s time to start getting people involved with the process. Again, your role is to drive the initiative and spearhead the process and technology. Others in your organization can set up a means to pull knowledge out of people.
Knowledge Pull – There was a movement in the 1990s to capture knowledge that resided inside organizations, but few had luck with it. Most of the knowledge they were sharing was really just information, and many companies made the sharing process too time consuming and complex. To make knowledge-sharing work, you need to engage in a process I call “knowledge pull”, where you pull the knowledge out of people.
This is the first part of our Virtual Machine blog series. Be sure to check back for part two of the series later this month.
When the term “virtual server” flashes across your computer screen, what’s the first thing that comes to mind? Was it a flickering Tron-style hologram of the waiter pouring you a tall drink? As cool as that would be, the types of servers we’ll be discussing in this series belong to a class of technology called virtual machines, and we’ve brought on a couple of our senior tech team members, Steve Chang and Luke Reynolds, to help us understand how they work. Today, Steve will be giving us the rundown on server basics, and the circumstances that might call for virtual servers.
Server virtualization is an interesting and advanced business tool, allowing an organization to create an additional server without buying or installing a separate physical device, but before we get into the “why?” of defining virtual servers, we’re going to quickly review the purpose of any server in the business setting. Steve says it’s pretty simple.
“Servers offer a centralized location for data storage, security for that data storage, various lines of defense for memory failure. They might be used to host all of the email conducted by a business, or host heavy-resource programs like CAD, for businesses that use remote desktop through lightweight devices.”
Steve goes on to note that not every program necessarily needs to have its own server though—a business could have one server dedicated entirely to finance needs, but certain programs do not like working with others on the same system.
“For example, one of our clients has timekeeping software that simply will not run on the same server as their domain controller (a server dedicated to a certain type of security), so they need a second server just for timekeeping.”
Once an organization takes on the complexity to need multiple servers, a certain choice presents itself: conventionally, this need would be addressed through the installation of additional physical server units, but an advanced alternative can be found in a process known as server virtualization.
So what is server virtualization?
“In a basic sense, server virtualization is the utilization of one heavily-resourced piece of hardware to host multiple images of servers—virtual servers—that will handle their respective roles and processes as if they were in separate server units.”
In simpler terms, to virtualize a server means that you’re taking the server software, which is ordinarily built on all the physical components of the server unit, and essentially “separating” it from the physical unit, so that it exists only as a virtual chunk of information (an image of the server). This image can then either exist in the same unit alongside other software, or it can be essentially copied and pasted into a new physical space, saving you the trouble of backing up the server’s contents, and manually reinstalling it with a new operating system.
“For example, Newmind Group was helping a school that had a difficult server situation- they had two physical servers, but one of them was on lease, and when it came time to return that unit, we were able to virtualize its contents, and transfer it to the server that was staying in the school. Of course, that’s a short-term solution, because not all hardware can handle multiple servers at the same time, but it’s an effective tactic to keep them up and running, while they work on getting the hardware they need for their servers to operate smoothly.”
Now that we’ve covered the basic definitions, in our next post Luke Reynolds will cover specific scenarios and potential cases for server virtualization. Would you like us to take on any unique questions regarding server strategy or virtualization?
Could the Fourth Industrial Revolution change the wine industry? It already has and will continue to just like it has changed just about every other sector from healthcare to manufacturing to retail. Artificial intelligence touches everything in winemaking from the soil analysis at the vineyards to how consumers select the right vintage to go with dinner. Let’s explore a few of the ways artificial intelligence will alter winemaking.
At the vineyard
Artificial intelligence is already in many vineyards in the form of AI-powered machines and sensors that help assess water needs and soil conditions for the grapes. Automated drones can fly above the vines with thermal infrared cameras to identify precisely what vines need water or suffer from diseases or damage from pests. Additionally, just as tourism companies use drones to make marketing videos, vineyards can use drones to give its customers a bird’s-eye view of the grape-growing process. Ultimately, robots might take over tasks at the vineyards to free up the winemaker to focus on other initiatives.
In Australia, GAIA (Geospatial Artificial Intelligence for Agriculture) uses AI software and a satellite image library to plot every vineyard in the country. The organisation feeds the data it collects to its deep neural network to monitor crop conditions, fruit quality, classify vineyards and more.
Just as it has done in other industries, artificial intelligence can help make wine production more streamlined and efficient. By analysing data from sensors and other data-retrieving tools, AI machines can monitor conditions as well as inventory and suggest action based on the data. As the world’s climate continues to evolve, artificial intelligence can play a critical role in how existing winemakers in various locations adjust to changing climate conditions and help inform new wine-growing regions as they become more hospitable to growing grapes.
Artificial intelligence has already been used as a virtual sommelier to help make wine pairing suggestions (more than 25% of wine drinkers use wine apps to help with purchasing decisions), and we can expect this capability to expand. In one partnership, AllRecipes.com and Ste. Michelle Wine Estates joined forces to offer consumers immediate wine pairing suggestions for recipes on the AllRecipes.com site. The AI tool takes a consumer’s personal tasting preferences, patterns in the recipes, and information about what wine is available at local retailers to recommend wines for dishes. Similar to how Netflix or Spotify recommends movies, songs, or artists to you, many apps and companies such as Wine Ring and WineStein use artificial intelligence to create a virtual sommelier that gets to know you well enough to offer personalised wine suggestions. There’s even a smart wine vault on the market that can track your wine inventory and give you wine recommendations.
The AI transformation of wine recommendations can also impact your wine-buying experience. The same technology that can recommend a wine to you from an app can inform your wine-purchasing experience either online or at a retail store. Perhaps in the future, you will interact with a wine sommelier robot who will help you pick out a perfect bottle.
Winemaking might be a work of art to some, but it’s fundamentally very scientific. When artificial intelligence is used to analyse data about the grapes and other properties that ultimately influence the aroma, flavour and taste of wine it can identify patterns and insights that might be undiscovered by humans. The data analysis done by artificial intelligence can help winemakers make decisions about their crop and winemaking methodologies to perfect their system.
Now that artificial intelligence has vision and natural language processing, it isn’t too far fetched to believe that AI will soon have other senses such as taste and smell. With that ability, AI will be able to provide critiques and reviews of wine. In fact, Wine Spectator is an entire publication that is written by software, and it already offers ratings and reviews of wine.
Autonomous vehicles and wine
As the driving experience evolves with self-driving cars, it is expected that our vehicles will turn into entertainment areas—when the AI system is keeping an eye on the road and navigating, you are free to do whatever you want. In this scenario, drinking and driving is no longer a concern. It’s possible that as our roadways change to accommodate more self-driving vehicles, our collective consumption of alcohol will increase.
Google has paid tribute, through a doodle, to Mary Blair, an artist of unusual style whose work made the classic Walt Disney films of the 1940s and 50s immortal and unforgettable.
Born on 21st October 1911 in Oklahoma, the very talented Blair is best known for her contributions to the animated films Alice in Wonderland, Cinderella and Peter Pan. Her work also includes illustrations for numerous children's books.
Blair brought the concepts of modern art into the world of animation, becoming an inspiration for an entire generation of illustrators. Her animations were colourful and had a childlike quality vaguely reminiscent of the popular cubist movement.
Walt Disney was also impressed by her methods and designs, so he recruited her for "It's a Small World", which debuted as the main attraction of the 1964 New York World's Fair. Currently it can be found in every Disney theme park, where it has been recreated time and again.
One of the most popular creations in which Blair was involved is the set of giant murals at Disneyland and Disney World. Blair, who died on 26 July 1978, tried to give the world the ability to view things in a different light. Google's tribute doodle features an imagined self-portrait, drawn the way she might have drawn herself, surrounded by the simple patterns and shapes that make up her own familiar cartoon world.
In just a little over a decade, smartphones have effected profound changes in our lives. They’ve given people around the world new ways to communicate, connect and consume, and, as a result, they’re widely seen as indispensable.
But, they haven’t just altered the landscape of our personal and professional lives. They’ve also helped the general public become familiar with the Internet of Things (IoT), which is the use of machines with digital sensors to gather data and an internet connection to share and analyze it for the purpose of identifying patterns and improving the performance of larger systems. As a result, they’ve smoothed the path towards the adoption of smart devices and analytics programming in other areas, including industrial facilities and utility grids.
The public square also stands to benefit from this trend, as evidenced by the growing popularity of the Smart City concept, which calls for using IoT and analytics systems to collect and act upon information relevant to the experience of urban living. This essay will examine several of the ways in which municipal authorities have tried to use technologies of this type to improve the functioning of cities over the last decade.
Safety and Security
The Smart City concept arose at a time when memories of 9/11, the largest terrorist attack ever to occur within the US, were still relatively fresh. As a result, it inevitably led government agencies at all levels – federal, state, county and municipal – to think about how technology might help keep the public safe and secure.
Discussions of the topic weren’t confined to terrorism, which has (thankfully) remained a relatively rare phenomenon in the US. They extended into the realm of crime, which is a far more common threat to public order. As a result, municipal governments started considering the question of how IoT and analytics technologies might help them improve performance on this front.
One of the first cities to take action was Santa Cruz, California, which launched a pilot program for predictive policing in 2011. Under the program, the city’s police department began using a computer algorithm to analyze crime data. This helped the department identify and map “hot spots” that would benefit from more frequent patrols. Since it updated patrol maps on a daily basis, it also allowed police officials to respond more quickly to new developments and trends.
Since the conclusion of the pilot program, Santa Cruz has adopted PredPol, a cloud-based predictive policing software package from the same researcher that designed the original algorithm. Its success has inspired other cities in California to look at PredPol and other solutions of similar type.
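As a toy illustration of the general idea (not PredPol’s actual, proprietary algorithm), the Python sketch below bins reported incidents into grid cells and ranks the “hot spots” that would receive extra patrols. The coordinates are invented.

```python
from collections import Counter

# Invented incident reports as (x, y) coordinates within a city area.
incidents = [
    (1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (4.9, 0.2),
    (1.4, 3.4), (4.8, 0.3), (2.0, 2.0), (1.2, 3.3),
]

CELL_SIZE = 0.5  # grid resolution in arbitrary map units

def cell_of(point):
    """Map a coordinate to its containing grid cell."""
    x, y = point
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

# Count incidents per cell and surface the densest cells first.
counts = Counter(cell_of(p) for p in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} incidents -> prioritize patrols here")
```

A real system would weight recency, incident type, and many other signals, and would update the map daily, as the Santa Cruz pilot did.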
Public Services and the Quality of Life
At the same time, policing isn’t the only public service to be affected by the Smart City concept.
Municipalities have also turned to technology in a bid to manage parking services. For example, Amsterdam, the capital of the Netherlands, used the Mobypark mobile phone app, which has since become the Mobypark Platform, to help users find parking spaces and pay for access on demand. The app helped reduce traffic and congestion, city officials say. Laguna Beach, CA, uses Frogparking, a similar app designed by a New-Zealand-based company, to manage city-owned parking spots. This system also allows the municipality to use data from smart meters to ticket drivers who don’t pay for parking.
Likewise, public transport is ripe for inclusion in Smart City planning. In Spain, Barcelona’s CityOS program helps commuters spend less time waiting by using information from IoT sensors to optimize bus routes. These sensors assess street traffic patterns and count the number of people waiting at bus stops, feeding data into a central system that can re-route buses as necessary. For its part, the city of Columbus, Ohio teamed up with a local utility in 2017 to promote the use of electric vehicles. The utility, American Electric Power Ohio, helped municipal authorities build new vehicle charging stations and establish systems to help drivers keep their vehicles powered up.
Similarly, the Smart City concept can help municipal authorities manage energy consumption and environmental impact. Stockholm used a publicly owned fiber-optic network to implement the Green IT program, which seeks to reduce heating costs and emissions by increasing energy efficiency. San Leandro, CA, launched the ZipPower project in late 2016 to optimize local energy resources. Within the ZipPower framework, it has taken steps to promote the use of small-scale renewable energy systems and created a software platform that helps consumers minimize electricity costs.
Some of these Smart City environmental monitoring programs even have a public health dimension. For example, Copenhagen has teamed with Google to gather information on pollution levels that can be used to generate map-style graphics. These maps, which are based on data collected from monitors installed on Google’s StreetView cars, display air quality indices for every section of the city. Residents can then use them to find the best routes for walking, jogging, bicycle riding and other outdoor activities that improve the quality of life.
Towards a More Comprehensive Approach
All of the programs described above represent an advance upon older systems. But, for the most part, they only affect one or two specific aspects of urban living (parking, traffic, crime, public transport, etc.) at a time. In other words, they have a limited ability to help their users.
Over time, this will probably change. As The Economist pointed out in its special World 2019 issue earlier this year, cities adhering to the Smart City concept are likely to look for more comprehensive solutions that integrate multiple sets of data in ways that make the urban experience more seamless overall.
Indeed, some are already moving in this direction. In the United Arab Emirates, Dubai has introduced mobile phone apps and rechargeable smart cards that allow residents to pay for a wide range of public and private services (including but not limited to traffic fines, water and power supplies and business-related transactions) and access public transportation networks. Meanwhile, Singapore uses its Smart Nation Sensor Platform to collect data on infrastructure, housing and public amenities. It then feeds the information into an analytics system designed to facilitate access to services, improve productivity and ensure public safety.
Because of these programs, Singapore and Dubai are both well down the road towards realizing the potential of the Smart City concept. Eventually, municipal governments should be able to take it a step farther, integrating the capabilities of Dubai’s payment and access solutions with the analytics features of Singapore’s platform. When and if they do, they will change the lives of city dwellers.
Written by Gregory Miller, a writer with DO Supply covering robotics, AI and automation.
Originally published August 12, 2019. Updated March 27, 2020.
Do You Believe Facial Recognition Readers Store Your Picture?
There are many misconceptions about science and technology. One of the misconceptions is that biometric IP door readers store a picture of your fingerprint or face. Another one is that the Earth is a flat disc that rests on top of four elephants, on top of a turtle*.
For clarity, biometric door readers do not store a picture of your fingerprint or face, and the Earth is not flat. This article describes the technology behind biometric door readers.
We are all sensitive to information being gathered about us. Privacy is very important. We do not like the government, or even the organizations we work for, knowing everything about us. With all the hacks going on, who wouldn’t worry about their fingerprint being stolen? However, do not worry. This is one of those misconceptions. Your fingerprint cannot be stolen from today’s biometric readers.
Instead of storing a picture of your actual fingerprint or face, the biometric reader captures only a small subset of data and then converts these minutiae points into encrypted binary data. The resulting code can only be used by a mathematical algorithm that has no physical relationship to the biometric.
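A highly simplified sketch of the idea follows: only a numeric feature vector derived from minutiae points is stored, never an image, and matching compares vectors within a tolerance rather than requiring exact equality. Real readers use far more sophisticated (and encrypted) templates; the numbers here are invented.

```python
import math

def to_template(minutiae):
    """Reduce minutiae points (x, y, ridge angle) to a flat feature vector.
    The original image is never stored and cannot be rebuilt from this."""
    return [round(v, 2) for point in minutiae for v in point]

def matches(stored, candidate, tolerance=1.5):
    """Compare templates by distance, not exact equality, since no two
    scans of the same finger are ever pixel-identical."""
    return math.dist(stored, candidate) <= tolerance

# Invented minutiae from enrollment and a later scan of the same finger.
enrolled = to_template([(10.0, 22.1, 0.31), (14.2, 8.7, 1.05), (3.3, 19.8, 2.40)])
new_scan = to_template([(10.2, 21.9, 0.33), (14.0, 8.9, 1.02), (3.5, 19.6, 2.38)])

print("Access granted" if matches(enrolled, new_scan) else "Access denied")
```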
The sensors are very accurate and extremely difficult to deceive. The readers include integrated security components that help protect them from hacking.
Special algorithms optimize the capture process. This reduces the false rejection rate. The fingerprint scanner uses subsurface multispectral imaging technology to see beneath the outermost layer of the skin (epidermis) and view the live layer of the skin (dermis) where the true fingerprint resides. This means that conditions on the outer skin’s surface (such as calluses, dryness, dirt or contaminants, moisture, or the effects of aging) do not limit the ability of the sensor to capture fingerprint minutiae data. The latest scanners include anti-spoofing technology that rejects a fake fingerprint. The reader detects the live finger rather than a rubber mold or other fake fingerprint. To learn more, take a look at our video, “How Biometric Readers Work.”
What happens if the biometric doesn’t work? The latest readers have other modes of entry besides biometrics: users can present a credential if their biometric doesn’t work. Just remember that a biometric is much more secure than an RFID credential, because the fingerprint identifies the person and not the card that they carry. You can also use dual authentication, where a person carries a credential and the system also requires that their face match the credential presented.
There are various levels of security provided by door access control systems. Door readers that just require a PIN number are the least secure; it is difficult to control PIN numbers when personnel change. For example, if a person no longer works at the facility, all the numbers may have to be changed. Door readers that use credentials such as cards are much safer, but the cards can be copied or stolen. The highest level of security is provided by door access controls that identify the person. For more about the various levels of security, take a look at our article, “Comparison of Security Provided by Door Access Systems.”
Another misconception is that network attached biometric readers are much more expensive than RFID IP readers. In reality, all this great new technology is about the same price as some of the IP RFID door readers. Biometrics is becoming the new standard for door access control. The world is not flat, and biometric readers maintain our privacy. They are safe and more secure than RFID door readers.
*Note, in the Discworld books by Terry Pratchett, the world is described as a disc, resting on the top of four elephants who, in turn, stand on the back of a giant space-faring turtle.
Contact us to learn more about the biometric door readers. We can be reached at 800-431-1658 in the USA, or at 914-944-3425 everywhere else, or use our contact form.
In Entuity, attribute values in which you can free-type also allow entry of regular expressions (regex). When defining filter rules with regular expressions, note that pattern matching is case-sensitive.
- It is expensive to use capturing groups.
(<text-to-match>) is a capturing group; (?:<text-to-match>) is a non-capturing group.
- Case insensitive matches should be avoided - these are expensive due to the need to support multiple character sets.
Therefore, do not use (?i) or /i unless it is unavoidable (and if so, use it for the smallest part of the search).
- a name that includes lon: lon
- a name that starts with lon: ^lon
- a name that starts with either lon or par: ^(?:lon|par)
- a name that ends in 1: 1$
- a name that ends in a, b or c: [abc]$
- a name that contains at least one digit: [0-9]
- a name that includes s, t, u or v: [stuv]
- a name that includes a pair of digits next to each other: [0-9][0-9]
- a name that has x as the fourth character: ^...x
- One or more special characters (metacharacters) in a name require that the character be escaped. For example, a name that includes a plus sign is matched by escaping it with a backslash: \+
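To sanity-check patterns like these, they can be exercised with Python's re module, as in the short sketch below. The device names are invented, and re.search is used because unanchored patterns match anywhere in a name.

```python
import re

names = ["lon-core-01", "par-edge-2b", "wanx-rtr-3", "nyc-sw-11", "bru+fw-1"]

patterns = {
    "includes lon": r"lon",
    "starts with lon or par": r"^(?:lon|par)",
    "ends in 1": r"1$",
    "contains a digit pair": r"[0-9][0-9]",
    "x as fourth character": r"^...x",
    "contains a plus sign": r"\+",
}

# Matching is case-sensitive, as it is in Entuity filter rules.
for label, pattern in patterns.items():
    hits = [n for n in names if re.search(pattern, n)]
    print(f"{label:24} -> {hits}")
```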
Machine Learning Interpretability Basics
Sophisticated machine learning models have a reputation for being accurate yet difficult to interpret; however, you don’t simply have to accept that. In this learning session, we explore interpretability features that help you understand not just what your model predicts, but how it arrives at its predictions.
These tools are important throughout the whole model lifecycle.
- If you’re developing a model, you can learn which features matter overall and how your model needs improvement.
- If you’re a stakeholder for a model, you can see the patterns that the model discovered and compare them against domain knowledge and business rules.
- If you’re using a model in production to help make decisions, you can learn which features were most important in individual cases, and use that as a guide for actionable next steps or interventions.
Regardless of your role, seeing how the model makes its predictions can help you understand and trust it.
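Outside of any particular platform, the same idea can be sketched with open-source tooling. The toy example below trains a model and computes permutation feature importance, a model-agnostic way to see which features matter overall; SHAP (referenced in the resources below) extends this to per-prediction explanations. The dataset and model choices here are illustrative only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an accurate but otherwise opaque model on a toy dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the test-set score drops. A big drop means the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, importance in ranked[:5]:
    print(f"{feature:25} {importance:.4f}")
```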
- XEMP Prediction Explanations with DataRobot
- How to Understand a DataRobot Model
- DataRobot Public Documentation > SHAP-Based Prediction Explanations
When it comes to securing your sensitive, personally identifiable information against criminals who can engineer countless ways to snatch it from under your nose, experts have long recommended the use of strong, complex passwords. Using long passphrases with combinations of numbers, letters, and symbols that cannot be easily guessed has been the de facto security guidance for more than 20 years. But does it stand up to scrutiny?
A short and easy-to-remember password is typically preferred by users because of convenience, especially since they average more than 27 different online accounts for which credentials are necessary. However, such a password has low entropy, making it easy for hackers to guess or brute-force.
If we factor in the consistent use of a single low-entropy password across all online accounts, despite repeated warnings, then we have a crisis on our hands—especially because remembering 27 unique, complex passwords, PIN codes, and answers to security questions is likely overwhelming for most users.
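Entropy here can be made precise: a password drawn uniformly at random from a character set has roughly length × log2(charset size) bits of entropy. The short sketch below applies that formula; note it measures the best case for truly random passwords and overestimates human-chosen ones.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Bits of entropy for a password chosen uniformly at random."""
    return length * math.log2(charset_size)

examples = [
    ("6 lowercase letters", 6, 26),
    ("8 letters + digits", 8, 62),
    ("12 letters + digits + symbols", 12, 94),
    ("20-char passphrase (printable)", 20, 94),
]

for label, length, charset in examples:
    bits = entropy_bits(length, charset)
    print(f"{label:32} ~{bits:5.1f} bits (~2^{bits:.0f} guesses to exhaust)")
```

A 6-character lowercase password comes out around 28 bits, trivially brute-forced, while a 20-character passphrase exceeds 130 bits.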
What tech developers are now pushing to replace faulty, forgettable passwords with is something that all human beings have: ourselves.
Bits of ourselves, to be exact. Dear reader, let’s talk biometrics.
Biometrics then and now
Biometrics—or the use of our unique physiological traits to identify and/or verify our identities—has been around for much longer than our computing devices. Handprints, which are found in caves that are thousands of years old, are considered one of the earliest forms of physiological biometric modality. Portuguese historian and explorer João de Barros recorded in his writings that 14th century Chinese merchants used their fingerprints to finalize transaction deals, and that Chinese parents used fingerprints and footprints to differentiate their children from one another.
Hands down, human beings are the best biometric readers—it's innate in all of us. Studying someone’s facial features, height, weight, or notable body markings, for example, is one of the most basic and earliest means of identifying unfamiliar individuals without knowing or asking for their name. Recognizing familiar faces among a sea of strangers is a form of biometrics, as is meeting new people or determining which person out of a lineup committed a certain crime.
As the population boomed, the process of telling one human being from another became much more challenging. Listing facial features and body markings was no longer enough to accurately track individual identities at the macro level. Therefore, we developed sciences (anthropometry, from which biometrics stems), systems (the Henry Classification System), and technologies to aid us in this nascent pursuit. Biometrics didn’t really become "a thing" until the 1960s, the same era as the emergence of computer systems.
Today, many biometric modalities are in place for identification, classification, education, and, yes, data protection. These include fingerprints, voice recognition, iris scanning, and facial recognition. Many of us are familiar with these modalities and use them to access our data and devices every day.
Are they the answer to the password problem? Let’s look at some of these biometrics modalities, where they are normally used, how widely adopted and accepted they are, and some of the security and privacy concerns surrounding them.
Fingerprint scanning
Fingerprint scanning is perhaps the most common, widely used, and accepted form of biometric modality. Historically, fingerprints—and in some cases, full handprints—were used as a means to denote ownership (as we’ve seen in cave paintings) and to prevent impersonation and the repudiation of contracts (as Sir William Herschel did when he was part of the Indian Civil Service in the 1850s).
Initially, only those in law enforcement could collect and use fingerprints to identify or verify individuals. Today, billions of people around the world are carrying a fingerprint scanner as part of their smartphone devices or smart payment cards.
While fingerprint scanning is convenient, easy to use, and fairly accurate (with the exception of the elderly, as skin elasticity decreases with age), it can be circumvented—and white hat hackers have proven this time and time again.
When Apple first introduced TouchID, its then-flagship feature on the 2013 iPhone 5S, the Chaos Computer Club (CCC) from Germany bypassed it a day after its reveal. A similar incident happened in 2019, when Samsung debuted the Galaxy S10. Security researchers from Tencent even demonstrated that any fingerprint-locked smartphone can be hacked, whether they’re using capacitive, optical, or ultrasonic technologies.
"We hope that this finally puts to rest the illusions people have about fingerprint biometrics," said Frank Rieger, spokesperson of the CCC, after the group defeated the TouchID. "It is plain stupid to use something that you can't change and that you leave everywhere every day as a security token."
Voice recognition
Otherwise known as speaker recognition or speech recognition, voice recognition is a biometric modality that, at base level, recognizes sound. However, in recognizing sound, this modality must also measure complex physiological components—the physical size, shape, and health of a person’s vocal cords, lips, teeth, tongue, and mouth cavity. In addition, voice recognition tracks behavioral components—the accent, pitch, tone, talking pace, and emotional state of the speaker, to name a few.
Voice recognition is used today in computer operating systems, as well as in mobile and IoT devices for command and search functionality: Siri, Alexa, and other digital assistants fit this profile. There are also software programs and apps, such as translation and transcription services, reading assistance, and educational programs designed with voice recognition, too.
Two variants of voice recognition are in use today: speaker dependent and speaker independent. Speaker-dependent voice recognition requires training on a user’s voice. It needs to become accustomed to the user’s accent and tone before recognizing what was said. This is the type used to identify and verify user identities. Banks, tax offices, and other services have bought into the notion of using voice for customers to access their sensitive financial data. The caveat here is that only one person can use this system at a time.
Speaker-independent voice recognition, on the other hand, doesn’t need training and recognizes input from multiple users. Instead, it is programmed to recognize and act on certain words and phrases. Examples of speaker-independent voice recognition technology are the aforementioned virtual assistants, such as Windows’ Cortana, and automated telephone interfaces.
But voice recognition has its downsides, too. While it has improved in accuracy by leaps and bounds over the last 10 years, there are still some issues to solve, especially for women and people of color. Like fingerprint scanning, voice recognition is also susceptible to spoofing. Additionally, it’s easy to taint the quality of a voice recording with a poor microphone or background noise that may be difficult to avoid.
To prove that using voice to authenticate for account access is an insufficient method, researchers from Salesforce broke voice authentication at Black Hat 2018 using machine learning and voice synthesis, a technology that can create life-like human voices. They also found that the synthesized voice didn’t even need to sound convincing to a human—only good enough to fool the speech APIs.
"In our case, we only focused on using text-to-speech to bypass voice authentication. So, we really do not care about the quality of our audio," said John Seymour, one of the researchers. "It could sound like garbage to a human as long as it bypasses the speech APIs."
All this, and we haven’t even talked about voice deepfakes yet. Imagine fraudsters having the ability to pose as anyone they want using artificial intelligence and a five-second recording of a victim’s voice. As applicable as voice recognition is as a technology, it's perhaps the weakest form of biometric identity verification.
Iris scanning or iris recognition
Advocates of iris scanning claim that iris images are quicker and more reliable than fingerprint scanning as a means of identification, as irises are less likely to be altered or obscured than fingerprints.
Iris scanning is usually conducted with an invisible infrared light that passes over the iris wherein unique patterns and colors are read, analyzed, and digitized for comparison to a database of stored iris templates either for identification or verification.
Unlike fingerprint scanning, which requires a finger to be pressed against a reader, iris scanning can be done both within close range and from afar, as well as standing still and on-the-move. These capabilities raise significant privacy concerns, as individuals and groups of people can be surreptitiously scanned and captured without their knowledge or consent.
There’s an element of security concern with iris scanning as well: Third parties normally store these templates, and we have no idea how iris templates—or biometric templates in general—are stored, secured, and shared. Furthermore, scanning the irises of children under 4 years old generally produces scans of inferior quality compared to those of adults.
Iris scanners, especially those that market themselves as airtight or unhackable, haven’t escaped cybercriminals' radar. In fact, such claims often fuel their motivation to prove the technology wrong. In 2019, eyeDisk, the purported "unhackable USB flash drive," was hacked by white hat hackers at PenTest Partners. After making a splash breaking Apple’s TouchID in 2013, the CCC hacked Samsung’s "ultra secure" iris scanner for the Galaxy S8 four years later.
"The security risk to the user from iris recognition is even bigger than with fingerprints as we expose our irises a lot," said Dirk Engling, a CCC spokesperson. "Under some circumstances, a high-resolution picture from the Internet is sufficient to capture an iris."
Facial recognition
This biometric modality has been all the rage over the last five years. Facial recognition systems analyze images or video of the human face by mapping its features and comparing them against a database of known faces. Facial recognition can be used to grant access to accounts and devices that are typically locked by other means, such as a PIN, password, or other form of biometric. It can be used to tag photos on social media or optimize image search results. And it’s often used in surveillance, whether to prevent retail crime or help police officers identify criminals.
As with iris scanners, a concern of security and privacy advocates is the ability of facial recognition technology to be used in combination with public (or hidden) cameras that don’t require knowledge or consent from users. Combine this with lack of federal regulation, and you once again have an example of technology that has raced far ahead of our ability to define its ethical use. Accuracy is another point of contention, and multiple studies have backed up its imprecision, especially when identifying people of color.
Private corporations, such as Apple, Google, and Facebook have developed facial recognition technology for identification and authentication purposes, while governments and law enforcement implement it in surveillance programs. However, citizens—the target of this technology—have both tentatively embraced facial recognition as a password replacement and rallied against its Big Brother application via government monitoring.
When talking about the use of facial recognition technology for government surveillance, China is perhaps the top country that comes to mind. To date, China has at least 170 million CCTV cameras—and this number is expected to almost triple by 2021.
With this biometric modality being used at universities, shopping malls, and even public toilets (to prevent people from taking too many tissues), surveys show Chinese citizens are wary of the data being collected. Meanwhile, the facial recognition industry in China has been the target of US sanctions for violations of human rights.
"AI and facial recognition technology are only growing and they can be powerful and helpful tools when used correctly, but can also cause harm with privacy and security issues," wrote Nicole Martin in Forbes. "Lawmakers will have to balance this and determine when and how facial technology will be utilized and monitor the use, or in some cases abuse, of the technology."
Behavioral biometrics
Otherwise known as behaviometrics, this modality involves the reading of measurable behavioral patterns for the purpose of recognizing or verifying a person’s identity. Unlike other biometrics mentioned in this article, which are measured in a quick, one-time scan (static biometrics), behavioral biometrics is built around continuous monitoring and verification of traits and micro-habits.
This could mean, for example, that from the time you open your banking app to the time you have finished using it, your identity has been checked and re-checked multiple times, ensuring your bank that you still are who you claim you are for the entire time. The bonus? The process is frictionless, so users don’t realize the analysis is happening in the background.
Private institutions have taken notice of behavioral biometrics—and the technology and systems behind this modality—because it offers a multitude of benefits. It can be tailored according to an organization's needs. It's efficient and can produce results in real time. And it's secure, since biometric data of this kind is difficult to steal or replicate. The data retrieved from users is also highly accurate.
Like any other biometric modality, using behavioral biometrics brings up privacy concerns. However, the data collected by a behavioral biometric application is already being collected by device or network operators, which is recognized by standard privacy laws. Another plus for privacy advocates: Behavioral data is not defined as personally identifiable, although it's being considered for regulation so that users are not targeted by advertisers.
While voice recognition (which we mentioned above), keystroke dynamics, and signature analysis all fall under the umbrella of behavioral biometrics, take note that organizations employing a behavioral biometric scheme do not necessarily use these particular modalities.
Biometrics vs. passwords
At face value, any of the biometric modalities available today might appear to be superior to passwords. After all, one could argue that it’s easy for numeric and alphanumeric passwords to be stolen or hacked. Just look at the number of corporate breaches and millions of affected users bombarded by scams, phishing campaigns, and identity theft. Meanwhile, theft of biometric data has not yet happened at this scale (to our knowledge).
While this argument may have some merit, remember that when a password is compromised, it can be easily replaced with another password, ideally one with higher entropy. However, if biometric data is stolen, it’s impossible for a person to change it. This is, perhaps, the top argument against using biometrics.
Because a number of our physiological traits can be publicly observed, recorded, scanned from afar, or readily taken as we leave them everywhere (fingerprints), it is argued that consumer-grade biometrics—without another form of authentication—are no more secure than passwords.
Not only that, but the likelihood of cybercriminals using such data to steal someone’s identity or to commit fraud will increase significantly over time. Biometric data may not (yet) open new banking accounts under your name, but it can be abused to gain access to devices and establishments that have a record of your biometric. Thanks to new “couch-to-plane” schemes several airports are beginning to adopt, stolen biometrics can now put a fraudster on a plane to any destination they wish to go.
What about DNA as passwords?
Using one’s DNA as a password is a concept that is far from far-fetched, although not widely known or used in practice. In a recent paper, authors Madhusudhan R and Shashidhara R have proposed the use of a DNA-based authentication scheme within mobile environments using a Hyper Elliptic Curve Cryptosystem (HECC), allowing for greater security in exchanging information over a radio link. This is not only practical but can also be implemented on resource-constrained mobile devices, the authors say.
This may sound good on paper, but as the idea is still purely theoretical, privacy-conscious users will likely need a lot more convincing before considering using their own DNA for verification purposes. While DNA may seem like a cool and complicated way to secure our sensitive information, much like our fingerprints, we leave DNA behind all the time. And, just as we can’t change our fingerprints, our DNA is permanent. Once stolen, we can never use it for verification again.
Furthermore, the once promising idea of handing over your DNA to be stored in a giant database in exchange for learning your family’s long-forgotten secrets seems to have lost its charm. This is due to increased awareness among users of the privacy concerns surrounding commercial DNA testing, including how the companies behind them have been known to hand over data to pharmaceutical companies, marketers, and law enforcement. Not to mention, studies have shown that such test results are inaccurate about 40 percent of the time.
With so many concerns, perhaps it’s best to leave behind the notion of using DNA as your proverbial keys to the kingdom and instead focus on improving how you create, use, and store passwords.
Passwords (for now) are here to stay
As we have seen, biometrics isn’t the end-all, be-all most of us expected. However, this doesn’t mean biometrics cannot be used to secure what you hold dear. When we do use them, they should be part of a multi-factor authentication scheme—and not a password replacement.
What does that look like in practice? For top level security that solves the issue of having to remember so many complex passwords, store your account credentials in a password manager. Create a complex, long passphrase as the master password. Then, use multi-factor authentication to verify the master password. This might involve sending a passcode to a second device or email address to be entered into the password manager. Or, if you're an organization willing to invest in biometrics, use a modality such as voice recognition to speak an authentication phrase.
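To illustrate that second factor, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which is what most authenticator apps implement; the secret shown is a placeholder.
```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC the current 30-second counter with the shared secret,
    # then dynamically truncate the digest to a short numeric code.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret + same clock => same code on both ends
```
Because both sides derive the code from a shared secret and the clock, nothing long-lived crosses the network at login time beyond the short-lived code itself.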
So, are biometrics here to stay? Definitely. But so are passwords. | <urn:uuid:187cda6d-bbd3-4e15-a65c-914b1a8313b7> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2020/04/the-passwordless-present-will-biometrics-replace-passwords-forever | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00185.warc.gz | en | 0.949814 | 3,795 | 2.984375 | 3 |
Email technology is so old it arguably should have been deprecated by now. And yet, this old dog is still number one in the hearts of both businesses and attackers.
Email mimics traditional postal mail: it serves the same essential purpose (deliver messages and parcels (files) from anyone to anyone, on time, and with notice) and lets both parties store the message. Yet the two share the same flaws: Is the sender real? Is the message dangerous? Has someone else seen the contents? This makes email simultaneously unavoidable and flawed - the perfect combination for attackers!
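As a toy illustration of the "is the sender real?" problem, the sketch below (standard-library Python; sample.eml is a hypothetical input file) flags messages whose visible From: domain differs from the envelope sender's domain. SPF, DKIM, and DMARC exist to formalize and cryptographically strengthen exactly this kind of check.
```python
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def sender_domains(raw_bytes: bytes):
    # Compare the domain the user sees (From:) with the envelope
    # sender recorded by the receiving server (Return-Path:).
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_path = parseaddr(msg.get("Return-Path", ""))
    return (from_addr.rsplit("@", 1)[-1].lower(),
            return_path.rsplit("@", 1)[-1].lower())

with open("sample.eml", "rb") as f:   # hypothetical message on disk
    visible, envelope = sender_domains(f.read())
if visible != envelope:
    print(f"Possible spoof: From={visible} but envelope sender={envelope}")
```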
The increasing sophistication of cybercrime reveals that attackers have increased activity across various threat vectors. However, email remains the number one target of cybercriminals. We know that the most prevalent cyber attacks are malware, ransomware, and phishing attacks. Almost 50% of malware originates from email. Ransomware is most often deployed through malicious spam emails and phishing emails. And according to recent phishing statistics, 96% of phishing attacks are delivered by email.
So many cybersecurity attacks start with email because cybercriminals recognize that most organizations remain unprepared due to inadequate security solutions. Many companies continue to use outdated email security technologies with low detection rates. With malicious emails easily evading the protections of legacy email technologies, threats successfully make their way into user inboxes. And without proper cybersecurity training, employees may click malicious email attachments or links, inadvertently installing malware that can steal intellectual property or sabotage systems.
Closing the Gaps in Email Security
Not all email security systems are equipped with the right technology to detect sophisticated threats, leaving companies vulnerable. However, these are not the only problems organizations face when securing their email channels. Attackers are increasingly using packers, which compress and encrypt malicious code to prevent detection. Hackers can also evade standard antivirus programs by modifying their code so that signature-based virus detection no longer recognizes it.
Attackers have also developed sophisticated malware that evades detection in virtual environments. And when solutions lack the agility to learn and flexibility to pivot, the system fails to adjust to changing algorithms. When systems are agile, they can dynamically identify new patterns and deploy new logic to prevent further attacks.
Many organizations also fail to achieve high detection rates because they lack resources and an incident response infrastructure. Without an experienced incident response team, companies don’t have the ability to properly monitor, analyze, and report all email security incidents. And without experts, there’s no one to advise the company’s decision-makers about the strategies and tools that can be leveraged to optimize security systems and prepare them for advanced threats.
To close the gaps in email security, organizations need to thoroughly evaluate the email security solutions they’re considering implementing. The solutions should be agile and flexible enough to support constant updates and catch evolving threats. To achieve high detection rates and improve incident response, the company should enhance communications between its IT team, email security partner, and end-users.
Cybercrime may be evolving, but so is cybersecurity. With email as the top threat vector for malicious attacks, companies should focus on improving their email ecosystem. At AnubisNetworks, we know how critical it is for service providers and enterprises to strengthen their email security. We’ve designed our Email Security Platform for complex organizations that need a robust security system with a high level of operationalization. It is fully capable for Fraud, Malware, and Spam detection, with added features for user control, message deliverability, and traffic routing functionalities. | <urn:uuid:9ea2ce71-705a-43a9-9d95-f72f3e5df5cf> | CC-MAIN-2022-40 | https://www.anubisnetworks.com/blog/email_the_most_common_channel | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00185.warc.gz | en | 0.927317 | 708 | 2.546875 | 3 |
There are over 8,400 project management methodologies you could choose to use for any given project. However, not every method will be the right fit for you, your team, and your project. Read on as we discuss project management methodologies and how you can use one to improve efficiencies and reduce workflow bottlenecks.
What Are Project Management Methodologies Anyway?
Project management methodologies refer to a defined set of guiding principles, processes, and methods intimately tied to how your projects unfold.
Every method has its own unique blueprint, along with factors like workflow, checks and balances, team roles, deadlines, and expectations for how you will execute projects and tasks from start to finish. Naturally, your choice of project management methodology will define every aspect of your project’s lifecycle.
Project management methodologies also help structure, standardize, and organize work methods.
With predefined templates, definitions, and guidelines in place, you can quickly get all involved parties on the same page. The fact you can identify weak points, mistakes, inefficiencies, and opportunities for improvement to tweak work methods and increase efficiency is another benefit.
Some examples of more widely known PM methods include waterfall, agile, scrum, Kanban, lean, and the critical path method–all of which we’ll go into further in this guide.
How Project Management Methodologies Work
As mentioned, every project management methodology has its own rules, principles, processes, and best practices. The methodology you implement should depend entirely on the type of project you undertake. As a project manager, you will likely use many methodologies in your career to tackle various kinds of projects.
In this section, we’ll outline some of the more popular methodologies to help you understand how each works and the kind of projects they would be best suited for.
Project Management Methodology 1: Waterfall
According to the waterfall or “traditional” methodology, all tasks and phases should be completed in a linear, sequential manner. In other words, each project stage must be completed before the next begins.
You must have a crystal clear idea of project demands before proceeding, as there’s no scope for corrections once the project is underway.
Waterfall is divided into discrete stages:
- First Stage: Collecting and analyzing requirements
- Second Stage: Designing the solution and your approach
- Third Stage: Implementing the solution and fixing issues
This methodology is heavily requirements-focused, where every process is self-contained. It’s best for software development, with short and simple projects that have clear and fixed requirements and documentation.
Project Management Methodology 2: Agile
Agile project management is a more dynamic methodology that’s far more adaptive and accommodating toward changes taking place throughout the project instead of following a linear approach.
Put simply, agile is the opposite of the waterfall method. It has no top-heavy requirements-gathering and is instead iterative with small incremental changes that allow a team to respond to changing requirements through frequent testing, reassessment, and adaptation.
Interestingly, the concept of agile management has paved the way for other methodology frameworks, such as lean, scrum, and Kanban. All of these methods are quick, collaborative, and open to data-driven change.
Thanks to its flexibility, the agile approach is very versatile and can be used for diverse projects. It works particularly well for projects where you only have a general idea of a product and hence, need to accommodate quick changes, updates, and adjustments.
We also recommend agile for project management teams with average project planning skills.
Project Management Methodology 3: Waterfall-Agile Hybrid
As the name suggests, this hybrid approach is a combination of the waterfall and agile methodologies. It uses the best of waterfall and agile to create a unique method that’s flexible yet structured, making it more efficient and versatile.
Under this methodology, you start by gathering and analyzing requirements (waterfall), after which you adopt a more flexible approach with an emphasis on rapid iterations (agile). This way, you get the best of both worlds.
You may have to compromise on requirements and flexibility since you are trying to reconcile two polar opposite approaches.
Keeping this in mind, we would recommend the hybrid methodology for projects that have middling requirements, i.e., projects that require structure and flexibility. This would include medium-sized projects with moderately high complexity and fixed budgets, where you likely have an idea of the end product but are still open to experimentation.
Project Management Methodology 4: Scrum
Scrum is a form of agile project management that features heavily in software development. While the methodology may borrow agile principles and processes, it has its own methods and tactics for project management.
Under scrum, all work is split into short cycles (called sprints) that usually last about 1-2 weeks. For every sprint, work is taken from the backlog.
Scrum places the project team front and center of the project and does away with a project manager. Instead, teams are expected to be self-managing and self-organizing and are led by a scrum master for the duration of each sprint.
Performances are reviewed in a “sprint retrospective” at the end of every sprint, and changes are issued and implemented before the next sprint starts.
The scrum method is best suited for highly focused and skilled teams who can set their own priorities and understand project requirements.
Project Management Methodology 5: Kanban
Kanban is another agile framework that focuses on early releases with collaborative and self-managing teams—just like scrum.
It’s a visual project management methodology that strives to deliver high-quality outcomes by painting a picture of the workflow process to identify bottlenecks early on in the development process.
All tasks are visually represented as they progress through different columns on a Kanban board, where every column represents a stage of the process. Work is continuously pulled from a predefined backlog based on the team’s capacity and progresses through the columns on the board.
Kanban is great for giving everyone an instant visual overview of where each work item stands at any given time. Work-in-progress limits restrict the number of tasks in play, meaning you can only have a certain number of tasks in each column—or on the board overall.
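A WIP limit is simple enough to express in a few lines. The toy sketch below (hypothetical board and limits) refuses any move that would overfill a column, which is the whole mechanism in miniature:
```python
board = {"todo": ["a", "b", "c"], "doing": [], "done": []}
wip_limits = {"doing": 2}  # only work-in-progress columns are capped

def move(card, src, dst):
    limit = wip_limits.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        raise RuntimeError(f"WIP limit reached in '{dst}': finish something first")
    board[src].remove(card)
    board[dst].append(card)

move("a", "todo", "doing")
move("b", "todo", "doing")
# move("c", "todo", "doing")  # would raise: 'doing' is already at its limit
```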
Kanban works best for smaller teams and even to boost personal productivity. It isn’t the most appropriate option for large and complex projects with multiple stages and milestones.
Project Management Methodology 6: Lean
The lean methodology promotes maximizing customer value while simultaneously minimizing waste—all aimed to create more value for the customer by using fewer resources.
Originally, this waste minimization referred to reducing physical waste in the manufacturing process. But now, it also targets other wasteful practices in the project management process known as the 3Ms:
- Muda (wastefulness) – When there’s any consumption of resources that don’t create any value for the customer.
- Mura (unevenness) – When you have overproduction in a particular area that results in chaos in other areas of your workflow, leaving you with too much inventory or inefficient processes.
- Muri (overburden) – When there’s too much strain on resources (people, equipment), resulting in breakdowns in either machines or humans (machine breaks or the human is overworked).
Lean seeks to change the traditional way workers operate by making them more value-focused. It shifts the focus from optimizing individual technologies, assets, and vertical departments to optimizing project flow through entire value streams that flow horizontally across assets, technologies, and departments.
This project management methodology is excellent for reviewing the project delivery process, helping cut out waste, and optimizing project flow. While it adds value for the customer, it also helps lower overall costs.
Project Management Methodology 7: Critical Path Method (CPM)
Under the CPM methodology, you categorize all activities needed to complete the project within a work breakdown structure. You can then map the projected duration of every activity, as well as any dependencies between them.
Following CPM will help you map out activities that can be completed simultaneously and activities that must be completed before others can start. You can then use the information to determine which path would enable you to finish the project with the least slack.
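To make that concrete, here is a toy earliest-finish calculation over a hypothetical five-task project; the critical path is whichever chain of dependencies produces the largest finish time:
```python
# task -> (duration_in_days, prerequisites); all values are hypothetical
tasks = {
    "design":  (3, []),
    "build":   (5, ["design"]),
    "test":    (2, ["build"]),
    "docs":    (2, ["design"]),
    "release": (1, ["test", "docs"]),
}

finish = {}
def earliest_finish(name):
    if name not in finish:
        duration, deps = tasks[name]
        finish[name] = duration + max((earliest_finish(d) for d in deps), default=0)
    return finish[name]

print(max(earliest_finish(t) for t in tasks))  # 11: design -> build -> test -> release
```
Tasks off that longest chain (here, "docs") have slack and can slip without delaying the project.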
This method is more suitable for non-software projects with interdependent parts. If you want some tasks to be completed simultaneously but others to end before others can begin, CPM would be a good fit.
But while CPM can work wonders for industries with complex but repetitive activities, it’s less suited for dynamic areas such as creative project management.
The Best Tools For Project Management
Once you select a project management methodology for your needs, you will want to implement a tool that works best for that specific methodology.
Nira has reviewed many project management tools and researched to find the best ones for various needs. In our article on the best project management tools, you’ll find tools for multiple methodologies and why we like them.
Trello is an affordable option for the kanban and agile methodologies due to its intuitive interface and visual use of boards to track tasks and project progress. If you are using agile for a larger or more complex project, Trello may not be the best option, as it is more lightweight and not meant for highly complicated projects with many stages.
Wrike works very well using the waterfall methodology, using Gantt charts to track progress. It is also very scalable and has a ton of customization options.
Smartsheet is also suitable for waterfall, with its multiple views and included waterfall templates. It is also adaptable enough to be great for lean methodology.
Jira and Asana are robust options for agile methodology, with Jira being a leader in the space. We’ve even put together an article on how to integrate the two.
As for critical path management, Wrike and Jira are both versatile tools that can work here, too.
How to Choose the Best Methodology for Your Projects
Let’s discuss how you can choose the most appropriate project management methodology to ensure your project’s success.
Step 1: Evaluate the Project Thoroughly
Before choosing a project method methodology, you should know precisely what the final deliverable should be like and what you’ll have to do to get those results.
If you have a clear idea about the end result, opt for a structured methodology like waterfall. However, if the end result is vague, choose an iterative methodology like agile.
In addition to the final deliverable, you should also consider other factors like project budget, timeline, type and industry, size and complexity, and stakeholder expectations.
Try to gather your initial requirements for the project. If you find the requirements indicate you need a large and diverse team, pick a project management method that‘s flexible and vice versa.
Step 2: Consider Your Team Members
The whole point of selecting a project management methodology is to have a blueprint for your project that will tell your team what to create and when to create it. But before any of this happens, your team members should be able to read the blueprint.
Learning any methodology involves time—something your team members might be resistant to—resulting in delays. So if your team isn’t familiar with the method, you won’t see any results, or at the very least, you’ll struggle to see anything positive.
In addition to your team’s expertise, you should also consider its composition.
If your team thrives on collaboration, a flexible, iterative approach like agile would be a better fit. However, if your team is highly motivated and self-managing, adopting scrum can work well. Similarly, the CPM methodology would be a better choice if you have limited resources.
Choose a methodology that fits with your team instead of pushing your team to fit the method.
Step 3: Analyze Your Organizational Structure
When choosing a project management methodology, consider your company structure, culture, available resources, operating industry, and past records.
If your past records show all your scrum projects have been delayed and poorly received, it would make no sense to implement the method again.
The same logic applies to your company’s organizational structure. While some project management methodologies work well with large organizations having established hierarchies, others are more appropriate for smaller and leaner firms.
Step 4: Consider Stakeholder Requirements
Stakeholders are crucial for a project’s success, which is why you should always consider their requirements to ensure the success of all your projects.
First, you have to factor in stakeholder involvement. Some methodologies require stakeholders to be regularly involved at every stage of the project. For example, with agile, stakeholders must be regularly available for feedback. Therefore, if your stakeholders are busy people, choose a methodology that requires lower stakeholder involvement.
Next, you have to factor in stakeholder requirements. Dive deep into the different aspects of your stakeholders’ requirements. How do they work? What expectations do they have from the project manager? Do the stakeholders frequently change project scope?
If your stakeholders want daily updates, pick a methodology that can accommodate this demand. If you have indecisive stakeholders, choose something more flexible.
Step 5: Review All Your Tools
Project management tools are usually designed to work with a specific methodology. Therefore, your existing software tools play a huge role in influencing your choice of method.
Make a list of all tools you currently use. Follow this by listing the capabilities and limitations of each tool, and then compare your list against the requirements for a prospective project management methodology. Naturally, you want a methodology that works well with your existing toolsets and systems.
If you have the budget to buy new tools, that’s great. But remember, you will lose critical time in retraining your team.
We hope the above guide helps you pick the best PM methodology fit for your project, team, organization, toolset, and stakeholders. You’ll see an immediate change with your projects running faster, smoother, and more efficiently. | <urn:uuid:ab13ad5f-13d1-4d8c-a8e6-83b82b05b858> | CC-MAIN-2022-40 | https://nira.com/project-management-methodologies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00386.warc.gz | en | 0.929261 | 2,943 | 2.90625 | 3 |
Juniper Zones Explained
Usually hovering around 10% of the router market share, Juniper Networks might not have a global stranglehold on networking products, but they're also not negligible. For top-of-the-line speed, throughput and open architecture, Juniper outperforms their competition — including Cisco, who holds a larger, broader market share.
Juniper networks are particular, though, from their hardware to their security. Juniper's best approaches to security include the Juniper Networks SRX Series Gateways — high-performance enterprise security, routing and networking devices. A key to understanding how SRX gateways keep the right people in and the wrong people out is understanding Juniper Zones.
What is a Juniper Security Zone?
Quick Definition: Network traffic has to have a place to enter and exit on network devices — those are interfaces. In Juniper networks, a security zone is what you get when interfaces get bundled together and given the same regulation requirements. A zone is a group of interfaces with similar security needs.
An Overview of Juniper Zones [VIDEO]
In this video, Scott Morris covers Juniper zone concepts and how they interact with the security gateway in JUNOS (SRX). Simply put, a zone is a group of interfaces with similar security needs, and policies are going to handle how traffic is controlled between each of these zones.
How Do Juniper Security Zones Work?
In this post, we're going to explore Juniper security zones and how they're defined and operate. You should think of this as a preliminary introduction to the idea of Junos zones, and not a definitive tutorial on managing and configuring Junos security. For more in-depth training in Juniper configuring, browse CBT Nuggets' training for Juniper Networks.
The easiest way to imagine zones is to conceptualize a hypothetical network diagram. We have the wide, open internet over on one side of our network, and separating our network from the hazards and dangers of the web is a Juniper SRX device. Through four separate interfaces, the SRX is connected to four EX4200s, Juniper Ethernet switches.
Those four EX4200s each belong to a zone. Because the interfaces inside each zone receive identical security regulations, it's pretty common to separate zones according to functional departments. So in our case, we'll have an IT Zone for our IT staff and department, a Data Center Zone, Engineering Zone, and a Human Resources Zone. Depending on the size of the organization, we might have many more other zones, too.
In our network diagram, the internet is separated from our network, but the most accurate way to conceptualize the internet in this case would be to call that the Internet Zone. All the traffic in the Internet Zone is separated from our internal zones by routers and switches. It's the job of the network engineer to set up the interfaces to belong to one zone or another.
What is the Null Zone in Juniper Network Security?
In Juniper Network Security, there is a Null Zone, or "blank" zone. The Null Zone exists by default, and all interfaces that aren't assigned anywhere else belong to the Null Zone. What's essentially happening in the Null Zone is that all traffic going into and out of that zone is getting dropped. The default behavior of the Null Zone is that it doesn't go anywhere.
Once you know this about the Null Zone, it's a standard security practice. But if you're not familiar with this default behavior, it can be confusing and annoying. Because out of the box, if you were to plug in an SRX device, you'd find that network traffic isn't routing. The traffic doesn't get where you want it to, and you likely wouldn't know why. On Juniper devices, interfaces are in the null zone by default, where traffic won't be passed until the interface is assigned to a zone.
There are some minor exceptions to this. Management interfaces like fxp0 and em0 don't technically start in the null zone. Nevertheless, fxp0 and em0 interfaces — acting like standard management interfaces — allow network access to each node in the cluster, and still need to be configured before accessing them.
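Getting traffic flowing, then, means binding each interface to a named zone and writing policies between zones. A minimal set-style sketch on an SRX might look like the following (zone and interface names are hypothetical):
```
set security zones security-zone HR interfaces ge-0/0/1.0
set security zones security-zone Internet interfaces ge-0/0/0.0
set security policies from-zone HR to-zone Internet policy allow-web match source-address any
set security policies from-zone HR to-zone Internet policy allow-web match destination-address any
set security policies from-zone HR to-zone Internet policy allow-web match application junos-http
set security policies from-zone HR to-zone Internet policy allow-web then permit
```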
Juniper Security Zone Rules vs. Juniper Security Zone Policies
Something to remember about zones, and the reason that management interfaces like fxp0 and em0 don't need to be explicitly attached to a zone, is that zones define rules of transit. That is to say, zones regulate packets coming into or going out of the router itself. Transit and egress of packets from and between users is what zones focus on.
And there are rules about zone behavior too. But by that we don't mean policies. Zone policies are the rules for the handling of traffic. But zone rules are how zones behave, what they're capable of, and most importantly, what they're not capable of.
Obviously, zones exist to have logical interfaces assigned to them. A logical interface may be assigned into a zone, but it can't be assigned to more than one zone. Similarly, a logical interface can be assigned to a routing instance. But you can't assign the same logical interface to more than one routing instance. In both cases, zones and routing instances, it's got to be one thing or another.
To think back to our network diagram, we couldn't have a single interface be both in HR and Engineering. We might set up access controls that allow for highly similar features between the HR interface and Engineering interface, but we can't have one interface belong to multiple zones.
Notice that we're referring to the point on the device that does the routing as the "logical interface". If you're not entirely familiar with routing in the Junos world, that term might throw you off. But you might have heard it referred to elsewhere, in other routing schema, as a unit. Other vendors also refer to the same concept as a "sub-interface". Really, whatever you want to call it is fine.
But it's important to understand that it's the logical portion that must belong to one and only one zone or routing instance. The nature of abstracting the interface and converting it to a logical interface for the SRX to interact with means that it can't be subdivided to other zones and interfaces.
Can Logical Interfaces in the Same Zone Be Assigned to Different Routing Instances?
No, a logical interface cannot be assigned to a different routing instance than the other interfaces in the same zone. The reason for this has to do with the logical abstraction of the interfaces.
In other words, we can't have some places in HR belong to one routing instance and other places in HR belong to another one. Remember that a zone is a group of interfaces receiving the same security measures and transit policies. Splitting a zone up in that way wouldn't just be counter-intuitive, it would make for a freakishly confusing situation on the SRX itself.
The bottom line when it comes to zone behavior is that if you want to assign different interfaces in the same zone to different routing instances, you'll need to set up multiple zones. Not only that, but an incredibly important part of planning and creating your zones is understanding and taking into account how your interfaces are arranged. All of the logical interfaces in one particular zone must belong to the same routing instance.
To that effect, many engineers keep the demarcation very simple. With other vendors, you might see the simple differentiation of "Inside" and "Outside". In the Junos world, that isn't quite how it works out of the box: branch devices ship with some zones already baked in, typically a "Trust" and an "Untrust".
The common wisdom in the IT world is that Juniper Networks' relatively small market share tends to mean that the IT professionals who are trained and familiar with the devices are especially valuable. It's a matter of low supply and considerable demand.
Understanding Juniper zones, how they route, and the rules that govern them is core to using and securing Juniper networks. If you're interested in starting a career in Junos networking, consider CBT Nuggets' associate-level JNCIA-Junos (JN0-102) training. | <urn:uuid:2068278f-fdbb-42c3-bede-ef0824f9ff13> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/networking/juniper-zones-explained | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00386.warc.gz | en | 0.946188 | 1,705 | 2.734375 | 3 |
“40 per cent of jobs will be lost to AI”. That was the sort of headline inadvertently prompted by a report by Oxford University academics on ‘The Future of Employment’ which examined the coming impact of Artificial Intelligence on the workplace.
Whilst the researchers later clarified that the reality of AI’s growth would be slightly less apocalyptic, the consensus remains that more than a third of jobs will be at risk of significant automation by the mid-2030s. Regardless of the exact numbers affected, we may currently be living through a period which will enter the history books as the start of the fourth industrial revolution. What will this mean for the role of humans in decision-making in the workplace?
- Artificial intelligence: a job creator or job destroyer? (opens in new tab)
The transition to new manufacturing processes in the first industrial revolution saw machines replace muscle, taking over the heavy lifting previously done by human - or animal - power. The second and third revolutions saw electrically-powered flow production and computerisation transform the workplace respectively. Some people believe we are in the midst of a fourth industrial revolution – one defined by replacing brain power with machines.
That may prompt a bit of existential angst, but let us focus in this discussion on the impacts for knowledge workers.
A family doctor or general practitioner is an exemplar knowledge worker, building on long theoretical training with daily cases adding to their bank of experience and insights. However, they are limited in what they can know by their training, patient interaction, and outside study.
“Thinking” machines on the other hand have an outsize advantage. They can absorb data from thousands, even millions, of case notes without flagging. Moreover, those case notes can track a patient well outside of the normal scope of a family doctor, taking in the outcomes of downstream diagnosis and treatment by specialists. Like humans, AI can develop biases but, unlike many of us, AI algorithms will actually change their predictions as the information changes.
The transition to automation is already well underway. In the medical field, Google Health’s Streams app is exceeding – in certain contexts – experienced consultants’ ability to assess multiple streams of data and make effective predictions. It can of course do this tirelessly, round-the-clock.
AI is undoubtedly encroaching on many jobs. Professional services firms, for example, are worried as they see AI pick up the fee-generating tasks historically carried out by the base of their pyramid. However, AI is simultaneously opening up many new opportunities for human experts.
The advantage of appropriate AI
Businesses implementing AI well are already benefitting in areas that span the entire value chain. From improved customer experience and reduced agent costs, to more-insightful analysis that can support management decisions, the impact that AI delivers to businesses today can be game-changing. Those late to automation will lose out against their competition.
Consider the example of a mobile network customer. An unrealistic usage value can now be spotted and diverted by AI for human expert review, which means that the customer isn’t landed with an outrageous bill. Not only will this save the time of the service agent, it will also avert the bad press from a negative Twitter complaint. In this case, AI can help maintain a satisfied customer, and prevent resources having to be diverted on rectifying downstream errors.
Management teams will increasingly rely on AI to do the analytical heavy lifting. Exception reporting won’t need to be driven by thresholds set by management. Instead, AI can learn to recognise what is considered ‘normal’ and ‘good’, and flag early warning signs when something diverges from the norm. By using AI in this way, management teams will be able to see what the road ahead looks like and make predictions and corrections accordingly.
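A crude stand-in for that kind of learned exception reporting is a rolling z-score: flag any observation that sits several standard deviations from its recent history. This is a deliberately simplified sketch; production systems use far richer models.
```python
import statistics

def flag_anomalies(series, window=30, threshold=3.0):
    # Flag index i when the value deviates from the trailing window's mean
    # by more than `threshold` standard deviations.
    flags = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags
```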
- The AI jobs boom (opens in new tab)
Complementing not compromising
AI excels at performing tightly-constrained, bounded, tasks. However, it often falters on more complex activities that humans find straightforward. When faced with complications, the human brain can reason outside the parameters of a given situation.
One reason driverless vehicles haven’t been rolled out extensively is the need for drivers to synthesise multiple inputs and respond appropriately to uncommon situations. To succeed in this area, machines would need to be able to react to the variables around them in the same way a human driver would e.g. “the person currently standing safely on the pavement looks distracted and is about to step into the road”.
A typical corporate example would be a business that is moving into a new market or product area. Past performance of established business units may be a guide, but a predictive algorithm built on it will be a far from infallible guide to future success in the new ventures.
Overseeing such challenges means that the need for human intelligence won’t disappear anytime soon. In fact, the demand for human expertise should actually grow as companies utilising AI free up more time to focus on the “value-add” activities. As time goes on and technology takes up more and more of the grunt work of analysing large data sets and providing guidance, more opportunities will arise for human input to synthesise these predictions and define the next actions that support the firm’s strategic and operational goals.
A ‘gotcha’ for all of this is that algorithms will reflect biases within the given data sets. For example, there is evidence that algorithms used during recruitment processes have treated applicants less favourably based on their gender or ethnicity because of historical decisions taken by recruiters and line managers. Flaws like this are a reminder that predictions from an algorithm should be taken as guidance, not fact, and sometimes could be simply wrong.
The role of AI in the workplace, therefore, will largely complement and augment the human decision-making process. Managers don’t need to waste time sifting through available analyses; instead, they can take advantage of AI-generated insights, with algorithms flagging potential issues and suggesting follow up action. As such, AI is not about relying solely on the outcome of algorithms to make decisions, but rather using it to guide us towards greater innovation while protecting us from costly mistakes. Whilst AI may not always get it right, businesses that use it properly will certainly reap the rewards.
- Employees in the age of robots and AI: More relevant, not less (opens in new tab)
Xavier Fernandes, Analytics Director, Metapraxis (opens in new tab) | <urn:uuid:b776f4fb-984f-41a3-903a-3f1aae433854> | CC-MAIN-2022-40 | https://www.itproportal.com/features/ai-and-human-decision-making-is-ai-set-to-take-over-the-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00386.warc.gz | en | 0.9438 | 1,346 | 2.921875 | 3 |
Traditionally, the majority of businesses used only dedicated server hardware. Data centres used to be filled with a number of server-racks and each one of these servers had a different dedicated application and protection scheme. If there was a need for another application, then a new server was added to the system and deployed. However, today server virtualization is being adopted rapidly. This is because of:
Application Isolation: Conventionally, data centres allotted one dedicated server per application. However, this caused an extreme waste of resources, as most servers were underutilized and workloads were minimal. With server virtualization, you get to make the most out of your existing servers.
Increased Uptime: With server virtualization, the chances of data breaches, cloud backup failures, and similar incidents are minimized. In the process, IT standards compliance mandates are met.
Run New Applications at Low Costs: Hardware and Operating Systems quickly get outdated, as newer versions are launched. It is not easy for most businesses to replace their entire IT infrastructure with new ones. This is where virtualization can be helpful. It can give you access to the most up-to-date hardware and OS at affordable costs.
Server Virtualization for Backups and Disaster Recovery
For any business, maximum uptime is extremely important. Thus, when you are running a backup job, you would rather not pause the running applications. Your systems might not be advanced enough, or your server might not have sufficient memory, to handle multiple complex processes at once. With server virtualization, however, you can run a great number of applications simultaneously, even while backup jobs are in progress.
Here are a few ways you can benefit from server virtualization for creating and managing backups:
Faster and Easier Full-Server Backups: Some vendors use modern VM methods that let you create full backups much faster than traditional approaches. For instance, with block-level differentials, users don't have to back up the full image every single time, but rather only the blocks that have changed (whether modified or added data); a short sketch of the idea follows this list.
Hardware Independent Data Replications: Before virtualization was popular, data replication was quite expensive and tedious, as it used mirrored SAN hardware or server-by-server reconfiguration. However, with virtualization you can perform replication at the hypervisor level, thus making hardware dependence irrelevant.
Better Data Protection: Virtualized servers are in essence physical computers that serve as containers for virtual machines. In other words, they are software modules that appear physical on the surface. In reality, however, they are separated and abstracted from the core hardware layer. So, with virtualized servers, you don’t have to deal with complex things, such as boot sectors, system states, etc. and protecting your data becomes much faster and easier.
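The block-level differential idea mentioned above can be sketched in a few lines: hash fixed-size blocks and ship only those whose hashes changed since the previous run. This is a toy illustration of the concept, not any vendor's actual implementation.
```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; an arbitrary choice for illustration

def changed_blocks(path, previous_hashes):
    # Return (changed, hashes): the blocks to back up this run, plus the
    # new hash list to store for the next differential run.
    changed, hashes = [], []
    with open(path, "rb") as f:
        for i, chunk in enumerate(iter(lambda: f.read(BLOCK_SIZE), b"")):
            digest = hashlib.sha256(chunk).hexdigest()
            hashes.append(digest)
            if i >= len(previous_hashes) or previous_hashes[i] != digest:
                changed.append((i, chunk))
    return changed, hashes
```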
Today, most organizations run VMs in their computing environment. Running VMs is important and advantageous: it can optimize your IT environment, let you customize it on the fly, and give you robust data recovery and cloud backup options. It can also lower your operational (hardware, staffing, infrastructure, power, cooling, and licensing) expenses significantly.
Optimizing your cloud backup and recovery in a virtualized environment doesn't happen overnight; it needs deep thought and planning, as servers can at times fail when virtualized. Therefore, you will need to assess your hardware status before running VMs layered with cloud backup and recovery software.
If you are not sure where to start on your VM and cloud backup & recovery needs, contact Data Deposit Box. | <urn:uuid:8d652d82-5099-4deb-a69a-77d79c615bff> | CC-MAIN-2022-40 | https://datadepositbox.com/how-server-virtualization-benefits-%E2%80%8B%E2%80%8Bcloud-backup-and-disaster-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00386.warc.gz | en | 0.946817 | 729 | 2.6875 | 3 |
A cyber security vulnerability generally refers to a flaw in software code that allows an attacker access to a network or system. Vulnerabilities leave businesses and individuals open to a range of threats including malware and account takeovers.
There is a huge range of possible vulnerabilities and potential consequences to their exploits. The US government’s National Vulnerability Database (NVD) which is fed by the Common Vulnerabilities and Exposures (CVE) list currently has over 176,000 entries. One well-known example of a cybersecurity vulnerability is the CVE-2017-0144 Windows weakness that opened the door for WannaCry ransomware attacks via the EternalBlue exploit. Another infamous case is the Mirai botnet that spread through the exploitation of multiple flaws.
Once vulnerabilities are discovered, developers typically work fast to release an update, or “patch.” Ideally, all users install the update before attackers have a chance to exploit the vulnerability. But the reality is that in many cases, attackers strike quickly to take advantage of a known weakness. Plus, even when a patch is released, slow implementation of updates means that attackers can exploit vulnerabilities years after they have been discovered.
In this post, we’ve rounded up the top cybersecurity vulnerability statistics and facts to be aware of in 2022.
1. Over 8,000 vulnerabilities were published in Q1 of 2022
The NVD database holds 8,051 vulnerabilities published in Q1 of 2022. This is about a 25 percent increase from the same period the year prior. If these numbers hold, this would mark a slight year-on-year increase since there were around 22,000 published in 2021.
2. Around one in ten internet-facing web application vulnerabilities are considered high or critical risk
Edgescan's 2022 Vulnerability Statistics Report analyzed the severity of web application vulnerabilities. It found that almost one in ten vulnerabilities in internet-facing applications are considered high or critical risk. This rose to 15 percent where the target normally processed online payments.
3. Organizations with more than 100 staff see more high or critical-risk vulnerabilities
Edgescan's 2021 report broke down the severity of vulnerabilities according to company size. Smaller companies with 100 employees or fewer saw the lowest proportion of medium, high, or critical-risk vulnerabilities (five percent in total). Companies with 10,000+ employees saw the largest proportion of medium and critical-risk vulnerabilities, while mid-sized organizations with 101–1,000 employees saw the largest proportion of high-risk vulnerabilities.
4. The mean time to remediation (MTTR) is around 58 days
According to Edgescan, the average time taken to remediate internet-facing vulnerabilities was 57.5 days. That is a slight improvement over the year prior when the MTTR was 60.3 days.
This varies from one industry to another, though. Public administration, for instance, had an MTTR of 92 days, whereas healthcare organizations had an MTTR of just 44 days. The data also shows that the smaller an affected organization is, the more quickly it tends to remediate.
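For clarity, MTTR here is simply the mean of (remediation date minus discovery date) across closed findings. A toy Python calculation, using made-up dates rather than Edgescan's data, looks like this:

from datetime import date

# Hypothetical (discovered, remediated) pairs for closed findings.
findings = [
    (date(2022, 1, 3), date(2022, 2, 20)),
    (date(2022, 1, 10), date(2022, 3, 1)),
    (date(2022, 2, 1), date(2022, 3, 15)),
]

days_open = [(fixed - found).days for found, fixed in findings]
mttr = sum(days_open) / len(days_open)
print(f"MTTR: {mttr:.1f} days")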
5. The most severe vulnerability of 2021 was CVE-2021-44228
CVE-2021-44228, better known as Log4Shell, is a vulnerability impacting Log4j, an open-source logging library used in thousands of projects, applications, and websites. This vulnerability allowed attackers to run arbitrary code on any affected system, and while it was swiftly patched, it is extremely likely that a high number of vulnerable applications remain online.
6. The oldest vulnerability discovered in 2020 was 21 years old
Interestingly, Edgescan found a vulnerability that has been around since 1999: CVE-1999-0517. This affects Simple Network Management Protocol version 2 (SNMPv2), which is used for managing devices and computers on an IP network. The vulnerability can allow unauthorized SNMP access via a guessed community string, and it has a base Common Vulnerability Scoring System (CVSS) score of 7.5, making it a high-severity weakness.
7. The first critical vulnerabilities in a major cloud infrastructure were found in January 2020
In early 2020, Check Point researchers discovered and reported critical vulnerabilities in the Microsoft Azure infrastructure. According to the Check Point article detailing the vulnerability, researchers "wanted to disprove the assumption that cloud infrastructures are secure." The vulnerabilities received the highest possible CVSS score of 10.0; scores of 9.0–10.0 carry the qualitative severity ranking of "critical."
These vulnerabilities enabled malicious actors to compromise the apps and data of users running on the same hardware.
8. Roughly 11% of vulnerabilities have a critical score
According to CVE Details, out of roughly 176,000 vulnerabilities, more than 19,000 (around 11 percent) have a CVSS score of 9.0–10.0. That said, the vast majority (77.5 percent) have a score between 4.0 and 8.0.
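Figures like these rest on the standard CVSS v3 qualitative bands, which map numeric base scores to severity labels. A minimal Python helper makes the mapping explicit (the sample scores are invented for illustration):

def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Bucket a sample of scores into severity bands.
counts = {}
for s in [9.8, 7.5, 5.4, 10.0, 4.3, 8.1, 2.6]:
    label = cvss_severity(s)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'Critical': 2, 'High': 2, 'Medium': 2, 'Low': 1}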
9. 75% of attacks in 2020 used vulnerabilities that were at least two years old
According to the Check Point Cyber Security Report 2021, three out of four attacks took advantage of flaws that were reported in 2017 or earlier. And 18 percent of attacks utilized vulnerabilities that were disclosed in 2013 or before, making them at least seven years old.
10. Citrix remote access vulnerability attacks increased 2,066% in 2020
According to Check Point, the number of attacks exploiting vulnerabilities in remote access products increased substantially in 2020. Citrix attack numbers increased more than 20-fold, while Cisco, VPN, and RDP attacks increased by 41%, 610%, and 85%, respectively.
11. 31% of companies detected attempts to exploit software vulnerabilities
A 2020 report from Positive Technologies tells us that almost one-third of detected threats involve software exploit attempts. According to the report:
“More than half of attempts involved vulnerability CVE-2017-0144 in the implementation of the SMBv1 protocol. This is the same vulnerability leveraged by the infamous WannaCry ransomware, and for which a patch was released back in 2017. But attackers have kept it in their arsenals as they search for computers that have not been updated in the last 3.5 years.”
12. High-risk vulnerabilities are present on the network perimeters of 84% of companies
Another study from Positive Technologies uncovered the alarming statistic that 84 percent of companies have high-risk vulnerabilities on their external networks. It also found that more than half of these could be removed simply by installing updates.
13. More than one in four companies are still vulnerable to WannaCry
Positive Technologies also found that 26 percent of companies remain vulnerable to the WannaCry ransomware, as they have not yet patched the vulnerability it exploits. That's particularly concerning given that WannaCry attacks spiked in Q1 of 2021.
14. XSS remains a huge threat
HackerOne research found that cross-site scripting (XSS) weaknesses were the most common type of vulnerability in 2020, accounting for 23 percent of all reports. Rounding out the top three weakness types were information disclosure (18 percent) and improper access control (10 percent).
15. The most profitable industry for bounty hunters is computer software
When it comes to which industries earn the most for bounty hunters, computer software weaknesses are the highest earners by a significant margin: the average bounty payout for a critical vulnerability is around $5,754. The electronics and semiconductor industry pays $4,633 per critical vulnerability, and the cryptocurrency and blockchain field pays about $4,481.
16. “80% of public exploits are published before the CVEs are published”
A report published by Palo Alto Networks in August 2020 found that 80 percent of studied exploits were made public before their related CVEs had even been published. Perhaps more concerning is the length of time that passes between publish dates. On average, exploits are published 23 days before their respective CVEs. As noted in the report:
“As a result, there is a good chance that an exploit is already available when the CVE is officially published – illustrating one more way that attackers are too often a step ahead of security professionals.”
17. More than 28,500 WordPress vulnerabilities have been detected over the past 8 years
The number of new vulnerabilities has been increasing steadily since WPScan first started tracking in 2014. More than 3,000 new vulnerabilities were discovered in 2021, and in the first quarter of 2022, we’ve already seen an additional 700.
18. In Q4 2021, zero-day malware accounted for two-thirds of all malware
WatchGuard's Internet Security Report – Q4 2021 tells us that from October to December of 2021, zero-day malware (malware that evades signature-based detection) accounted for two-thirds of all malware threats. This was a marginal decrease from the previous quarter.
19. Lower numbers of vendor-specific vulnerabilities in 2021
According to RiskBased Security's 2021 Year End Report, IBM was the vendor with the most confirmed vulnerabilities that year. However, it's worth noting that most vendors actually had fewer vulnerabilities than the year before. The exceptions were Software in the Public Interest, Inc. and the Fedora Project, which each saw a small increase.
20. Over 75 percent of applications have at least one flaw
Veracode's State of Software Security Report Volume 11, released in October 2020, found that more than three-quarters (75.2 percent) of applications have security flaws, though only 24 percent have flaws considered high-severity.
21. Information leakage flaws are the most common
Veracode also tells us that the most common flaw types are information leakage, CRLF injection (where an attacker injects unexpected carriage-return and line-feed characters), cryptographic issues, code quality issues, and credentials management.
22. One in four flaws are still open after 18 months
A fairly alarming finding from Veracode’s 2021 report is that after a year and a half, around 27 percent of flaws are still open.
23. Frequent scanning correlates to much faster remediation time
Veracode did find that applications scanned for flaws regularly saw much faster average remediation times. Those scanned 260 or more times per year remediated 50 percent of flaws within 62 days; that stretched to 217 days for applications scanned just 1–12 times per year.
24. Google has paid $35 million in bug bounties since 2010
Google's Vulnerability Reward Program (commonly referred to as a bug bounty program) rewards researchers for discovering and reporting bugs in the company's software. It has paid out $35 million since 2010. In 2021, 696 researchers from 62 countries were paid bounties, with the largest single award amounting to $157,000.
25. Microsoft paid almost $14 million in bug bounties in one year
In a similar vein, Microsoft rewards researchers who spot and report bugs in its software. In a July 2021 review, the company reported it had paid $13.6 million in bug bounties over the preceding 12 months, more than double the amount Google paid out in 2019. In total, 340 researchers received awards, with the largest single award amounting to $200,000.
26. Facebook (now Meta) has awarded more than 7,800 bounties since 2011
A December 2021 blog post by Facebook (now known as Meta) tells us that since its bug bounty program began in 2011, the company has received over 150,000 reports and awarded more than 7,800 bounties. At the time of the report, 2021 bounties totaled $2.3 million, with around 25,000 reports received and more than 800 bounties awarded during the year. Its highest bounty to date is $80,000.
27. Unpatched vulnerabilities were involved in 60% of data breaches
According to a 2019 Ponemon Institute Vulnerability Survey:
“60% of breach victims said they were breached due to an unpatched known vulnerability where the patch was not applied.” However, an even higher portion (62 percent) claimed they weren’t aware of their organizations’ vulnerabilities before a breach.
28. The global IT security market is forecast to exceed $360bn by 2028
According to a report by Fortune Business Insights, the value of the information security market is forecast to hit $366.1bn by 2028. This rapid growth is driven by the integration of machine learning, the Internet of Things (IoT), and a surge in the number of eCommerce platforms.
FAQs about cyber security vulnerabilities
How do you identify cyber security vulnerabilities?
Whether you're a home user or using a system for business, there are several ways to identify a cyber security vulnerability to help prevent threats from cybercriminals. These are some best practices to follow:
- Check that your device software and operating systems are up-to-date.
- Use an internet security suite to monitor your network for any vulnerabilities.
- Keep up with the latest cyber threat information to avoid risks of ransomware and phishing attacks.
What are the different types of cyber security vulnerabilities?
Cyber security vulnerabilities generally fall into four categories:
- Operating system vulnerabilities arise when the OS is outdated, often allowing an attacker to gain entry through an exploit that has yet to be patched.
- Network vulnerabilities are issues with software or hardware on a network that could allow an outside entity to gain malicious entry.
- Human vulnerabilities: user error is one of the most common ways that sensitive data falls into the wrong hands.
- Process vulnerabilities are when processes aren't followed correctly or are not in place to begin with. Reused passwords or weak passwords can make a system more vulnerable for an attacker to penetrate. | <urn:uuid:cd63a746-16aa-4ae3-9cbc-62cb8822fe48> | CC-MAIN-2022-40 | https://www.comparitech.com/blog/information-security/cybersecurity-vulnerability-statistics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00386.warc.gz | en | 0.953267 | 2,830 | 3.203125 | 3 |
Cybersecurity threats are growing every day, whether they are aimed at consumers, businesses, or governments. The pandemic has shown us just how critical cybersecurity is to the successful operation of our economies and our individual lifestyles.
The rapid digital transformation it has forced upon us has seen us rely almost totally on the internet, ecommerce and digital communications to do everything from shopping to working and learning. It has brought into stark focus the threats we all face and the importance of cybersecurity skills at every level of society.
European Cybersecurity Month is a timely reminder that we must not become complacent and must redouble our efforts to stay safe online and bolster the cybersecurity skills base in society. This is imperative not only to manage the challenges we face today, but to ensure we can rise to the next wave of unknown, sophisticated cybersecurity threats that await us tomorrow.
Developing cybersecurity education at all levels, encouraging more of our students to embrace STEM subjects at an early age, educating consumers and the elderly on how to spot and avoid scams are critical to managing the challenge we face. The urgency and need to build our professional cybersecurity workforce is paramount to a safe and secure cyber world.
With a global skills gap of over four million professionals, the cybersecurity workforce must grow substantially now, in the UK and across mainland Europe, to meet the challenge facing organisations, at the same time as we lay the groundwork to welcome the next generation into cybersecurity careers. That means a stronger focus on adult education, professional workplace training and industry-recognised certification.
At this key moment in the evolution of digital business and the changes in the way society functions day-to-day, certification plays an essential role in providing trust and confidence in knowledge and skills. Employers, government, law enforcement – whatever the function, these organisations need assurance that cybersecurity professionals have the skills, expertise and situational fluency needed to deal with current and future needs.
Certifications provide cybersecurity professionals with this important verification and validation of their training and education, ensuring organisations can be confident that current and future employees holding a given certification have an assured and consistent skillset wherever in the world they are.
The digital skills focus of European Cybersecurity Month is a reminder that there is a myriad of evolving issues that cybersecurity professionals need to be proficient in including data protection, privacy and cyber hygiene to name just a few.
However, certifications are much more than a recognised and trusted mark of achievement. They are a gateway to ensuring continuous learning and development. Maintaining a cybersecurity certification, combined with professional membership is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership for their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities.
Ultimately, we must remember that cybersecurity skills, education and best practice is not just a European issue, and neither is it a political issue. Rather, it is a global challenge that impacts every corner of society. Cybersecurity mindfulness needs to be woven into the DNA of everything we do, and it starts with everything we learn. | <urn:uuid:8071f385-cd35-4553-a617-5c5f5d726388> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2020/10/05/why-developing-cybersecurity-education-is-key-for-a-more-secure-future/?web_view=true&utm_campaign=Cybersecurity%20News&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-9AAd5ZOWzDh0JywVzJryzF8lUsi3jOoHCtYb3aJFVFcTooSHO0Tzj28zzVKtmoK1TOe04q | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00386.warc.gz | en | 0.945785 | 629 | 2.609375 | 3 |
There is always the possibility that you have been involved with a data breach and you simply have not been contacted by the affected party. Plus, if a hacker has managed to crack a website or service without being detected, you wouldn’t be notified in any case, either. Ask yourself this question: if I were to be involved with a data breach, how would I know it, and what can I do about it? And what is my data being used for anyway?
Let me ask you a few questions—first, how confident are you that you could spot an online ruse, and second, did you know there’s a stain on your shirt right now?
Did you look?
If so, you’ve just fallen for the school playground version of social engineering, a serious threat. Let’s discuss the kind that you’re more likely to see in terms of your business’ cybersecurity.
Cloudflare has foiled the plans of yet another major hacking attack, a record-breaking DDoS attack of the likes we have never before seen. Let’s examine what goes into such an attack and what you can do to keep your business safe from their influence.
Data breaches—any event where a business’ confidential data is viewed, copied, or stolen by an unauthorized person or party—are a serious problem. Unfortunately, they are also a serious problem that can be caused by no shortage of situations. Let’s review some of the causes of business data breaches so you’ll know what to keep an eye out for.
Cybersecurity is an important subject for a business’ entire team to appreciate, particularly when it comes to the minute differences between different terms. For instance, a layperson might hear “breach” and automatically think “security incident.” While this technically isn’t incorrect, per se, the two terms aren’t really synonymous.
Let’s take a few moments to dive into the minutiae and define these two terms more clearly.
Data breaches are an unfortunate reality in this day and age, even during the holiday season. While it is important to do everything you can to prevent these kinds of disasters, you need to be prepared to deal with it—both in terms of your operations, and in terms of communicating with your clientele.
Twitch, Amazon’s popular streaming service where gamers and content creators broadcast to wide audiences, recently suffered a data breach. Thanks to this data breach, folks on the Internet now know just how much these content creators make, and it has exposed a whole new issue that Amazon must resolve.
Many threats immediately make themselves known on your device the second they install themselves, like ransomware and other types of malware. Others, like this newly discovered threat called MosaicLoader, discreetly install themselves in the background of your device and cause problems behind the scenes.
A vulnerability in Microsoft’s MSHTML browser engine has been discovered and tracked by Kaspersky. It is being exploited all over the world right now. How can you avoid this vulnerability so that it doesn’t affect your business? Let’s find out.
The cyberattack on SolarWinds was devastating for many reasons, and Microsoft has officially uncovered yet another type of malware used in the attack on the software provider. This time, it is a backdoor threat they have named FoggyWeb. What does this threat do and why is it so important to look at this incident even now?
Ransomware is such a massive threat that all businesses should be aware of the latest news and findings regarding how it spreads and how it can be prevented. According to a recent report, the latest modes of transporting ransomware have been revealed. What can your organization do to keep ransomware off of its network? Let’s find out.
“Hackers are a serious threat to modern businesses” isn’t exactly a novel statement, is it? However, if a hacker was to be lurking on your network, would you know the signs to help you catch them? Just in case, we wanted to share a few strategies that can help highlight these warnings so you can more effectively catch any threats present on your network—particularly when your workforce is accessing it remotely.
Nothing is more frustrating than going to log into your device and finding out that you either cannot access it or that files you thought were there have been wiped. Unfortunately, this is the situation that many users of a specific device have recently gone through. Thanks to an unpatched vulnerability, users of Western Digital’s My Book network-attached storage device are suffering from lost files and lost account access stemming from remote access.
It doesn't matter if you are a small locally-owned business or a larger-scale enterprise; network security is equally important, because every business collects information that is valuable to hackers. It makes sense to protect your valuable assets, and your data is one of them. A recent threat called Agent Tesla is just another example of phishing-delivered malware designed to steal data from businesses just like yours.
It seems that the last few months have been filled with major cyberattacks, particularly those taking advantage of major businesses that might not initially be considered targets for these kinds of acts. For instance, McDonald’s Restaurants was recently breached. Let’s examine the situation, and how it plays into the recent trends we’ve witnessed.
Ransomware has rapidly progressed from an irritating annoyance to a legitimate global threat, with the U.S. Justice Department officially going on the record and establishing that future ransomware investigations will be handled the same way that terrorism cases are now. Let’s review the reasons behind this policy change and how your business should respond.
While it really would be a nice thing to have, there is no magic bullet for your business’ cybersecurity—no single tool that allows you to avoid any and all issues. However, there is one way to help make most threats far less likely to be successful: building up your company’s internal security awareness amongst your employees and team members. Let’s go over eleven ways that you can help ensure your company is properly protected, simply by encouraging your employees to take a more active role in guarding it.
Last weekend saw a significant cyberattack waged against the world’s largest meat processor and distributor, JBS S.A., that completely suspended the company’s operations in both North America and Australia… and as a result, has impacted the supply chains associated with the company. Let’s examine the situation to see what lessons we can take away from all this.
Few things are scarier for a modern business to consider than the idea that they will be hacked, regardless of that business’ size or industry. After all, hacking can, will, and does cause significant damage across basically all aspects of your organization. This is precisely why it is so important that—should a business be hacked—the proper steps are taken in response.
Research has revealed that attackers are spending less and less time on their targeted networks before they are discovered. While this may sound like a good thing (a faster discovery of a threat is better than a slower one, after all), this unfortunately is not the case.
| <urn:uuid:436488d3-f616-420d-a20b-60f6b1494fc4> | CC-MAIN-2022-40 | https://dev.maketechwork.com/news-events/blog/tags/hackers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00386.warc.gz | en | 0.958999 | 1,508 | 2.53125 | 3 |
What Is Log Injection?
Log Injection, also known as Log Forgery, describes a vulnerability arising from any scenario in which un-trusted input is allowed to pollute or compromise the integrity of application or system log files.
It is typically the result of failing to prevent maliciously crafted input, intended to mislead monitors and SIEM (Security Information and Event Management) systems, from appearing in log files.
Log File Integrity
Log files are the way that applications, services, and the OS (Operating System) itself record events, creating a historical archive of activities that have occurred. Every application typically generates a log file which (ideally) allows for the reconstruction of events when problems occur.
In addition to providing insight into the sequence of events leading up to a problem, log files are now often fed into SIEM systems that look for patterns of problematic or suspicious behavior, and then potentially generate alerts to proactively warn administrators and security personnel. The integrity of the information in the logs is assumed to be accurate and log file content is typically trusted.
This trust is misplaced if untrusted input provided to an application can appear without filtering in the application’s log files. This is because content can be maliciously crafted within the log file(s) to make it appear that a problem is occurring when it is not, and/or can be used to obscure a problem or an attack.
Log Injection: Example
Consider an application that logs failed login attempts and triggers an alert after some fixed number of failed attempts with the same login id. This might be used to detect brute-force attacks on the application and proactively alert administrators. Further suppose that the SIEM system has been configured to generate an alert if 10 entries such as the following appear in the log for the same login id within one (1) minute:
Sep 11:2018:01:07:13: ApplicationName:Failed Login, Id=admin
The monitor would be reset if a successful login event occurs before reaching the alert threshold.
The solution makes sense as long as the integrity of the log file is maintained. However, an attacker who can add arbitrary content to the log file might attempt to log in with an Id value designed to forge a log entry:
foo\r\nSep 11:2018:01:07:13: ApplicationName:Successful Login, Id=admin
If the application does not validate the incoming login id value, and subsequently logs it as shown above, the resulting log file would appear to contain TWO (2) entries, the first unsuccessful and the second successful:
Sep 11:2018:01:07:13: ApplicationName:Failed Login, Id=foo
Sep 11:2018:01:07:13: ApplicationName:Successful Login, Id=admin
The latter line is a forged record that will reset the monitor on failed login attempts for the ‘admin’ account and prevent the intended alerts from being generated.
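The failure is easy to reproduce. The Python sketch below is a hypothetical illustration, not code from the scenario above: the log format merely mimics the example entries, and the sanitizer shown is one simple mitigation rather than complete guidance. Logging the raw id reproduces the forgery; escaping CR/LF characters keeps the payload on a single, visibly suspicious line:

import logging

logging.basicConfig(format="%(asctime)s: ApplicationName:%(message)s", level=logging.INFO)

def sanitize(value: str) -> str:
    """Neutralize CR/LF so user input cannot start a new log line."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

attacker_id = "foo\r\nSep 11:2018:01:07:13: ApplicationName:Successful Login, Id=admin"

# Vulnerable: the raw value injects a forged 'Successful Login' line.
logging.info("Failed Login, Id=%s", attacker_id)

# Safer: the payload stays on one line, with the CR/LF visibly escaped.
logging.info("Failed Login, Id=%s", sanitize(attacker_id))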
For insight into how to detect Log Injection vulnerabilities, please see the article entitled “How To Test for Log Injection“.
For insight into how to avoid or fix Log Injection vulnerabilities, please see the article entitled “How To Prevent Log Injection“.
About Affinity IT Security
We hope you found this article to be useful. Affinity IT Security is available to help you with your security testing and train your developers and testers. In fact, we train developers and IT staff how to hack applications and networks.
Perhaps it was a network scan or website vulnerability test that brought you here. If so, you are likely researching how to find, fix, or avoid a particular vulnerability. We urge you to be proactive and ensure that key individuals in your organization understand not only this issue, but also are more broadly aware of application security.
Contact us to learn how to better protect your enterprise.
Although every effort has been made to provide the most useful and highest quality information, it is unfortunate but inevitable that some errors, omissions, and typographical mistakes will appear in these articles. Consequently, Affinity IT Security will not be responsible for any loss or damages resulting directly or indirectly from any error, misunderstanding, software defect, example, or misuse of any content herein. | <urn:uuid:3ee1f816-a768-4780-9778-099a5ceef050> | CC-MAIN-2022-40 | https://affinity-it-security.com/what-is-log-injection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00586.warc.gz | en | 0.919088 | 892 | 3.453125 | 3 |
Smart Cities Blog Series
This blog is a part of a series that explores key trends in smart infrastructure by leveraging Artificial Intelligence and IoT technologies. The write-up highlights smart applications in reducing pollution, streamlining waste management, improving road safety, and reducing traffic congestion.
Factories are a principal source of air, water, and soil pollution. Harmful greenhouse gases are emitted at alarming rates in countries with heavy industrial activity, such as India and China, and toxic waste is often dumped into nearby water sources or onto land. This heavily damages the surrounding arable land and increases toxicity levels in fishing grounds.
IoT sensors can be placed at key pollution points in factories to collect samples from water, soil, or air. This data can then be sent to information processing facilities where Big Data processes cross-reference these levels against permitted values and benchmarks. Artificial Intelligence applications can map predictive models of environmental degradation based on values from these IoT sensors.
Advanced ‘Smart Sorting’ technologies utilize IoT image sensors that scan specified areas and send the information to advanced AI-powered image recognition software that differentiates between recyclable material and waste products. This sequence is currently applied in conveyor-belt-style sorting facilities. AI-powered sensors are significantly better at detecting and recognizing materials from different angles than traditional optical sensors.
Intelligent Bins are another application of AI & IoT in smart infrastructure. Intelligent sensors placed in public garbage bins across the city can actively monitor the ‘fullness’ of trash cans. This information can be sent in real-time to an Artificial Intelligence-powered software that optimizes and determines the best garbage collection route and frequency. This provides municipalities with an efficient option to collect trash and reduce fuel consumption.
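As a simplified illustration of that routing step, the Python sketch below uses invented bin coordinates and a naive nearest-neighbor heuristic (a production system would use a proper routing engine) to visit only the bins above a fullness threshold:

import math

# Hypothetical bins: (name, x, y, fullness 0.0-1.0) on a city grid.
bins = [
    ("A", 0, 2, 0.91), ("B", 3, 1, 0.35),
    ("C", 4, 4, 0.80), ("D", 1, 5, 0.67),
]
THRESHOLD = 0.75  # only collect bins that are at least 75% full

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

targets = [(name, x, y) for name, x, y, full in bins if full >= THRESHOLD]
route, pos = [], (0, 0)  # the truck starts at the depot
while targets:
    nearest = min(targets, key=lambda b: dist(pos, (b[1], b[2])))
    targets.remove(nearest)
    route.append(nearest[0])
    pos = (nearest[1], nearest[2])
print("Collection route:", " -> ".join(route))  # Collection route: A -> C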
Traditional streetlights have always been expensive to maintain and extremely energy-consuming. Municipalities in smart cities that have switched to intelligent LED streetlights have reported an 80% decrease in overall costs within a year, covering electricity usage, maintenance, and replacements.
Taipei, among other cities, has made the move to intelligent LED streetlights. This new technology utilizes various IoT sensors to measure key data trends in its immediate environment, such as weather, light intensity, humidity, and visibility. Aggregate datasets are sent over the internet to information processing facilities where data from streetlights all over the city are processed by a central Artificial Intelligence-powered software.
Artificial Intelligence streetlight control systems can adjust light intensity according to IoT sensor data. Streetlights can sense passing cars and activate only when needed during low-traffic periods, and intelligent streetlights can self-report faults when they require maintenance. Real-time information availability creates safer, more vibrant cities during low-visibility conditions and contributes to maintaining efficient energy consumption levels.
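A toy version of such a control rule is easy to express. The thresholds and sensor inputs below are invented for illustration: the light switches off in daylight, runs at full brightness when motion or poor visibility is detected, and otherwise idles at a dim, energy-saving level:

def led_intensity(ambient_lux: float, motion_detected: bool, visibility_m: float) -> int:
    """Return an LED brightness level (0-100) from simple sensor readings."""
    if ambient_lux > 50:  # effectively daytime: light off
        return 0
    if motion_detected or visibility_m < 100:
        return 100        # full brightness for traffic or fog
    return 30             # idle night-time dim level to save energy

print(led_intensity(ambient_lux=5, motion_detected=False, visibility_m=500))  # 30
print(led_intensity(ambient_lux=5, motion_detected=True, visibility_m=500))   # 100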
The World Health Organization (WHO) reported 1.3 million annual road accident deaths globally in 2021. Additionally, between 20 and 50 million people suffered non-fatal injuries that may lead to permanent disability. Such accidents heavily contribute to unsafe cities, lower productivity, an increase in traffic congestion, and a decrease in the overall quality of life.
Although most of these accidents occur in low to mid-income regions due to poor road infrastructure, unsafe road systems, and a lack of road laws adherence, developed urban cities still account for a significant amount of fatal & non-fatal road accidents. As such, smart cities must tackle the challenges of road surveillance, traffic control, and safety systems by Artificial Intelligence, Big Data analytics, and IoT technologies.
IoT-powered traffic lights contain smart sensors placed strategically across different areas. They monitor and measure traffic congestion levels. This data can be sent and processed in real time by AI-Powered Big Data Analytical routing algorithms, which re-route vehicles across the city simultaneously while displaying road congestion levels in different areas, accidents, and obstacles on roads and highways.
Centralized AI-powered Big Data software can measure traffic flow data collected from CCTV and AI video analytics software to generate predictive traffic flow patterns. This allows municipalities to recognize areas that need infrastructure expansion and additional routes.
IoT beacons and smart sensors placed on parking spots can detect and light up when parking spaces become available. This simple mechanism can save time wasted in finding parking spaces in hospitals, shopping malls, and other public infrastructure.
In the event of car theft, AI-powered Big Data analytics software can tap into a connected network of IoT sensors, beacons, and CCTVs with video recognition software to track and follow criminals throughout the city while relaying that information in real-time to the authorities.
Having successfully delivered on some of the largest and most complex projects in the MENA region, we have developed a stellar reputation for being engineering-driven. This firmly positions us as the go-to solution provider for key projects, such as building Modular Data Centers, IT Infrastructure, Physical Security, Smart Surveillance, Video Analytics, Access Control, and Perimeter detection.
Our solutions are ISO compliant and adhere to international standards.
We partner with security hardware and software industry leaders to deploy these advanced solutions. Our fully in-house team of engineers handles everything from procurement, design, installation, and integration to the deployment of these technologies. | <urn:uuid:8a060f3d-e40d-4390-8e60-0b1eb6758f56> | CC-MAIN-2022-40 | https://mvptech.ae/artificial-intelligence-and-iot-applications-in-smart-cities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00586.warc.gz | en | 0.923513 | 1,060 | 3.265625 | 3 |
What Is Cloud Infrastructure?
Cloud infrastructure is a collection of the components and elements required to provide cloud computing. This includes computing power, networking, storage, and an interface that enables users to access virtualized resources.
Virtual resources mirror those of physical infrastructure, and they include components like memory, network switches, servers, and storage clusters. They are required to create applications that users can access through the cloud or retrieve via the internet, telecom services, and wide-area networks (WANs). The cloud infrastructure approach offers benefits like greater flexibility, scalability, and lower cost of ownership.
A cloud infrastructure enables organizations to access data storage requirements and computing capabilities as and when they need it. Rather than creating on-premise IT infrastructures or leasing data-center space, organizations can now rent cloud infrastructure and their required computing capabilities through third-party providers.
Cloud infrastructure is available for private, public, and hybrid cloud systems. It can also be rented through cloud providers and via several cloud infrastructure delivery models.
How Does Cloud Infrastructure Work?
Cloud infrastructure works through an abstraction process, such as virtualization, that separates resources from the physical hardware they would typically be installed on and pools them in the cloud. These virtual resources are then provisioned into cloud environments using tools such as automation and management software, enabling users to access the resources they need, when they need them.
Components of Cloud Infrastructure
Any organization that purchases a cloud computing solution does so by leasing access to cloud infrastructure. This is built on four core components: hardware, virtualization, data storage, and networking, each of which is crucial to helping businesses deploy and deliver cloud applications and services.
Any cloud infrastructure requires physical hardware, which can be located at various geographical locations. This hardware includes backup devices, firewalls, load balancers, networking equipment, routers, and storage arrays.
A key piece of hardware is the server: a computer or device programmed to provide services to clients or users. Web servers deliver Hypertext Markup Language (HTML) or Hypertext Preprocessor (PHP) files using the Hypertext Transfer Protocol (HTTP). File servers store vast amounts of information, while mail servers send email messages across the internet.
In the case of private cloud, organizations can use dedicated servers that are responsible for storing information. Public cloud, on the other hand, uses a multi-tenant model, which enables a server to provide services for multiple customers.
Virtualization is critical to cloud infrastructure as it abstracts data storage and computing power from the hardware. This allows users to interact with a cloud infrastructure from their hardware by using a graphical user interface (GUI). Virtualization often occurs on data storage and computing resources, which makes it easier for users to access them.
Cloud storage enables organizations to store their data in cloud-based file servers rather than their own data centers. Third-party providers, such as Microsoft Azure, Amazon Simple Storage Service, and Google Cloud Storage, are responsible for managing and maintaining data and providing remote backups. Data that organizations store in the cloud can be accessed through the internet or cloud-based applications.
Networking enables the cloud resources users need to access to be delivered to them across the internet. It does this through physical hardware, such as switches, wiring, routers, and load balancers, then virtual networks on top of the physical resources. Cloud-based resources are then delivered to users across a network, typically the internet, which enables them to access cloud applications and services remotely, whenever they need them.
Cloud networks are typically made up of various subnetworks and can be used to create virtual local-area networks (VLANs).
What Are the 3 Main Types of Cloud Architecture?
There are three main types of cloud architecture, all of which use the core components of cloud infrastructure to deliver computing services to users and organizations.
- Public cloud: Public cloud architecture involves the use of third-party cloud providers, which make cloud resources available to multiple customers via the internet. These providers operate multi-tenant environments that lower the cost of data storage and computing power for customers. This approach is also effective in lowering the total cost of computing resources. However, it can present privacy issues for organizations that handle sensitive data or personally identifiable information (PII).
- Private cloud: In a private cloud architecture approach, cloud infrastructure is only accessed by one organization. The private cloud architecture can be built, developed, and maintained by a company’s own IT teams or delivered by external providers.
- Hybrid cloud: A hybrid cloud architecture can be considered the best of both worlds, providing private and public cloud infrastructures that interact within a connected but separate system. This approach is ideal for organizations that handle sensitive information and PII, allowing them to store their most critical data in private clouds and less sensitive data in public clouds. With a hybrid cloud architecture, organizations maintain their private environments while using public cloud services for other computing tasks and data storage capabilities.
3 Cloud Infrastructure Delivery Models
Cloud infrastructure can also be delivered in different ways, typically through three standard delivery models.
- Infrastructure-as-a-Service (IaaS): An IaaS model involves cloud service providers delivering capabilities such as data storage, networking, servers, and virtualization to their customers. The customer can access as much computing power or data storage as they require but needs to bring their own software platform, involving the use of applications, data, middleware, operating systems, and runtime services. IaaS is the most hands-on form of cloud delivery model, requiring organizations to control and maintain most of their own cloud resources (see the sketch after this list).
- Platform-as-a-Service (PaaS): The PaaS approach sees cloud service providers deliver the entire cloud infrastructure to customers. This means the data, networks, servers, and virtualization of the infrastructure will be delivered through a platform of operating systems, runtime services, and middleware. This approach enables organizations to deploy, develop, operate, and test their software and applications in a cloud environment, without the cost and complexity that typically come with building an on-premises IT infrastructure.
- Software-as-a-Service (SaaS): A SaaS model involves cloud service providers delivering applications through web-based portals. The SaaS approach is the most popular, widely used cloud service delivery model. All data storage is located on the service provider’s servers. Customers do not have to store application information on local hard disks, which takes a lot of hard work away from organizations. SaaS providers are responsible for delivering the entire technology stack, which includes maintaining applications and the cloud infrastructure that supports them.
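To make the IaaS model concrete, the Python sketch below provisions a single virtual server on AWS EC2 using the boto3 SDK. The region, image ID, and instance type are placeholders to replace with real values, and the snippet assumes AWS credentials are already configured; other IaaS providers expose equivalent APIs:

import boto3

# Assumes AWS credentials are configured (environment, profile, or IAM role).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID: substitute a real image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched IaaS compute instance: {instance_id}")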
Cloud Infrastructure vs. Cloud Architecture
Cloud infrastructure differs from the cloud architecture itself. Cloud infrastructure involves the tools that are used to build a cloud environment, while cloud architecture is the concept or blueprint behind how it will be built.
Cloud architecture outlines how the various technologies for creating a cloud computing environment will be connected. This includes the combination of components that comprise a cloud environment, including hardware, networks, operating systems, virtual resources, automation software, management tools, and container technologies.
Advantages and Disadvantages of Cloud Infrastructure
Cloud infrastructure is becoming increasingly popular as the technology becomes more powerful, intuitive, and cost-efficient. Key advantages include:
- Cost: Cloud infrastructure offers major cost savings on operating expenses. Cloud customers get all the components and services they need delivered through the cloud, rather than creating, building, managing, and maintaining a data center. This saves huge amounts of spending on energy bills, IT expertise, hardware, servers, and software that accompany a physical data center. Instead, cloud infrastructure enables businesses to pay for only the data storage and computing power they need as and when they require it.
- Agility and flexibility: Cloud infrastructures are highly agile and flexible because they are self-managed and allow service changes to be made in a matter of minutes. This increases uptime and makes business systems efficient, enabling users to access shared data through mobile or Internet-of-Things (IoT) devices as necessary. As a result, organizations become more focused on business and issues that drive the bottom line than being bogged down in IT matters.
- Security: The cloud is often looked down on with skepticism for being insecure or making data easy to compromise. But enterprise-level cloud infrastructures are highly secure environments primed to protect organizations’ data against cyberattacks, viruses, and data breaches through advanced firewalls and encryption keys. Furthermore, a hybrid cloud approach enables organizations to securely store their most sensitive data in private clouds while providing great user experiences by storing less sensitive applications and big data in public clouds.
However, like any technology solution, there are cons to using cloud infrastructure. The most prominent disadvantages that organizations could encounter include:
- Vendor risks: The cloud is an evolving concept and technology, which means it is rapidly fluctuating and ever-improving. That also means some cloud service providers get it right while others get it wrong. If a cloud provider ceases to exist or performs a major overhaul, any organization that relies on that provider for its infrastructure becomes exposed to risk.
- Connection issues: Cloud infrastructure is totally reliant on the internet, which means any cloud solution is only as solid or reliable as the network connection it is built on. Users increasingly refuse to accept any downtime when accessing their favorite cloud applications and services, regardless of whether that downtime is caused by a storm, human error, or technical outage. Any cloud infrastructure needs dependable connections and networks that are supported by business promises and the delivery of service level agreements (SLAs).
- Data control: Cloud infrastructure usage moves data control away from the organization to their cloud service provider of choice. Organizations are likely to have less or limited control over access to their applications, data, and any server-based tools.
Secure Your Cloud Infrastructure with Fortinet
The Fortinet Dynamic Cloud Security solutions protect organizations by giving them the confidence to deploy all application types across any cloud infrastructure. These solutions provide organizations with the control and visibility they need across their cloud infrastructures, enabling them to secure applications and provide connectivity between data centers and the cloud.
Fortinet also protects organizations’ most critical applications with the FortiWeb web application firewall (WAF), which is a Challenger in Gartner’s Magic Quadrant for WAFs. The Fortinet WAF protects business-critical web applications from cyberattacks that target both known and unknown vulnerabilities, or zero-day exploits. The solution is crucial as organizations’ attack surfaces rapidly evolve every time they deploy new features, update existing features, and expose new web application programming interfaces (APIs).
FortiWeb keeps pace with the rapidly changing threat landscape to ensure organizations are always protected from the latest known and zero-day threats. It provides advanced features and a multi-layered approach to protect against major security risks, including the Open Web Application Security Project (OWASP) Top 10 threats. FortiWeb ensures robust protection against and the identification of anomalous behavior, malicious and benign anomalies, and bot mitigation and blocking. FortiWeb can protect business applications in any cloud environment, with options for hardware appliances, virtual machines, SaaS solutions, containers, and the FortiWeb Cloud WAF-as-a-Service.
What Is Cloud Infrastructure?
Cloud infrastructure refers to the components and elements that are required to provide cloud computing. This includes computing power, networking, storage, and an interface that enables users to access virtualized resources.
What are the main components of cloud infrastructure?
There are four core components of cloud infrastructure: hardware, virtualization, storage, and network. Hardware includes physical devices; such as backups, firewalls, load balancers, networking equipment, routers, servers, and storage arrays. Virtualization is used to abstract resources from these hardware devices. Storage enables organizations to host big data in the cloud rather than expensive physical data centers. Network enables users to access cloud-based applications and data through the internet.
What are the three types of cloud computing?
The three types of cloud computing are the Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) delivery models. | <urn:uuid:6ddaeb2b-5086-4909-910a-67d03b8032df> | CC-MAIN-2022-40 | https://www.fortinet.com/fr/resources/cyberglossary/cloud-infrastructure | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00586.warc.gz | en | 0.917137 | 2,545 | 3.5625 | 4 |
IGBT Liquid Cooling
The insulated-gate bipolar transistor (IGBT) is among the most common semiconductors used in Power Electronics. Historically made from Silicon (Si), newer IGBTs are being fabricated from Silicon Carbide (SiC) and Gallium Nitride (GaN) for greater performance and thermal efficiency. While IGBTs play an important role in the delivery and conversion of power, they can also generate high levels of heat during high-frequency switching. A main priority of electric power designers is the cooling of IGBT devices. Mikros Technologies develops custom IGBT microchannel liquid cooling solutions to help you improve power performance, electrical efficiency, safety and reliability of your power electronics system.
Development of IGBTs
The IGBT is a solid-state switch that controls the flow of current by applying a voltage to an insulated gate on the semiconductor. It was first developed in the 1980s and improved upon the metal-oxide-semiconductor field-effect transistor (MOSFET). The IGBT is distinctive because it suppresses parasitic thyristor latch-up, which makes the device more efficient and robust. IGBTs can be found in modern solar and wind turbine inverters, motor drives, and power systems.
Thermal Challenges of IGBTs
IGBT thermal management is an essential part of an optimized power electronics system. IGBT modules fluctuate in temperature and can generate high heat loads depending on their use case, switching state, and power input. Although IGBTs helped us make great strides with renewable energy, heat losses and operating temperature limits continue to be a design limitation. Wire-bond delamination, caused by material stresses from thermal expansion and contraction, still presents the largest hurdle to IGBT operation. This has prompted IGBT designers to consider innovative cooling solutions that help increase the power capacity and lifetime of their devices.
IGBT Cooling Solutions
There are several cooling options that can help with IGBT thermal management. Each solution has its own advantages.
Using air-cooled heat sinks can be a quick and cost-effective option for lower power devices. For some industrial designs, air-cooled systems utilize a heat sink with large mass and thermal capacitance to absorb heat spikes. Due to the heat-carrying capacity of air and often high ambient operating temperatures, air-cooled heat sinks have a harder time managing higher temperatures and thermal spikes in high power electronic devices. This makes air cooling an economical option for low- and mid-power devices only.
Direct Liquid Cooling
Direct liquid cooling improves heat transfer coefficients and decreases the thermal resistance of a system, lowering the junction temperatures and increasing power capacities in IGBTs. Liquid cooling also allows designers to achieve greater power per volume, which increases performance and energy efficiency. Direct liquid cooling historically has taken the form of large copper tube cold plates swaged into aluminum blocks with a thermally conductive epoxy. Other designs include machined channels in copper, increasing efficiency. Embedded designs provide a pin-fin heat sink in the direct-bond copper (DBC) substrate, which is then mounted to a fluid reservoir, providing a flow of coolant over the fins for convective cooling.
Microchannel Liquid Cooling
For high power designs, the lowest thermal resistance achievable comes via an innovating microchannel liquid cooling design. Using integrated microchannel liquid cooling can lower the thermal resistance of an IGBT 10x-100x over other liquid cooling designs by reducing the thermal mass of the coolant in micro-sized flow channels. This dramatically improves heat dissipation rates and lowers the operating temperatures of IGBT system designs. Mikros Technologies microchannel cold plates optimize microchannel thermal physics to provide even better performance. Our microchannel matrix designs can be tailored to fit IGBT module specifics and can provide unparalleled and consistent thermal management for power electronics.
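The benefit shows up directly in the steady-state junction temperature, T_j = T_coolant + R_th × P. The Python sketch below compares two thermal resistance values; the numbers are illustrative assumptions chosen for the comparison, not measured Mikros specifications:

def junction_temp(coolant_c: float, r_th_c_per_w: float, power_w: float) -> float:
    """Steady-state junction temperature: T_j = T_coolant + R_th * P."""
    return coolant_c + r_th_c_per_w * power_w

COOLANT_C = 40.0  # inlet coolant temperature, deg C
POWER_W = 500.0   # IGBT module heat dissipation, W

for name, r_th in [("conventional cold plate", 0.10), ("microchannel cold plate", 0.02)]:
    print(f"{name}: T_j = {junction_temp(COOLANT_C, r_th, POWER_W):.0f} C")
# conventional cold plate: T_j = 90 C
# microchannel cold plate: T_j = 50 C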
Power Electronics Applications
Over the last few decades, IGBTs have become mainstream in power electronics applications. There are several types of devices and systems that rely on IGBTs for power.
IGBTs are ideal for inverters thanks to their exceptional gate control and ability to carry higher currents. Advanced cooling options provide an even smoother transition from DC to AC power using IGBT modules.
IGBTs are used to provide electric power for voltage converters. Converters play a major role in the design of power supplies.
There are two types of power supplies supported by IGBTs:
Switch Mode Power Supply (SMPS)
SMPS uses a switching regulator to convert voltage and currents, and IGBTs can help with high-current applications.
Uninterruptible Power Supply (UPS)
When an electrical system fails, an IGBT can help a UPS deliver seamless, consistent power.
IGBT devices are common in induction heating applications. High power heaters can be optimized by implementing a microchannel cooling solution.
Traction Motor Control
Commonly used in the railway and electric vehicle (EV) industries, traction motors support propulsion applications. These industries require the stable, high-current power that IGBT power modules offer. A more efficient cooling solution for traction IGBTs, such as Mikros microchannel cold plates, can reduce the energy used during operation, providing more power and longer battery life.
Renewable Energy Applications
A growing number of commercial, industrial, and municipal entities are implementing renewable energy solutions into their strategic plans. Optimized IGBT liquid cooling with a microchannel solution can help generate more sustainable energy at a lower cost for consumers.
Concentrated Photovoltaic Receivers (CPVs)
Solar Assisted Heat Pumps
A solar-assisted heat pump involves the integration of a conventional heat pump with photovoltaic input to lower heat generation costs. Microchannel cold plates for the IGBT power modules can improve energy efficiency, lower the total cost of ownership for the system.
Faster charging for the emerging electric vehicle market is strongly in demand. Many EV chargers use high power IGBT devices that must be liquid-cooled to provide high current inputs while maintaining critical safety standards. Microchannel liquid cooling of these charging modules can dramatically improve their performance and energy efficiency.
Contact Us Today
As a premier designer and manufacturer of microchannel cold plate designs for IGBTs, Mikros Technologies designs cooling solutions for all these industries and more. For more information about our microchannel liquid cooling solutions, call (603) 690-2020 or contact us online today. | <urn:uuid:743b7f0b-4d0e-4288-98d5-5a8d14b763b8> | CC-MAIN-2022-40 | https://mikrostechnologies.com/home/applications/renewable-energy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00586.warc.gz | en | 0.894055 | 1,337 | 3.265625 | 3 |
One billion people live in hunger. How can food producers and retailers help?
The world produces enough food to feed everyone on Earth. Yet, almost one billion people live in hunger. Estimates are that between one third and half of all food produced globally is wasted or lost along supply chains every year. This amount alone would be enough to feed twice the number of hungry people in the world.
What if we start seeing food losses and waste not only as problems but also as an untapped opportunity? By forcing us to think differently, solutions to the food waste challenge can become enablers of policy changes, social development, environmental governance, and business innovation.
Most food waste takes place at home
Wasting food has environmental, social, and economic costs that start adding up right at the farm and increase with every additional step towards the consumer. The household is where the highest percentage of food waste takes place: households in economically developed countries are responsible for about 38% to 47% of their country's food waste. According to research from WRAP, the UK-based Waste & Resources Action Programme, there are 10 main reasons for food waste at home.
In this point-of-view report, co-authored by Capgemini and the Consumer Goods Forum, we explore the root causes of consumer food waste. Beyond the challenges, the report identifies opportunities that this problem reveals for food producers and retailers and suggests ideas for adopting a digital technology approach that will change consumer behaviors and contribute to profitable business growth for food producers and retailers. | <urn:uuid:fe847d31-b648-4138-8204-44e73b1306f0> | CC-MAIN-2022-40 | https://www.capgemini.com/au-en/resources/smart-reduction-of-consumer-food-waste-using-technology-for-the-benefit-of-retailers-and-consumers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00586.warc.gz | en | 0.953766 | 311 | 3.421875 | 3 |