Nearly half of the new entries to Oxford Dictionary Online are inspired by technology or online slang.
Words like ‘ZOMG’ and ‘Twittersphere’ are among the 35 new words or abbreviations that have made it into the online dictionary.
ZOMG is particularly used in social networks as a sarcastic comment made on a post by an inexperienced user. The abbreviation is a variation of the words ‘oh my god’, though it's not clear where the Z first originated or what it may have meant. Twittersphere on the other hand has been defined as ‘postings made on the social networking site Twitter, considered collectively’.
Other words included in the Oxford Dictionary Online include NSFW, which stands for ‘Not Safe For Work’, infographic, breadcrumb trail, network neutrality, permalink, paperless and Cyber Monday.
Also among the new entrants is ‘lappy’, slang for a laptop. The term is used mainly outside the US.
“The world of computers and social networking continues to be a major influence on the English language, with the introduction of badware, social graph, and network neutrality into our dictionary,” the Oxford Dictionary said in a blog post.
“The new additions also hint at the danger of sneaking a peek at the Twittersphere or other social networks whilst at work – not everyone is thoughtful enough to add the NSFW warning!” it continued. | <urn:uuid:46d063b0-defc-4e0a-80ba-fc83f3fa09e7> | CC-MAIN-2022-40 | https://www.itproportal.com/2011/06/06/zomg-enters-oxford-dictionary-online/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00478.warc.gz | en | 0.92371 | 310 | 2.546875 | 3 |
SSL scanning is a method of inspecting encrypted data for malware or for confidential data that could otherwise leak into the wrong hands.
SSL is used to encrypt data for protection against malicious eavesdropping; however, the encrypted data can just as easily carry malware or information that is not meant to be shared outside an organization’s network, e.g. trade secrets or financial statements.
SSL scanners help protect organizations by decrypting the data sent by clients, scanning it for malware or confidential information, and then re-encrypting it before it is sent to the destination server.
Does DNSFilter do HTTPS/SSL Scanning?
DNSFilter is a DNS provider: we perform DNS lookups and block threats by scanning the DNS queries themselves. We do not have access to web traffic packets, so we cannot do anything with them.
In the hierarchy of activities in a client/server communication, SSL Inspection would happen “above” DNS resolution. In the act of resolving a DNS request, we do not have access to the network traffic/packets directly and can do no such inspection.
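To make the distinction concrete, here is a minimal, hypothetical Python sketch of what DNS-layer filtering looks like in principle. It is not DNSFilter’s implementation, and the blocklisted domains are invented; the point is that only the queried domain name is ever examined, while the web traffic that follows the lookup is never seen by this layer:

```python
import socket

# Hypothetical threat feed; a real DNS filter would sync this from
# continuously updated intelligence sources.
BLOCKLIST = {"malware-example.test", "phishing-example.test"}

SINKHOLE_IP = "0.0.0.0"  # returned instead of the real record for blocked names

def resolve(domain: str) -> str:
    """Answer a DNS lookup, blocking known-bad domains.

    Only the *name* being looked up is examined -- the packets exchanged
    after resolution never pass through this layer.
    """
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP                    # block at resolution time
    return socket.gethostbyname(domain)       # otherwise resolve normally

if __name__ == "__main__":
    print(resolve("example.com"))             # resolved upstream
    print(resolve("malware-example.test"))    # sinkholed
```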
We don’t intend to spy on your web traffic while performing the DNS lookups. In fact, the very DNS lookup itself would be subject to the same deep packet inspection (DPI) as any other packet on the network, should you have it configured.
Typically, this would be handled at a hardware firewall level, usually attached to an Intrusion Detection System (IDS).
We definitely recommend the use of a full spectrum of security technologies, and while we’ve got you covered on the DNS side, DPI is just one of the other solutions you’ll want in your toolbox. | <urn:uuid:1376c3b1-e0fb-442f-9e32-51b09ed35abb> | CC-MAIN-2022-40 | https://help.dnsfilter.com/hc/en-us/articles/4406002844435-SSL-Scanning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00478.warc.gz | en | 0.929147 | 350 | 2.640625 | 3 |
According to news releases from both Intel and Brown University, the two have partnered in the hope of developing technology to help people with spinal cord injuries walk again.
Injuries to the spinal cord can prevent the electrical signals from the brain from passing to the muscles, causing paralysis. These kinds of injuries are devastating and permanent, as the human body cannot regenerate severed nerve fibers by itself. But researchers believe AI technologies may be able to help some patients regain control of their muscles.
Thanks to funding from a $6.3 million DARPA grant, Brown University Researchers along with surgeons from Rhode Island Hospital are partnering with Intel and Micro-Leads Medical to develop an “intelligent spinal interface”. The idea is to build an interface capable of bypassing the damage caused by an injury, by establishing a new link between the brain and the rest of the body.
As the Brown statement explains, “The experimental spinal interface will be designed to bridge the gap in neural circuitry created by a spinal injury (…) The idea is to record signals traveling down the spinal cord above an injury site and use them to drive electrical spinal stimulation below the lesion. At the same time, information coming up the cord from below will be used to drive stimulation above the injury. The device could potentially help to restore both volitional control of limbs muscles as well as feeling and sensation lost due to injury”
The researchers need to collect a large amount of data for motor and sensory signals sent from the spinal cord and use AI neural networks to communicate the correct commands. Intel will offer its hardware and software expertise to help create the machine learning tools that are needed for this project.
“A spinal cord injury is devastating, and little is known about how remaining circuits around the injury may be leveraged to support rehabilitation and restoration of lost function,” said David Borton, assistant professor at Brown’s School of Engineering and researcher at the Carney Institute for Brain Science. “Listening for the first time to the spinal circuits around the injury and then taking action in real time with Intel’s combined AI hardware and software solutions will uncover new knowledge about the spinal cord and accelerate innovation toward new therapies.”
According to estimates from The National Spinal Cord Injury Statistical Center 291,000 people in the U.S. are living with spinal cord injuries, and more than 17,000 new cases are added to that number every year. Over 30% of those spinal cord injuries result in tetraplegia or paraplegia.
The team on this DARPA-funded project is committed for the next two years. They say they will work with volunteers who have spinal cord injuries, having them participate in physical therapy with the interface in use for up to 29 days. The team says the initial focus will be on signals related to leg control and signals related to bladder control.
Header Image by Intel
I’m Danial Payne. I’ve been a freelance writer, video, and web person since 1988. My passion is technology, whether it’s the latest cameras or cutting edge ways the internet is used to improve medicine. I write for Internet News Flash and am helping with the online resurrection of Digital Content Creators Magazine. Contact me: firstname.lastname@example.org
Detecting and preventing ransomware attacks has become the primary goal for most organizations. The cybersecurity community across the globe is severely concerned about the rising sophistication of ransomware attacks. Ransomware operators have become a serious threat to organizations and individuals, creating havoc, encrypting sensitive corporate data, and demanding hefty ransoms.
By Rudra Srinivas, Senior Feature Writer, CISO MAG
Threat actors leverage double-extortion techniques by threatening to post victims’ data online if they refuse to pay the ransom. If this isn’t menacing enough, threat actors are now leveraging the triple extortion technique to make their ransomware business more lucrative. In triple extortion, attackers send their ransom demands to the customers and third-party agencies associated with the victim.
Several industries saw growth in their cybersecurity budgets to thwart ransomware threats, and governments even declared ransomware a national threat, implementing robust security measures to protect their critical infrastructure from both state and non-state adversaries.
The Rising Costs of Ransomware
Despite multiple joint cyber operations that busted several ransomware groups, new kinds of ransomware strains are still being reported regularly. A report from Cybersecurity Ventures predicted that ransomware attacks would cost organizations across the world $20 billion in 2021, roughly a 57-fold increase compared to 2015 ($325 million). It also forecast that ransomware attacks will cost victims over $265 billion annually by 2031, with an attack occurring every 2 seconds.
Rise in Attackers’ Revenue
The pandemic and the newly adopted remote working environment gave ransomware operators more opportunities to create new malware and extortion techniques. The DarkSide ransomware group, which is behind the infamous Colonial Pipeline hack, extracted over $90 million in Bitcoin ransoms from 47 victims. The group reportedly infected nearly 99 organizations with the DarkSide malware, with an average ransom payment of $1.9 million.
Why do companies rush to pay the ransom?
Threat actors purposely target high-profile organizations with larger employee and customer bases. Concern for their brand image, and the massive amount of sensitive customer data they hold, makes large enterprises more willing to accept ransom demands. Research from the Neustar International Security Council (NISC) revealed that over 60% of organizations admitted that paying the ransom would be their primary solution in the event of a ransomware attack. One in five organizations said they would consider paying 20% or more of their company’s annual revenue.
Most organizations prefer paying ransom to avoid data loss or misuse by attackers. For instance, meat-processing giant JBS confirmed that it had paid $11 million to the REvil ransomware gang to restore its systems. The U.S. Colonial Pipeline reportedly paid $4.4 million ransom after attackers disrupted its services. However, there is no assurance that victims will be able to recover their data after paying the ransom. There is a chance that attackers may demand more ransom; they may release only a small amount of data on the dark web, or they can get hold of a copy of the encrypted data to threaten the victim in the future. It’s imperative to think about the effects and consequences of ransom payments before paying them.
Speaking to CISO MAG about the rise of ransomware attacks, cybersecurity researcher Bob Diachenko said, “Ransomware evolves similarly to any software proposition on the market – there are large groups operating as marketplaces with ransom-as-a-service solutions, state-sponsored APTs, and many independent actors most of which are simply trying to reach a low-hanging fruit in the form of misconfigured databases.”
Microsoft Access is now much more than a way to create desktop databases. It’s an easy to use tool for quickly creating browser-based database applications that help you run your business. Your data is automatically stored in a SQL database, so it’s more secure and scalable than ever, and you can easily share your applications with colleagues.
This course will guide you through the basics of relational database design and through the creation of database objects. You will learn how to use forms, queries, tables, and reports to manage data. You will understand the interface, customization, and the creation and editing of the many objects available within the Microsoft Access application. This course is divided into three separate levels: Basic Microsoft Access, Intermediate Microsoft Access, and Advanced Microsoft Access.
October marks Cybersecurity Awareness month. For 18 years, the Cybersecurity & Infrastructure Security Agency (CISA) has made October a month to educate and promote safe online behaviors and practices for both consumers and businesses. CISA has also created the Stop. Think. Connect™ campaign to raise public awareness about cybersecurity and how consumers and businesses can protect and mitigate against cybersecurity threats. This blog will focus on some of the most common cyber-attacks and tips for how today’s enterprises can safeguard their data as well as their customers.
According to RiskBased Security research, data breaches resulted in 36 billion records being exposed in the first three quarters of 2020. Additionally, the use of malware increased by 358% through 2020. Also, more than 90% of healthcare organizations suffered at least one cybersecurity breach in the previous three years. The cost of cybercrime runs into the millions: it’s estimated that cybercrime costs organizations $2.9 million every minute, with the average attack costing $3.86 million. During the COVID-19 pandemic, online scams spiked more than 400% in March 2020 alone compared to previous months.
It’s important to note that it’s not just the financial burden that negatively affects today’s organizations, but their reputation and brand name are damaged as well.
Most common cyber attacks
There are dozens of different types of cyber-attacks. The most common ones are listed below:
Man-in-the-middle (MITM) attack
A man-in-the-middle attack (MITM) is where an attacker intercepts the communication between two parties in an attempt to spy on the victims, steal their personal information and/or credentials, and even alter the conversation in some way between the parties.
MITM attacks are becoming less common these days since most email and chat systems use end-to-end encryption, which prevents third parties from interfering with the data that is transmitted across the network, regardless of whether the network is secure or not.
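As a concrete illustration of why certificate-validated encryption blunts MITM attacks, the hedged Python sketch below opens a TLS connection with standard certificate verification; if an interceptor presents a forged certificate, the handshake fails before any application data is exchanged. The host name is just an example:

```python
import socket
import ssl

def verified_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS connection that validates the server certificate.

    If an attacker sits between client and server and presents a forged
    certificate, wrap_socket() raises ssl.SSLCertVerificationError and no
    application data is ever exchanged.
    """
    context = ssl.create_default_context()     # verifies cert chain + hostname
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.version()           # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(verified_tls_version("example.com"))
```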
Distributed Denial-of-Service (DDoS) attack
A DDoS attack is where an attacker will flood a target server with malicious traffic in an attempt to disrupt, and maybe even bring down the target. But, unlike traditional denial-of-service attacks, which most sophisticated firewalls can detect and respond to, a DDoS attack can leverage multiple compromised devices to bombard the target with traffic.
Phishing attack
A phishing attack is where an attacker tries to trick an unsuspecting victim into handing over valuable personal information, such as passwords, credit card details, and so on.
Phishing attacks often arrive in the form of an email pretending to be from a legitimate organization, such as a bank, the tax department, the government, or some other trusted entity.
Phishing is probably the most common form of cyber-attack because it is easy to carry out, and surprisingly effective.
Eavesdropping attack
An eavesdropping attack occurs when a hacker intercepts, deletes, or modifies data that is transmitted between two devices. Often referred to as “snooping” or “sniffing”, the attacker looks for unsecured network communications to intercept and access data that is being sent across the network. This is one of the major reasons why employees are asked to use a VPN when accessing the company network from an unsecured public Wi-Fi hotspot, for example.
Tips to safeguard your personal information
There are several ways today’s organizations can protect their data and those of their customers. Let’s review some of them.
a. Patching the Operating Systems & Software regularly
Every new application and/or new software program can open the door to a cyber attack if companies aren’t proactive in regularly patching and updating all software. This includes updates on every device and those used by employees. It’s important to always check for updates especially when purchasing a new computer. Most system and software updates often include new or improved security features designed to protect the system from vulnerabilities.
b. Install and Activate Software and Hardware Firewalls
Firewalls can keep malicious hackers out and stop employees from browsing unsafe websites. Companies should install and update firewall systems on every computer, smartphone, and networked device. This includes remote/off-site employees, even when they connect through a virtual private network (VPN). To add an extra layer of protection, it’s highly recommended that companies install an intrusion detection/prevention system (IDPS).
c. Secure All Wireless Access Points & Networks
Wireless networking can give attackers an open door to steal confidential company information and data. For secure wireless networking, companies should follow these router/gateway best practices (a simple configuration check is sketched after the list):
- Change the administrative password on new devices frequently
- Set the wireless access point so that it does not broadcast its service set identifier (SSID)
- Configure the router to use Wi-Fi Protected Access 2 (WPA-2), with the Advanced Encryption Standard (AES) for encryption
- Avoid using WEP (Wired-Equivalent Privacy)
- If providing wireless internet access to your customers or visitors, make sure it is separated from your business network
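The following is a small, hypothetical Python sketch of auditing an access point’s settings against the practices above. The setting names and the sample configuration are invented for illustration; a real audit would pull values from the device’s management interface:

```python
# A minimal, hypothetical audit of wireless access point settings against
# the best practices above. Field names are illustrative; a real check
# would read them from the device's management API or exported config.
RECOMMENDED = {
    "broadcast_ssid": False,        # do not advertise the SSID
    "security_mode": "WPA2",        # Wi-Fi Protected Access 2
    "encryption": "AES",            # never WEP
    "guest_network_isolated": True, # keep visitor Wi-Fi off the business LAN
}

def audit_access_point(config: dict) -> list[str]:
    """Return a list of findings where the config deviates from best practice."""
    findings = []
    for setting, expected in RECOMMENDED.items():
        actual = config.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    current = {"broadcast_ssid": True, "security_mode": "WEP",
               "encryption": "TKIP", "guest_network_isolated": False}
    for finding in audit_access_point(current):
        print("FIX:", finding)
```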
d. Educating and training employees
Educating employees in the basics of cybersecurity and keeping them up to date with all the threats your business is exposed to is essential. Insider threats are a major cause of vulnerabilities today. Today’s organizations should have an incident response plan that documents what employees should do in the event they encounter a suspicious file or email and/or are a victim of an attack. Keeping everyone in the loop is critical.
In sum, today’s cyberattacks are becoming more prevalent and affecting businesses of all sizes. However, when companies invest in their cybersecurity infrastructure and cyber resilience, they can protect their business and safeguard customer data from any major cyber attack. | <urn:uuid:6ef5d8cc-88db-4d90-a811-6dcdf67f667e> | CC-MAIN-2022-40 | https://www.cassianetworks.com/blog/the-importance-of-safeguarding-data-and-how-to-minimize-cybersecurity-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00678.warc.gz | en | 0.940519 | 1,207 | 3.09375 | 3 |
As we are getting more and more dependent on technology, we also expose ourselves to all kinds of online threats. With the exponential rise of data breaches and ransomware attacks that are affecting millions of home users and organizations alike, cybersecurity starts to be a top of mind problem for everyone.
Investing in cybersecurity helps companies better address major cybersecurity issues like the impact of a cyberattack, huge financial loss, business disruption, or brand reputation damage.
In a business world where customers’ privacy and data protection are vital, companies need to sharpen the focus on a strong cybersecurity culture and adopt a risk-based approach to security.
To boost security and combat cyber threats, companies are adopting new technologies such as automation and artificial intelligence that come with great opportunities for accelerating growth.
There’s no such thing as AI without massive volumes of data. Big data is being processed and analyzed by AI systems, and companies use this groundbreaking technology to find the root causes of problems humans can’t handle by themselves. With the help of AI, businesses worldwide can better scale their responses to the increasing number of threats and “see” them in advance.
There’s been a lot of buzz around Artificial Intelligence (AI) for the past years, and now it’s playing an important role in many sectors such as healthcare, education, retail, automotive, and even cybersecurity. And it will continue to gain popularity.
For example, in education, AI can fill some key gaps in the way teachers learn and do their tasks. From adjusting the learning system to students’ specific needs to automating administrative tasks, artificial intelligence is changing and revolutionizing the teacher and student experience.
Another useful example is the healthcare sector, where AI spans many areas and is shaping medicine today. From robots that assist doctors in surgery to smart devices that better monitor diseases, AI is significantly reshaping medicine as we know it.
In fact, the artificial intelligence market is projected to grow and reach $190.61 billion by 2025, according to new market research. Moreover, “the adoption of AI has tripled in 12 months, with AI becoming the fastest paradigm shift in tech history”, said the State of AI 2019 Divergence report.
Another research project from Senseon concluded that around 69% of small to medium enterprises (SMEs) will implement AI security solutions in the upcoming five years, while 44% of them are planning to invest in AI defense in the immediate future.
But what’s beyond these numbers? Achim Daub, an executive at one of the world’s biggest makers of fragrances, shares his story (and experience) about implementing AI in the process of making changes in his line of business:
It’s kind of a steep learning curve, Daub says.
We are nowhere near having AI firmly and completely established in our enterprise system.
Richard Zane, the chief innovation officer at UC Health, a network of hospitals and medical clinics in Colorado, recently launched Livi, a virtual assistant which assists patients who use their website and need more information.
Going through the implementation process of the virtual assistant, Zane argued that:
It took a year and a half to deploy Livi, largely because of the IT headaches involved with linking the software to patient medical records, insurance-billing data, and other hospital systems.
How can AI improve cybersecurity for companies?
The use of AI in cybersecurity proves to be essential for organizations that want to better secure their digital assets and stay ahead of threats.
As cyber threats escalate in frequency and sophistication, data protection is a real challenge for companies around the world. Malicious hackers stay abreast of technology changes and take advantage of the potential of automated cyberattacks.
Recovering from security breaches can take a lot of time and money, so companies have started to invest in AI to better detect and automatically block cyber attacks.
Better detection of cyberattacks: AI includes two subsets, deep learning and machine learning algorithms, which use behavioral analysis to better detect anomalies. With the help of these technologies, companies can respond faster to online threats and prevent them from happening in the first place.
Depending on the AI system used, it can detect suspicious activity and quickly find errors within your system or network infrastructure.
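As a simplified illustration of this kind of behavioral anomaly detection, the Python sketch below trains scikit-learn’s IsolationForest on a handful of made-up activity records and flags an out-of-pattern event. Real systems learn from far richer, continuously updated telemetry:

```python
# Simplified illustration of behavioural anomaly detection with
# scikit-learn's IsolationForest. The features and values are invented;
# production systems learn from far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes sent, bytes received, login hour, failed logins]
normal_activity = np.array([
    [1200, 3500, 9, 0],
    [900, 2800, 10, 0],
    [1500, 4100, 14, 1],
    [1100, 3300, 16, 0],
    [1300, 3900, 11, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# A burst of outbound data at 3 a.m. after repeated failed logins
suspicious = np.array([[250000, 1200, 3, 7]])
print(model.predict(suspicious))  # -1 flags the event as anomalous
```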
Accurate prediction: AI can also be used to predict security breaches or cyber threats. Based on its algorithms, it can search through large amounts of data and make predictions based on how the system is trained.
A good example is the biometric authentication solution, which uses artificial intelligence to identify and authenticate people based on unique biological characteristics. The integration of AI technology in biometric systems plays a key role in enhancing methods like facial or iris recognition with a higher level of accuracy.
Faster response: AI proves its efficiency in helping organizations respond faster to the next generation of cyberattacks and combat malware. AI-driven technology can help companies automate countermeasures to avoid becoming the victim of a cyberattack and to fight against online threats.
A study from the Capgemini Research Institute finds that 69% of companies surveyed acknowledge they can’t respond to cyber threats without using Artificial Intelligence.
Save time and money: Every year, ransomware attacks and breaches cost companies millions of dollars to recover from and get their business back on track.
The average time for companies to identify a security breach is about 196 days, and a lot of things can happen during that period. Using an AI solution can help companies save time and money and improve data protection. It does that through automation and by constantly learning how to better safeguard network infrastructures in the modern world.
AI is changing the game for companies that have already implemented it in their technology solutions, and it comes packed with great benefits.
This trend is confirmed by a survey from Gartner that questioned more than 3,000 CIOs and IT leaders about the most disruptive technologies of the moment.
They concluded that “AI is by far the most mentioned technology and takes the spot as the top game-changer technology”. According to the survey, “37 percent responded that they already deployed AI technology or that deployment was in short-term planning”.
Simply put, AI is helping organizations automate part of their workflow to boost the effectiveness of their cybersecurity programs.
The data backs up this trend: 53% of surveyed companies are using machine learning for cybersecurity purposes, according to the 2019 Cloud Threat Report from Oracle and KPMG.
Sharing his thoughts on the future of cybersecurity, Joshua Davis, Director of Channels at Circadence states that:
The future of cybersecurity is going to include humans working alongside automated assistants, where AI/ML assist in operations. Imagine a day where there is Alexa/Cortana/Google At Home-type tools providing cyber intelligence support going forward.
The challenges (and dangers) of using AI in cybersecurity
While the role of AI is expanding in cybersecurity and there are many advantages in adopting it, there are also some downsides we should be aware of.
As companies grow smarter with AI by doing data analysis and finding vulnerabilities, so do cybercriminals. They can take advantage of this emerging technology to deploy it for malicious purposes and cause havoc.
Most likely, threat actors will try to keep up with the AI-based technology by leveling up their skills to launch effective attacks.
According to a research whitepaper from DarkTrace, AI-driven malware might become a reality in the near future. The team behind the research presented some possible scenarios in which artificial intelligence (AI) plays a small role in cyber-attacks.
Security researchers believe that “In the future, AI-driven malware will self-propagate via a series of autonomous decisions, intelligently tailored to the parameters of the infected system”.
They also concluded that:
Weaponized AI will be able to adapt to the environment it infects. By learning from contextual information, it will specifically target weak points it discovers, or mimic trusted elements of the system. This will allow AI cyber-attacks to evade detection and maximize the damage they cause.
More than that, security researchers at IBM have already announced a proof-of-concept AI-powered malware called “DeepLocker”, which was presented at Black Hat USA, in 2018.
According to the research, the malware included hidden code that generated some keys which could unlock malicious payloads, under certain conditions.
We could expect to see more attempts from hackers to explore this technology and facilitate the launch of AI-driven attacks. However, we need to acknowledge the help AI is giving companies to be on the safe path and secure their assets.
Complementing human expertise with AI
When it comes to cybersecurity, the potential for artificial intelligence to have a long-term impact is real, especially for forward-thinking businesses that put their data security first and learn how to better protect it.
Artificial intelligence is not 100% bulletproof, but one thing is clear: it will play a key role in companies’ defense strategy, enabling them to be more prepared for upcoming and imminent cyber threats. | <urn:uuid:f871ce32-57b9-46f0-9f4f-d22fce8f1bd1> | CC-MAIN-2022-40 | https://def.camp/artificial-intelligence-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00678.warc.gz | en | 0.948702 | 1,869 | 2.6875 | 3 |
Installation: At this stage, SOC analysts are advised to deploy a Security Information and Event Management (SIEM) system and a Host-Based Intrusion Detection System (HIDS) to detect attacks. To deny an attack, the Cyber Kill Chain recommends using two-factor authentication, strong passwords, and privilege separation, as well as disrupting the attack using data execution prevention. If attackers successfully penetrate critical corporate IT infrastructure, SOC teams must contain them in a timely fashion to mitigate damage. To this end, the Cyber Kill Chain recommends employing an inter-zone Network Intrusion Detection System, app-aware firewalls, and trust zones.
Command & Control: A Command & Control (C2) server is controlled by hackers to send commands to systems exploited by malware and to receive stolen data from the targeted system(s). C2 servers often blend in with normal traffic and avoid detection. Many of their activities have been detected in cloud-based services, such as file-sharing services and webmail.
At this stage, these attacks can be detected using a Host-Based Intrusion Detection System (HIDS) and a Network Intrusion Detection System (NIDS). The HIDS also assists in disrupting the attack. The Cyber Kill Chain also helps the SOC team deny C2 server attacks using network segmentation, firewalls, and Access Control Lists (ACLs). Besides, these attacks can be degraded using the tarpit scheme, which is used on systems to purposely delay incoming connections; this security control is effective against computer worms. To deceive the hackers, use domain name system redirects. Finally, SOC teams should contain C2 server attacks using trust zones and domain name system sinkholes.
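One common heuristic that NIDS/SIEM detection rules apply to C2 traffic is spotting “beaconing” — callbacks to the same destination at suspiciously regular intervals. The Python sketch below is a toy version of that idea, with invented timestamps:

```python
# Minimal sketch of a beaconing heuristic: C2 implants often "phone home"
# at nearly fixed intervals, so very low variation between connection
# times to one destination is a red flag. Timestamps here are invented.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """Flag a destination whose contact intervals are unusually regular."""
    if len(timestamps) < 4:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    return avg > 0 and (pstdev(intervals) / avg) < max_jitter_ratio

# Connections to one external host, in seconds since an arbitrary start
callback_times = [0, 300, 601, 899, 1200, 1499]   # roughly every 5 minutes
print(looks_like_beaconing(callback_times))        # True -> investigate
```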
Actions on Objectives: To detect and disrupt an attack, Cyber Kill Chain recommends utilizing endpoint malware protection as well as using data-at-rest encryption to deny an attack. Other security controls include using “quality of service” to degrade attacks, employing Honeypots to deceive attackers, and conducting incident response to contain attacks.
Exfiltration: Exfiltration, or data exfiltration, is a malicious attempt to steal data and information. The SOC team can use the SIEM system and data loss prevention (DLP) techniques to detect data exfiltration; DLP also helps in disrupting the attack. They can use egress filtering to deny an attack. Lastly, exfiltration can be prevented using firewalls and ACLs.
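To illustrate the kind of pattern matching a DLP control applies to outbound data, the hedged Python sketch below looks for candidate payment card numbers and validates them with the Luhn checksum. Real DLP engines combine many more detectors (keywords, document fingerprints, file types) than this single rule:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used to weed out random 16-digit numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(outbound_text: str) -> bool:
    """Very small DLP-style check run against data leaving the network."""
    for match in CARD_PATTERN.finditer(outbound_text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False

# "4111 1111 1111 1111" is a well-known test card number that passes Luhn
print(contains_card_number("invoice attached, card 4111 1111 1111 1111"))  # True
print(contains_card_number("meeting notes for Tuesday"))                   # False
```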
You can also perform malware and malicious traffic investigation with the Security Orchestration, Automation, and Response (SOAR) system.
Nowadays, organizations mostly rely on outsourcing companies or service providers, such as cloud computing, Software-as-a-Service (SaaS), and data center providers, to streamline their day-to-day business operations and ensure continuity. However, it is vital to make sure that the services are being provided through the effective implementation of internal controls. To this end, the role of the SOC 1 audit is indispensable.
The SOC 1 audit is used in the auditing of third-party service providers whose services are relevant to their clients’ financial reporting. This auditing system is based on an attestation standard developed by the American Institute of Certified Public Accountants (AICPA).
There are two types of SOC 1 reports, namely the SOC 1 Type I report and the SOC 1 Type II report. The former is an attestation of controls at a service provider at a specific point in time, whereas the latter is an attestation of controls at a service provider over a specified period of time.
The SOC 1 checklist explains the specifics of each system component that will be assessed by your auditor during your SOC 1 audit. To prepare your SOC 1 compliance checklist, you can follow the guidance available from KirkpatrickPrice.
As this article has shown, all stages of the Cyber Kill Chain are very useful for a SOC team. The Cyber Kill Chain covers all stages of a potential attack and recommends various security solutions to detect, deny, disrupt, degrade, deceive, and contain the attack at each stage. Among them, SIEM is especially valuable.
Selecting an effective SIEM tool is not an easy decision for enterprises, as there are a lot of similar products in today’s IT market, and a wise approach is required to select your product. Logsign SIEM is a next-gen Security Information and Event Management solution that focuses on combining Security Intelligence, Log Management, and Compliance.
In the last section, Exfiltration, we saw how Logsign SOAR helps in performing malware and malicious traffic investigation. In addition, you can also carry out email phishing investigations, vulnerability management, case management, compromised credential response, and, more importantly, automated threat hunting.
In the world of Cybersecurity, there is no shortage of buzz words and techno jargon. Often many of these words are used together, causing even more confusion. One such grouping of these words is “Governance, Risk, and Compliance,” collectively known as the “GRC.” While a business needs all three of these to work together in a seamless fashion, they do have their individual purposes as well.
Exactly what do these terms mean?
The definitions for these terms can be explained as follows:
Governance
As it relates to IT, this is how an organization is run. Typically, this will be from a top-down structure. For example, at the top is the CISO; beneath him or her are the managers from the IT Department and the IT Security team, followed by the Project Managers who are responsible for managing the employees that are getting the deliverables done for the client. A typical example of this would be a software development team. The developers report to the Project Manager, who in turn reports to the Department Manager. An effective governance chain of command exhibits the following characteristics:
- A clear and transparent line of communication: The vision, the goals and the objectives must be transmitted all the way down to the lowest-ranking IT member, and likewise, the needs and ideas of the IT Security team must be heard, listened to, and transmitted back to the CISO for evaluation and consideration.
- Effective resource allocation: The CISO and the respective managers work together as a cohesive unit to distribute (sometimes scarce) resources to effectively manage the Cyber threat landscape as best as possible.
- A system of checks and balances: The CISO and his or her top-level managers must enforce the divisional lines of who is responsible for what, and also making sure that there is a strong sense of accountability.
- Rewards and acknowledgment: A good Governance system will reward those employees who have made an impact in protecting the digital assets of the company, as well for other employees who have maintained a good level of Cyber Hygiene. Likewise, rather than singling out and punishing employees who may have made a mistake, constructive criticism will instead be offered.
Compliance
Compliance refers to your company’s policies and rules that abide by the security requirements of other entities that you deal with. Probably the best examples of this are the data privacy laws, most notably the GDPR and the CCPA. They have provisions and mandates that your company must meet, primarily to safeguard the Personal Identifiable Information (PII) datasets that you have been entrusted with. Characteristics of a good Compliance program include:
- Choosing the right framework(s) or methodologies: This will guide you in the process of selecting the best controls possible to protect confidential information and data.
- Having a change management system in place: Any adjustments or changes that you make to the controls are well-documented, and any upgrades or new tools/technologies to be deployed are first tested in a controlled environment before being released to production.
Risk
This typically refers to the amount of “pain” your company can withstand before a threat variant causes permanent damage to your IT and Network Infrastructure. There are other definitions and ways to calculate risk, but some of the common traits of a good Risk Management program are as follows:
- Your company has created a categorization scheme: With this, you take an inventory of all your digital assets and decide (based upon both quantitative and qualitative factors) which are most, and which are least, exposed to impact if your organization becomes the victim of a security breach. For example, the database that houses the PII datasets would be a prime target and will therefore need the most controls to protect it. Because of this, it will receive a numerical ranking of 10 (with 10 being most vulnerable and 1 the least vulnerable). By contrast, the documented minutes from meetings held a long time ago are unlikely to be a sought-after target, thus needing a minimal number of controls, if any, giving them a ranking of about 3. (A toy example of such a ranking appears after this list.)
- The controls are monitored: Just like the other components of your IT and Network infrastructure, Risk Controls can go stale and lose their effectiveness if they are not kept up-to-date with the latest patches and upgrades. Therefore, a good Risk Management program will keep an eye on all of your controls on a real time basis, and alert you and your IT Security team if any of them need further attention and/or optimization.
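As a toy illustration of how such a categorization scheme can be turned into a ranked register, the Python sketch below scores a few invented assets; the asset names, weights, and numbers are purely illustrative:

```python
# Toy illustration of an asset risk register. Assets, scores, and the
# scoring formula are invented; real programs combine many more
# quantitative and qualitative factors.
ASSETS = [
    # (asset, likelihood of being targeted 1-10, business impact 1-10)
    ("Customer PII database", 9, 10),
    ("Public marketing site", 6, 4),
    ("Archived meeting minutes", 2, 2),
]

def risk_score(likelihood: int, impact: int) -> float:
    """Simple weighted product; higher numbers warrant more controls."""
    return round(likelihood * impact / 10, 1)

for name, likelihood, impact in sorted(
        ASSETS, key=lambda a: risk_score(a[1], a[2]), reverse=True):
    print(f"{name}: risk {risk_score(likelihood, impact)} / 10")
```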
Now that you have a greater understanding of what IT Governance, Risk, and Compliance are about, you may be motivated to craft an effective GRC plan for your company. Developing such a plan is something that you should not attempt to do on your own. A future article will take a deeper dive in how to go about this and whom you should consult in the process. Keep in mind that a GRC plan is a document that will be scrutinized by regulators and auditors, even insurance companies, as you apply for a Cybersecurity Insurance Policy.
Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does Cybersecurity Consulting through his private practice, RaviDas Tech, Inc. He is also studying for his Certificate In Cybersecurity through the ISC2. | <urn:uuid:e828170c-a798-459d-a78e-8088cf440b95> | CC-MAIN-2022-40 | https://platform.keesingtechnologies.com/understanding-the-grc-it-governance-risk-and-compliance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00678.warc.gz | en | 0.950769 | 1,100 | 2.5625 | 3 |
Jan. 28 marks Data Privacy Day each year. Individuals are increasingly aware of the importance of data privacy, and governments continue to implement and tighten associated regulations.
How successfully are organizations dealing with data privacy? It varies wildly; there are all too frequent reports of data privacy failures, often associated with ransomware. A Dark Reading poll that ended in December 2021 found that fewer than one-quarter of organizations believe they are fully prepared for a ransomware attack, leaving the remaining three-quarters highly susceptible, which in turn threatens data privacy.
Ransomware will continue to be a hugely successful method of attack that organizations must defend against, with data privacy regulations a significant part of the equation. [Note: Omdia research subscribers can read more on this here: "Data Privacy Day 2022: Ransomware’s Success is Data Privacy’s Failure."] Focusing on the information life cycle (create, process, store, transmit, destroy) will help organizations understand what data requires protection and where it resides. Furthermore, classifying data appropriately is important as all data is not equal: Some data will require strong protection, and other data will not. By understanding these nuances, organizations can begin exploring more advanced approaches to ransomware as with the use of artificial intelligence (AI) to see unseen patterns in the data that may point to a potential incursion or threat.
Attackers using malware can block access to data and/or systems, encrypt and lock data, or even move company data off-site. Attacks that take place over a keyboard can be particularly difficult to detect and mitigate as they can dwell over time, appearing innocuous at first as attackers may use trusted routes of ingress as they move laterally through a target network. AI techniques such as unsupervised deep learning (DL) can help organizations understand attack targets and vectors by encouraging observability across the data life cycle. If an organization can detect the wake of activity created by a potential wrongdoer, it stands a good chance of blocking or diverting an incursion before systems can be locked or data encrypted.
Here, AI offers many helpful tools that can help companies deal with malware. Statistical and mathematical machine learning (ML) algorithms like "k-nearest neighbor" and "decision trees" can identify malware payloads and known attack patterns, for example. Where AI really steps into the spotlight, however, is with DL neural networks. Unlike statistical and mathematical ML technologies that use known rules (e.g., "this is or is not a piece of malware") to identify a potential attack, DL technologies can actually deduce the rules themselves. Popular DL algorithms — including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) — can parse huge amounts of disparate data to build an understanding of the patterns in that data, patterns that may turn out to represent an attack.
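For illustration only, the Python sketch below builds a small Keras sequence model (an LSTM over byte values) of the kind that could be trained to score payloads as benign or malicious. The training data is random placeholder, so the model learns nothing useful here; a real detector needs large labeled corpora and careful evaluation:

```python
# Minimal sketch of a sequence model for payload classification using Keras.
# The data below is random placeholder; a real detector would be trained on
# large corpora of labelled benign and malicious samples.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 256          # first 256 bytes of each sample
VOCAB = 256            # one token per possible byte value

model = keras.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=32),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),   # P(malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: 100 byte sequences with random benign/malicious labels
x = np.random.randint(0, VOCAB, size=(100, SEQ_LEN))
y = np.random.randint(0, 2, size=(100,))
model.fit(x, y, epochs=1, batch_size=16, verbose=0)

print(model.predict(x[:1]))   # a score near 1.0 would mean "likely malicious"
```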
IT and security practitioners considering investing in AI as a means of fighting ransomware must first build an understanding of their entire data landscape as it pertains to data security and privacy. This means building solid metadata defining ownership, access, privacy exposure, locality, and so on. On top of this, the organization must establish a set of governance requirements that span the full information life cycle (create, process, store, transmit, destroy). Fortunately, both within and beyond the confines of the security industry, technology providers are presently laser-focused on helping companies build a consistent view of company operational, system, and analytical data using the concept of a data fabric.
Over time, Omdia expects these metadata efforts to more closely align between security and business practices. At that time, companies will likely provision an AI-capable malware tool in the same way they provision any cloud-native service, by specifying data sources and flipping the "on" switch. Until then, organizations without an existing investment in a data fabric may find themselves somewhat handicapped without the ability to "observe" the entirety of the system of resources they're seeking to protect. In other words, fighting malware, just like fighting data privacy risks, demands a high degree of data literacy, domain expertise, and governance. | <urn:uuid:56dc6156-468b-42e5-89b5-fe9d6cfad3e4> | CC-MAIN-2022-40 | https://www.darkreading.com/omdia/data-privacy-day-2022-how-can-ai-help-in-the-fight-against-ransomware- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00678.warc.gz | en | 0.926884 | 847 | 3.078125 | 3 |
4G is the newest generation of high speed wireless broadband technology which offers a higher rate of data transfer than 3G technology. The higher rate of data transfer accommodates more applications that demand a high speed Internet connection in order to function properly. The letter G in 4G technology stands for generation which means that 4G is fourth generation wireless broadband Internet connectivity.
4G networks are not yet available in some areas; however, the new technology provides additional capabilities for the use of high speed broadband Internet communications. There are several different technologies that make up a 4G network. In this article, we will help you to understand 4G technology and some of the capabilities a 4G network can offer when it comes to accommodating new applications.
Different Types of 4G Network Communications
2G, 3G, and 4G technologies are defined by an organization known as the International Telecommunications Union also known as the ITU. The ITU is an organization that sets the standards for global telecommunications to ensure that networks and technologies integrate well with one another which results in a uniform telecommunications system. Currently 4G technologies involve more than one type of communications system defined by the ITU which includes the following:
- LTE: 4G LTE is used by Verizon and other wireless carriers and is a technology that offers faster upload and download speeds. LTE stands for Long Term Evolution and utilizes a radio signal as opposed to signals that are transmitted via microwave technology. In order to access a 4G network with LTE it is necessary to use an LTE modem, ExpressCard, or PCMCIA card. Connectivity is also established via the use of cell phones that are designed with 4G LTE capability. 4G LTE is considered to provide competition for other cellular carriers that utilize WiMAX technology to establish high speed connectivity.
- WiMAX: Some 4G networks utilize what is known as WiMAX technology in which high speed broadband Internet is achieved via the transmission of microwave technology. Instead of the standard 802.11, WiMAX is based on 802.16 and provides the same connection as WiFi, only the connection is faster and more efficient with speeds of up to 70Mbps (megabits per second). A line of sight connection is not necessary for cellular devices to operate on a WiMAX connection and a single base station is capable of handling thousands of users. WiMAX stands for Worldwide Interoperability for Microwave Access and is offered by cellular carriers such as Sprint, Nextel, and T-Mobile.
- Orthogonal Frequency Division Multiplexing: OFDM is another type of 4G technology which supports high speed broadband connections by transmitting more than one stream of data over a variety of mediums including microwave (WiMAX), coaxial cables, fiber optics and twisted pair connections. Although the concept of OFDM has been around for decades it is more widely used in the current age of technology due to its ability to adapt well to high speed data requirements for today’s mobile devices. OFDM is known for its bandwidth efficiency and allows data to travel faster even when noise is present in the lines. OFDM is typically used with 802.11n standard WiFi, LTE, and WiMAX technologies.
- Ultra Mobile Broadband: This 4G network technology is commonly referred to as UMB and supports faster data speeds to accommodate voice and online media applications and other programs that rely on a high speed broadband connection to operate efficiently. UMB supports a wide variety of 4G network low latency services and integrates well with EVDO (Evolution Data Optimized) systems. This type of technology is expected to revolutionize the mobile communications industry within the next few years due to its ability to support low latency applications and voice at one end of the network while handling high speed broadband data traffic on the opposite end.
In terms of actual speeds for each type of technology, WiMAX and LTE 4G networks can offer speeds as high as 100Mbps (megabits per second), and when you are located in a set position within a reasonable distance of the base station, 4G networks with this type of technology can offer speeds as high as 1Gbps (gigabits per second). This represents a significant increase over third generation (3G) technology, which typically operates at speeds of up to 3.1Mbps.
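Because Mbps measures megabits (not megabytes) per second, a quick back-of-the-envelope calculation shows what the jump from 3G to 4G means in practice; the 700 MB file size is just an example:

```python
# Back-of-the-envelope download times. Mbps measures megabits per second,
# so a file size in megabytes is multiplied by 8 to convert to megabits.
def download_seconds(file_megabytes: float, link_mbps: float) -> float:
    return file_megabytes * 8 / link_mbps

file_mb = 700  # e.g. a 700 MB video file
for label, speed in [("3G (3.1 Mbps)", 3.1), ("4G WiMAX (70 Mbps)", 70),
                     ("4G LTE (100 Mbps)", 100)]:
    print(f"{label}: about {download_seconds(file_mb, speed):.0f} seconds")
```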
4G Network Applications and Devices
4G technology opens up a host of possibilities when it comes to greater coverage for high speed broadband connectivity. As 4G networks become more widely available, applications that require a high speed broadband connection to function without latency will be easier to use. Additionally, different devices that cannot function without a high speed connection will increase in popularity and become more widely available in the marketplace.
In addition to providing high speed wireless broadband, 4G capabilities extend to LED TVs and HDTVs that offer connectivity to the Internet. This provides for a wider genre of programming in addition to other multitasking capabilities in home entertainment systems. Additionally, access to and use of multimedia programs such as video streaming, video chat on Skype, SMS messaging, audio applications, and mobile TV applications is much more efficient and free of issues such as buffering, latency, and other problems that commonly occur with slower connections.
In terms of devices which offer 4G capability, when used on a 4G network you can construct high speed video camera surveillance systems, utilize web cams with zero latency, control your entire home entertainment system and even connect household appliances which are equipped with high speed broadband capability to a 4G network. By using 4G networks you can also enjoy Internet TV with little to no disruption, take part in video conferencing, use VoIP applications such as Skype in addition to enjoying the basics such as faster browsing speeds on the Internet and applications which offer 3D capability.
The Switch to 4G Networks
If you are asking yourself why communications technology is gradually switching to 4G, the answer is quite simple. 4G networks offer a faster rate of data transfer: if the engineering of the network is executed correctly, you can achieve data transfer speeds of up to 100Mbps for LTE and up to 70Mbps for WiMAX 4G networks.
4G networks are also designed for high rates of data transfer with end-to-end Internet Protocol connectivity. The advantage of this is that smartphones and other portable devices can function as an Internet hub, which makes them ideal devices for transmitting data. Where 4G networks will take us in the future remains to be seen; meanwhile, the faster rate of data transfer has already opened up a new array of possibilities for staying connected.
Social media empowers businesses to communicate with prospects and customers like never before. At the same time, Facebook, Instagram, Twitter, and other social networks expose businesses to cyber threats. And today’s companies must plan accordingly. Otherwise, failure to account for social media cyber-threats can cause a costly and time-intensive data breach. In this scenario, the breach can put a business, its employees, and its customers in danger.
A clear understanding of social media cyber threats and the risks associated with them is key. If a company knows the ins and outs of social media cybersecurity, it can prepare for current and emerging cyberattacks. Best of all, this business can protect against social media cyber-attacks and minimize their impact.
Social media cyber threats: what you need to know
Businesses use social media to share content and photos with prospects and customers from around the world. Yet companies may inadvertently “overshare” via social media. In doing so, they can expose themselves and anyone who engages with their social media pages to cyberattacks.
Research indicates cybercriminals use social media posts to target potential attack victims. Hackers also use social networks as delivery mechanisms for cyberattacks. Furthermore, they can utilize social networks to retrieve information about a user’s contacts, location, and activities.
Let’s not forget about how cybercriminals leverage social media to steal users’ authentication credentials upon login, either. In these instances, hackers can access a wealth of social media user data. They can even retrieve information about a user’s friends and colleagues across various social networks.
Meanwhile, U.S. Department of Health and Human Services (HHS) data indicates cybercriminals generate approximately $3.5 billion annually due to social media cyberattacks. HHS points out hackers use these attacks to cause operational and brand reputation damage. Social media cyberattacks can put a dent into a business’s bottom line, too.
Ultimately, cybercriminals can use social media cyber attacks to access substantial volumes of sensitive data. They appear likely to continue to launch advanced attacks against social media users. However, companies that prepare for social media cyber attacks can identify the early signs of such issues. Moreover, they may be able to stop social media cyberattacks before they lead to data breaches.
Common types of cyberattacks, and how social media users can guard against them
To protect against social media cyberattacks, businesses must first define these attacks, how they work, and their potential impact. Now, let’s look at three common types of cyberattacks, along with tips to protect social media users against these attacks.
Social media phishing occurs when cybercriminals set up fraudulent social media pages that replicate those associated with businesses. Hackers craft these pages in the hopes that social media users will enter their login information. When this happens, cyber criminals will be able to use this information to fully access a user’s social media account.
If a cybercriminal successfully initiates a social media phishing attack, he or she can spy on a user. At this point, a hacker can view any information shared via the user’s social media page. Also, he or she can send messages and publish content in the same way the user would.
To protect against social media phishing attacks, businesses must educate their employees and customers about such attacks. Encourage these individuals to click on social media links or attachments only if they come directly from a business. Otherwise, if a link or attachment comes from an unknown source, it should be avoided at all costs.
In addition, companies should urge employees and customers to protect their social media accounts. People can restrict access to their social media profiles, which can reduce the risk of cyberattacks that lead to data breaches.
Cybercriminals can use malware to hijack a social media user’s credentials. To do so, a cybercriminal can provide a social media user with a malicious link or email attachment. If the user clicks on the link, he or she may download malware onto their device.
Malware can be launched onto a desktop computer or mobile device. It can monitor a user’s actions and record information about him or her, without this individual’s knowledge. Or, malware can take over a user’s device. In this scenario, malware can prevent a user from accessing their device and compromise all information stored on it.
Once again, knowing the source of a link or email attachment is key for social media users. If these users watch for suspicious links or email attachments, they can avoid clicking on them. As a result, they can lower their risk of deploying malware.
Businesses can encourage their employees and customers to download ad blockers that protect against malware attacks. Ad blockers automatically remove or modify advertising content on web pages, comparing page elements against filter lists of known malicious or unwanted sources. If advertising content appears malicious, an ad blocker will prevent it from loading.
Hackers can launch man-in-the-middle attacks to exploit insecure applications. In these instances, hackers insert themselves between a user and a website. They can then capture user data.
A social media man-in-the-middle attack can involve email hijacking. For instance, a hacker can view a social media user’s profile to learn about the user and the topics that interest him or her. The hacker can then send the user a message built around those topics, urging him or her to click on a malicious link. If the user clicks the link, he or she can be taken to a fraudulent social media login page. And if the user enters their login information, the cybercriminal can gain access to this individual’s social media account.
Businesses should encourage employees and customers to verify any social media page that they access is safe. Most web browsers have a lock symbol next to the URL, which verifies a website’s security. If this symbol is not present, look for “HTTPS” preceding the site’s address, which indicates the site is secure.
Biometric authentication for social media sites can be beneficial as well. It can be implemented on a user’s mobile devices and requires him or her to use their fingerprint to validate their identity. Thus, biometric authentication offers greater protection than traditional passwords to access social media accounts.
Help social media users optimize their security posture
Social media cyberattacks are ongoing. Regardless, companies can do their part to help their employees and customers guard against these attacks. Here are three ways businesses can help their employees and customers with social media cyberattacks.
Offer security awareness training
Provide security awareness training to employees. The training can teach workers about social media cyberattacks and other cyber threats. It can offer tips and insights to help workers mitigate such issues.
As employees learn about cybersecurity, they can share their knowledge with customers. For example, employees can provide customers with recommendations to ensure their online accounts with a business are protected against cyberattacks. Customers can then apply this knowledge to other online business accounts, along with their social media profiles.
Conduct ongoing cybersecurity audits
Perform a cybersecurity audit to learn about any security issues that can impact a business, its employees, and its customers. The audit can be completed by a cybersecurity expert. It allows a company to obtain insights into its security posture and what can be done to improve it. Plus, the audit enables a business to identify and address any security vulnerabilities.
Ensure a cybersecurity audit accounts for a company’s social media presence, too. The audit can help a business examine its susceptibility to social media cyberattacks. Depending on the audit results, a company can then explore ways to take its social media cybersecurity efforts to the next level.
Implement biometric authentication

Take advantage of biometrics across all business operations. A company can use biometrics to secure its networks and systems against cyberattacks. It can also incorporate biometric authentication to ensure social media users can access the company’s pages without putting their sensitive data in danger.
Companies can integrate biometrics into their mobile applications. They can also encourage employees and customers to utilize biometrics when accessing their businesses’ social media pages. Over time, more businesses than ever before can realize the value of biometrics. Most importantly, these companies’ employees and customers can reap the benefits of biometrics for enhanced security.
Prioritize social media cybersecurity
Cybercriminals are working diligently to attack businesses of all sizes and across all industries. They are increasingly launching social media cyberattacks to illegally access the sensitive data of individuals who engage with these businesses.
Companies cannot stop every social media cyberattack, but they can warn their employees and customers about them. Businesses that educate their stakeholders about social media cyberattacks can encourage these individuals to proactively guard against them. And this can help employees and customers protect their sensitive data against hackers.
Get started on teaching employees and customers about social media cyberattacks today. From here, a business can ensure these stakeholders can remain safe against these attacks long into the future. | <urn:uuid:a788c3c3-8705-4b05-964a-0fd236c72135> | CC-MAIN-2022-40 | https://www.bayometric.com/how-social-media-use-exposes-you-to-cyber-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00078.warc.gz | en | 0.923245 | 1,904 | 2.921875 | 3 |
IPv6 is the most recent version of Internet Protocol (IP). It's designed to supply IP addressing and additional security to support the predicted growth of connected devices in IoT, manufacturing, and emerging areas like autonomous driving.
The primary reason to make the change is IPv6 addressing. IPv4 is based on 32-bit addressing, limiting it to a total of about 4.3 billion addresses. IPv6 is based on 128-bit addressing and can support roughly 340 undecillion (3.4 × 10³⁸) addresses. Having more addresses has grown in importance with the expansion of smart devices and connectivity. IPv6 provides more than enough globally unique IP addresses for every networked device currently on the planet, helping ensure providers can keep pace with the expected proliferation of IP-based devices.
In addition to addressing, IPv6 benefits include:
IPv6 addresses are written as hexadecimal digits, with each hex digit representing 4 bits.
IPv6 addressing can reduce routing table size by allowing ISPs to aggregate customers' prefixes into a single prefix and present only that one prefix out to the IPv6 internet.
Many networks will implement IPv6 concurrently with IPv4 in a dual-stack design, while newer networks will deploy IPv6 natively but still allow for compatibility with IPv4 if needed. This addresses current government mandates for IPv6 use. | <urn:uuid:c27a5cc2-7f54-41c0-92e1-567ce9d224e8> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/solutions/ipv6/overview.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00078.warc.gz | en | 0.92279 | 276 | 3.84375 | 4 |
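The difference in scale and structure is easy to see with a short sketch. The example below uses Python's standard ipaddress module purely as an illustration: it prints the full 128-bit form of an address (eight groups of four hex digits, each hex digit representing 4 bits) and shows the kind of prefix aggregation described above, where several customer prefixes collapse into a single announcement.

```python
import ipaddress

# One IPv6 address: eight groups of four hex digits (each hex digit = 4 bits).
addr = ipaddress.ip_address("2001:db8::1")
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # 2001:db8::1  (runs of zeros collapsed for readability)

# Prefix aggregation: four customer /34 prefixes collapse into one /32
# announcement, which helps keep IPv6 routing tables small.
customer_prefixes = [
    ipaddress.ip_network("2001:db8::/34"),
    ipaddress.ip_network("2001:db8:4000::/34"),
    ipaddress.ip_network("2001:db8:8000::/34"),
    ipaddress.ip_network("2001:db8:c000::/34"),
]
print(list(ipaddress.collapse_addresses(customer_prefixes)))
# [IPv6Network('2001:db8::/32')]
```

The 2001:db8::/32 block used here is reserved for documentation, so the addresses are placeholders rather than real allocations.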
An NFT (Non-Fungible Token) is a data structure that points at a particular data object in a unique way. See it as a way of naming digital objects, such as photos, texts, audio or video, in a way that allows referring to them with no ambiguity.
The ability to refer to data objects allows us to “mention” them in transactions. This seemingly trivial ability, when combined with the ability to create immutable records of transactions (as provided by blockchains), allows us to create immutable records that refer to data objects.
Technically, NFTs do not require blockchains. You could take a photo of a cat, create an NFT for this photo, which is essentially a unique pointer to (or: a descriptor of) it, and then go on to write a real contract on paper that says “this photo of a cat, bearing this unique ID, is hereby assigned to John Smith”, whatever this assignment means.
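To make the idea of a unique pointer concrete, here is a minimal sketch of one common approach: hashing the file's contents to produce a verifiable fingerprint. The file name and metadata are hypothetical, and real NFT metadata formats vary, but a content hash of this kind is a typical way to bind a token to one specific digital object.

```python
import hashlib
import json

def describe_asset(path: str, title: str) -> dict:
    """Build a simple, verifiable descriptor for a digital object.

    The SHA-256 digest acts as a fingerprint: anyone holding the original
    file can recompute it and confirm the descriptor points at exactly
    that object and no other.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"title": title, "sha256": digest}

# Hypothetical file, for illustration only.
descriptor = describe_asset("cat.jpg", "Photo of a cat")
print(json.dumps(descriptor, indent=2))
```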
Blockchains and smart contract technologies allow for such digital agreements to be stored in a public immutable record that does not allow anyone to change it once it was written. The combination of NFTs and blockchain-based smart contracts thus allows us to securely record agreements that declare ownership of digital goods. If you have any file (photo, text, video, etc.), you can create an attestation that tells the entire world that you assign this file to be owned by whoever. What does this “ownership” mean? Good question; but whatever it means, billions of dollars have already been paid for such ownership. Is this real? The money surely is, but is the value?
White Paper 5
Contents

1. Introduction
2. Data Center Scale
3. Embodied Carbon vs. Operational Carbon
4. Measuring Embodied Carbon for a Whole Life Carbon Approach
1. Introduction

Climate change continues to affect our environment at an increasing rate while our understanding of the impact between human activities and the increase in measured greenhouse gases (GHG) is becoming clearer. The demand for IT services is contributing to immense growth within the data center sector and this contributes to increasing carbon impacts from construction and operations. In terms of built-out area and overall energy usage, data centers currently remain a small subset of commercial and industrial emissions. However, it does not take much imagination to see a future where they are one of the highest consumers of energy and resources. As more systems move online, such as most things leisure, education, and work-related, contrasted by a reduction in commercial real estate, an increase in data center carbon footprint can be forecast.
Early data center design was primarily focused on cost, resiliency, and uptime but later evolved to include efficiency and the reduction of operational energy consumption. The economic approach to design decision-making has considered the data center lifecycle, but only in terms of capital and operational expense and total cost of ownership. Increasing the efficiency of the data center or reducing the resources required to build it lessens the GHG impact but there are limitations. There remains the risk of design decisions having unintended consequences if embodied carbon emissions are not considered.
Embodied carbon includes all emissions not attributed to operations and the use of electrical energy and water in the day-to-day running of the data center. These include those from extraction, manufacturing, and transportation, as well as installation of materials and components used to create the built environment. It also includes the lifetime emissions from in-use activities including maintenance, repair, replacement and the end-of-life activities of deconstruction, transportation, waste processing, and disposal. Reducing the GHG emissions in support of a carbon-neutral goal for the data center industry must be performed with a comprehensive approach. Embodied carbon is the sum of GHG emissions normalized to an equivalent CO₂ (CO₂e) number.
Including the Whole Life Carbon approach to the design process can identify which design selections achieve the lowest carbon emissions over the entire lifecycle of the data center, where focusing solely on operational emissions may fall short.
2. Data Center Scale
According to the Jones Lang LaSalle (JLL) research group, there was 611.3 MW of data center capacity under construction in 2020 in North America, and another 418.2 MW under construction in EMEA¹. This is an increase from 549.8 MW at the end of 2018. Density is still increasing, however, the amount of building area required to keep up with demand for the services continues to increase.
The United Nations Environment Programme (UNEP) Global Alliance for Buildings and Construction (GlobalABC) publishes an annual global status report in which two trends were highlighted²: “CO₂ emissions from the building sector are the highest ever recorded” and “new GlobalABC tracker finds sector is losing momentum toward decarbonization.” Both trends are concerning, given that 38% of global emissions and energy use can be attributed to the overall building sector. Decreasing the share of global emissions attributable to building operations requires continued effort on efficiencies and, especially for data centers, the expedited decarbonization of the electric power grid.
The opportunities exist to improve these on new builds and continue in the future through technology innovation and equipment retrofits as advances become available.
There is an additional 10% of global emissions attributed to the construction of buildings. These emissions, created at the beginning of the building lifecycle, cannot be reduced over time. Addressing the sources of these emissions during design and procurement is an important consideration and contributes to immediate embodied carbon reductions. Another perspective that highlights the importance of embodied carbon in relation to operational carbon is grid decarbonization. If the grid providing the operational energy is decarbonized over time, and the embodied carbon emissions remain the same, the embodied carbon relative to operational carbon will increase.
3. Embodied Carbon vs Operational Carbon
Language of Carbon
There are terms available which we need to understand to ensure we are all speaking the same language: the Language of Carbon³. Greenhouse gases (GHG) trap heat in the atmosphere and include carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O). The release of these gases occurs while burning fossil fuels or biological materials, in chemical reactions during materials production, in the transportation of fossil fuels, from agricultural activities, and treatment of wastewater, among others.
Fluorinated gases, although synthetic, are also considered GHG; they have widespread use as substitutes for ozone-depleting substances in refrigerants, as well as in industrial processes such as the manufacturing of aluminum and semiconductors. About 92%⁴ of these gases are used as substitutes for ozone-depleting substances, and such substitutes are often found in data center cooling systems.
Although typically emitted in small quantities, they are extremely potent greenhouse gases and deemed to hold high Global Warming Potential (GWP).
GWP is a metric used to allow comparisons of the global warming impacts of different gases. A high GWP means that small atmospheric concentrations can have disproportionately large effects on global temperatures. GWP and Carbon Dioxide Equivalent (CO₂e) are used to normalize the emissions calculations.
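As a rough illustration of how GWP and CO₂e fit together, the sketch below converts a hypothetical emissions inventory into a single CO₂e figure. The GWP values are approximate 100-year factors that differ slightly between IPCC assessment reports, so treat both the factors and the masses as placeholders rather than authoritative data.

```python
# Approximate 100-year GWP factors (values vary slightly by IPCC report);
# CO2 is the reference gas, so its GWP is 1 by definition.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265, "HFC-134a": 1300}

# Hypothetical annual emissions, in tonnes of each gas.
emissions_t = {"CO2": 1000.0, "CH4": 2.0, "N2O": 0.5, "HFC-134a": 0.1}

co2e_t = sum(mass * GWP_100[gas] for gas, mass in emissions_t.items())
print(f"Total: {co2e_t:,.1f} t CO2e")  # Total: 1,318.5 t CO2e
```

Even small quantities of high-GWP refrigerants contribute a noticeable share of the CO₂e total, which is why fugitive emissions from cooling systems matter.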
Embodied + Operational = Whole Life Carbon
Significant efforts have been invested in data center operational efficiencies. And rightly so, because the energy use density for data centers per unit area is much higher than for other types of buildings. Innovations have reduced the amount of additional energy required to support the critical load, as can be seen in the reduction of average PUE (Power Usage Effectiveness) over the last decade. There are also efforts underway to apply the excess heat and energy produced by the facility to offset the energy used by the critical IT load. This is termed waste heat reuse, and it is measured by ERE (Energy Reuse Effectiveness).
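For readers unfamiliar with the two ratios, a minimal sketch of the calculations (with made-up energy figures) looks like this:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    # Power Usage Effectiveness: 1.0 would mean every kWh reaches IT equipment.
    return total_facility_kwh / it_kwh

def ere(total_facility_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    # Energy Reuse Effectiveness: credits energy (e.g. waste heat) reused
    # outside the data center boundary.
    return (total_facility_kwh - reused_kwh) / it_kwh

# Illustrative annual figures for a small facility.
print(pue(14_000_000, 10_000_000))             # 1.4
print(ere(14_000_000, 2_000_000, 10_000_000))  # 1.2
```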
Investments to reduce the operational energy of the facility are recouped over time during the lifecycle of the facility. Therefore, such reductions are not accounted for until 5, 10, or 30 years into the future. Embodied carbon, on the other hand, is mostly spent upfront when the building is constructed. This is a major reason to include embodied carbon within analyses and design decisions. Understanding embodied and operational carbon is needed to allocate and account for each appropriately.
Combining embodied and operational emissions to analyze the entire lifecycle of a building throughout its useful life and beyond is the Whole Life Carbon approach. This ensures that operational emissions (CO₂e) and the embodied carbon of materials, components, and construction activities are both calculated and available to allow comparisons between different design and construction methods. The terms carbon and energy are often used interchangeably, but they are not the same. They follow similar paths but can have different measurements depending on several factors, e.g., the source of energy and the emissions associated with the
production and use of that energy. The emissions associated with energy can vary depending on geographic location and grid energy supply mix.
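A simple Whole Life Carbon sketch makes this interplay visible. Every figure below is an assumption chosen only for illustration; the point is that the up-front embodied carbon is fixed, so as the grid decarbonizes it becomes a larger share of the whole-life total.

```python
# Illustrative whole-life comparison for one design option (all numbers are
# assumptions, not measurements).
embodied_t_co2e = 12_000        # up-front emissions from materials and construction
annual_energy_mwh = 20_000      # operational electricity per year
grid_intensity = 0.40           # t CO2e per MWh in year 1
grid_decarbonization = 0.05     # assumed 5% reduction in grid intensity per year
lifetime_years = 15

operational_t_co2e = sum(
    annual_energy_mwh * grid_intensity * (1 - grid_decarbonization) ** year
    for year in range(lifetime_years)
)

whole_life = embodied_t_co2e + operational_t_co2e
print(f"Operational: {operational_t_co2e:,.0f} t CO2e over {lifetime_years} years")
print(f"Embodied share of whole life: {embodied_t_co2e / whole_life:.0%}")
```

Rerunning the same sketch with a faster decarbonization rate shrinks the operational term while leaving the embodied term untouched, which is exactly the effect described above.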
In the measurement of embodied carbon, Cradle is referenced as the earth or ground from which raw materials are extracted. The following provide boundaries to measure the embodied carbon and emissions of a building at different points in the construction and operating lifecycle.
Cradle to Gate - extraction, transportation, processing, manufacturing up to the factory gate.
Cradle to Site - adds transportation to the site for installation.
Cradle to Use - adds installation activities.
Cradle to Grave - adds use factors including maintenance, repair, replacements along with the end-of-life factors including deconstruction, transportation, waste processing, and disposal.
However, there is an additional boundary definition that views the holistic impact and benefit of design choices called Cradle to Cradle. Its scope considers the reuse, recovery, and recycling of the materials installed in the building, even the building itself and other activities beyond the lifecycle. Like the Energy Reuse Effectiveness (ERE) metric for operational energy, it assesses the circularity of the building and its components in terms of reuse or recycling. This is the Whole Life Carbon approach.
4. Measuring Embodied Carbon for a Whole Life Carbon approach
The most prevalent and accepted method to calculate the environmental impacts of buildings is EN 15978:2011 Sustainability of construction works - Assessment of environmental performance of buildings - Calculation method. This standard defines the methods to perform a Life Cycle Assessment (LCA). EN 15978:2011 is part of a suite of standards that assesses sustainability at the product component and building level.
Embodied carbon is included in Scope 3 of the GHG Protocol standards and a simplified description of these is:
Scope 1: Direct emissions from owned operations including onsite combustion and fugitive emissions of greenhouse gases
Scope 2: Indirect emissions from owned operations including emissions produced by the providers of purchased electrical energy and water
Scope 3: Indirect emissions from unowned upstream and downstream activities
Scope 3 is also referred to as Value-Chain emissions and could be significant depending on the breadth of operations and the lifecycle of products produced. Focusing on the data center facility, much of the Scope 3 emissions will be produced by upstream activities. These activities include the materials for construction but also include those for ongoing maintenance and replacement of the facility equipment.
Data Center Scope
A data center facility has unique aspects compared to traditional facilities. This is seen in the creation of addendums and exceptions by standards and codes organizations specifically for data centers because they do not fit into existing building types. In comparison to commercial buildings, their size and shape mimics warehouses whereas their MEP systems are similar to office buildings. However, there are significant differences. The limited number of publicly available data center case studies and reports creates challenges for owners, architects, and engineers to develop the practice of LCA.
That is not to say that performing an LCA on data center facilities is a challenge that cannot be overcome. If anything, it is a challenge that should be overcome. The knowledge and skills should be developed by the firms specializing in mission-critical facilities design. The best practices and contributions should be shared. The advantage of performing a full scope LCA is the ability to identify hot spots among the sources of impact. This will allow the team to focus on the areas of most benefit for reductions. There are also advantages to performing a partial LCA on a select boundary or system. This allows the comparison of two equivalent solutions with embodied energy included in the decision matrix. The ServerFarm Whole Building Life-Cycle Analysis Report⁵ illustrates the value that an LCA can bring to the design process.
Performing an LCA for a data center construction project requires data from trusted sources and an accurate model of the building to be able to calculate the embodied carbon and energy. The architectural, structural, and civil disciplines materials data will be available in generic form from a database. Multiple of these databases have been created and continue to be maintained, and appropriate selections should be made to ensure the data is accurate to the project location. Mechanical, electrical, and plumbing data will be available generically within an ICE (Inventory of Carbon and Energy) or specifically within the EPD or Environmental Product Declaration for a piece of equipment or component. LCA consultants are available and software packages have been created to streamline the assessment process.
Tools for Assessment
Performing an LCA requires an information database and methods for calculating and analyzing the results. The most widely referenced information database is the ICE database, researched and published by the University of Bath. Several academic, non-profit organizations, and government entities have also created and maintained databases. These datasets only cover the cradle to gate scope. Although no single database includes data for every geographical location, product, or situation, they are improving over time as more manufacturers and suppliers develop EPD’s in the standardized format.
A few examples of available tools include BIM360 from Autodesk, which allows the integration of data from the EC3 tool developed in partnership between the Carbon Leadership Forum⁶ and Building Transparency⁷. The EC3 tool is an EPD database and Building Transparency provides the Tally software to assist in the analysis and reporting of results. Other organizations with available tools are the Athena Sustainable Materials Institute⁸ and EDGE⁹ (Excellence in Design for Greater Efficiencies). These organizations are industry groups or academic partnerships. On the commercial side, there are companies such as OneClick LCA¹⁰ that provide full-service support for performing LCA.
The available databases, tools, and knowledgeable resources required to accurately perform an LCA are growing, but the undertaking of performing a full LCA requires a considerable amount of time and resources.
Each data center and design is unique, but research suggests that 10-20% of embodied carbon can be eliminated from construction projects with no increase in cost, and that embodied carbon accounts for 20-50% of the whole life energy and carbon of commercial buildings when operational energy is considered¹¹. Although this value is far lower for data centers, embodied carbon should be considered in conjunction with operational energy and water savings benefits that may be gained by design choices.
Less is more
Reducing embodied carbon in the MEP disciplines can be accomplished by reducing the equipment and materials in the systems. Maximizing equipment utilization and reducing complexity shrink the building area and equipment weight required, and in turn the embodied carbon.
Supporting an increased power density in the IT environment contributes to performing the same amount of “work” in a smaller area. The reduction in floor area required as well as the amount of IT equipment required affects the overall impact.
Life cycle assessment
Performing a lifecycle assessment on the data center design provides the opportunity to compare alternate design choices and allow for more informed decisions to be made. Choosing the design with the lowest carbon impact and shortest carbon payback period requires this measurement and analysis.
Including sustainability and embodied carbon reduction options in specifications directs designers and contractors to select materials with positive impacts. Recycled content and material substitution should be specified for reduced embodied carbon while achieving design requirements.
Equipment manufacturers play a role in achieving sustainability goals. Selecting manufacturers with published EPDs allows the LCA to be performed effectively. Product lifecycle and service life play a key role in the longevity of the subsystems. End-of-life activities, including recyclability, can have a positive or negative impact on the viability of the equipment.
Reusing existing structures, commercial buildings, and warehouses to retrofit and reuse to meet new data center requirements or upgrading legacy data centers provides a significant reduction in overall embodied carbon. Designing data center buildings that may be used for other purposes in the future also contributes to the reduced future impacts. The Preservation Leadership Forum sums up the advantages here, “when comparing buildings of equivalent size and function, building reuse almost always offers environmental savings over demolition and new construction.”¹²
Performing deliberate analysis and making design decisions using the Whole Life Carbon approach, considering both Embodied Carbon/Energy and Operational Carbon/Energy, provides the opportunity to contribute positively to the global goal to reduce greenhouse gas emissions. The opportune time to create this positive impact and reduce the embodied carbon emissions occurs during planning, design, and procurement or in other words, now.
Brevan’s experience starts in 2007 with the development of standard and maintenance operating procedures, raised floor area design, analysis of electrical usage data and equipment operating parameters.
Brevan’s experience also includes serving as the lead project engineer designing and implementing multiple data center infrastructure buildouts in different territories, including in Ashburn, Chicago, Dallas, London, Sydney, Hong Kong and Frankfurt. His work during the design phase included focusing on the customer’s requirements gathering and design integration into development of specifications and documentation.
Brevan has advised and mentored operations personnel in development and implementation of preventive maintenance programs on mission-critical data center systems and technical change management efforts to mitigate risk and business impact during construction and maintenance operations.
He holds a Bachelor of Science degree in electrical engineering, and a Bachelor of Arts in Business Administration from Trinity University.
He is a team member of the recently launched EYP Mission Critical Facilities, Part of Ramboll and I3 Solutions Group Sustainability Initiative to offer a practical roadmap towards a Carbon Net-Zero data center by 2030. | <urn:uuid:1a44a6dc-5751-4fdb-8059-fdacf7a75ade> | CC-MAIN-2022-40 | https://www.eypmcfinc.com/data-center-embodied-energy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00078.warc.gz | en | 0.940925 | 3,638 | 2.796875 | 3 |
What is ransomware?
Ransomware is a dangerous type of malware designed to block access to a victim’s computer system by encrypting critical files until a hefty sum of money demanded by the attacker is paid. The vast majority of ransomware is distributed via email.
Anatomy of a Ransomware Attack
While modern ransomware attacks range from simple “ransomware in a box” to customized malware designed specifically to evade detection, attacks generally follow the same four phases:
1. Cybercriminals can easily purchase ransomware on the Dark Web or quickly launch attacks using hosted ransomware services.
2. Threat actors launch attack campaigns, often masquerading as a known or trusted individual or organization.
3. Tricked into thinking that the email is from a trusted partner or colleague, the recipient unsuspectingly opens the malicious attachment it contains. Once opened, the ransomware is activated and payment is demanded.
4. When activated, the ransomware locks up the victim’s system until payment is made to the attackers in the form of difficult-to-trace Bitcoin. In many instances, victims never regain control of their systems - even once the specified ransom is paid.
The Guardian Digital Advantage
Protect your users, your key business assets and your reputation with a multi-layered email protection system
that keeps ransomware out of the inbox.
Secures the Inbox against Malicious Attachments & URLs
Cybercriminals are constantly evolving their tactics, leveraging techniques such as identity deception and domain spoofing to deceive even highly trained security professionals into downloading ransomware. Zero-day ransomware attacks, which are launched with no advanced warning, do not contain any recognizable digital signature and employ advanced tactics to evade traditional detection methods, are becoming increasingly prevalent.
Guardian Digital EnGarde Cloud Email Security’s auto-learn security system employs a combination of dynamic malicious URL and attachment protection, real-time behavioral analysis and drive-by download protection to defend against emerging ransomware attacks before they exploit unknown vulnerabilities to reach the inbox.
Delivers Complete Ransomware Protection by Closing Critical Gaps in Native Microsoft 365 & Google Workspace Email Security
Despite the existing email protection provided by Microsoft Exchange Online Protection (EOP) in Microsoft 365, 85% of users have experienced an email-borne cyberattack in the past year.
Guardian Digital EnGarde Cloud Email Security’s proactive, multi-layered protection closes critical gaps that exist in the static, single-layered email security defenses built into Microsoft 365 and Google Workspace, enabling businesses to reap the benefits of cloud email, while also enjoying the peace-of-mind that they are protected from ransomware and other costly, disruptive email attacks.
Extends IT Resources to Offer Superior Ransomware Protection
Many businesses - especially SMBs - experience a shortage of cybersecurity resources and expertise, leaving them unprepared to repel a ransomware attack.
Guardian Digital’s expert ongoing system monitoring, maintenance and accessible support provide a remote extension of your IT team, improving your email security posture and enhancing your team’s productivity with reliable, cost-efficient ransomware protection.
Phishing Is Evolving. Are Your Current Email Defenses Keeping Up?
Modern phishing scams have introduced a new level of risk for businesses. Attackers are targeting Microsoft 365 and Google Workspace users in increasingly sophisticated campaigns designed to evade built-in security defenses.
Email Risk in Microsoft 365 is Greater than Ever
What's your strategy for preventing loss of email communication and theft in Microsoft 365? Guardian Digital secures Microsoft 365 against the cost of credential phishing and account takeovers. | <urn:uuid:95cab52a-4919-413e-9cae-d05ef773a751> | CC-MAIN-2022-40 | https://guardiandigital.com/email-threat/ransomware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00078.warc.gz | en | 0.887581 | 761 | 2.6875 | 3 |
Have you updated Windows recently and experienced any new bugs? Odds are you aren’t alone, and at this point in time, it almost seems like bugs are part and parcel of using a PC these days.
Why so many bugs? It’s because software updates have to come out fast enough to stay one step ahead of hackers. This means old bugs are sometimes replaced with new ones as the cycle continues.
Some software bugs are more dangerous than others, and security flaws are usually the worst of the worst. That’s why the Department of Homeland Security is sounding the alarm on a recently discovered bug in Windows that can let hackers take over entire networks of computers in one go. Here’s what we know about it, as well as what you can do at home to protect yourself.
DHS warning: Update now!
The Cybersecurity and Infrastructure Security Agency, a wing of the Department of Homeland Security, has issued a warning to all federal departments and agencies to update their Windows computers immediately. The reason: A dangerous security flaw that gives hackers the keys to entire networks of PCs.
Yes, you read that right: Networks. This means if one computer is exploited, every other one it’s connected to can potentially fall victim.
The cause for concern is a bug known as Zerologon, which was identified and patched as a critical flaw back in August. Zerologon affects Netlogon, the Windows protocol that domain controllers use to authenticate computers and users across a network, including the systems businesses rely on for remote work and file access.
What makes Zerologon so dangerous, though, is the fact that hackers don’t even need to know a username or password to break in. They just need to know which systems have the flaw.
If a hacker is able to exploit the bug, they could easily install malware or steal files — including files sensitive to national security interests. For this reason, CISA has deemed any vulnerable or unpatched Windows computers to be an “unacceptable risk.”
Will this bug affect me? What can I do?
Right now, the government is a higher priority target for anyone looking to exploit the Zerologon flaw. But home users shouldn’t rest easy, either. Cybersecurity firm Secura, which discovered the flaw, reports that a hacker can exploit the vulnerability in less than three seconds. That’s one fast hack!
Should the exploit become widespread, it’s easy to imagine entire home networks falling victim in addition to business and government ones. But home networks have an additional vulnerability to worry about on top of Zerologon: all the unsecured IoT devices that are also connected.
Just like for government and business users, the easiest fix is to update Windows 10 to its most recent version. The Patch Tuesday update for September includes patch data from the last several updates, so you won’t have to worry if you’ve been lagging behind.
To get the update, turn your PC on and click the Start Menu. Next, select the Settings gear icon, followed by Update and Security.
If the patch is available, you’ll see it ready for you to download and install. If you don’t see anything, your computer may have already updated itself automatically. This happens if you have Automatic Updates enabled.
This recent update addresses several bugs beyond the Zerologon flaw, so it’s a good idea to install anyway. At the very least, you’ll be current until the next batch of updates arrives.
Still, we haven’t seen the last of Zerologon just yet. Due to its complexity, Microsoft has acknowledged that a second patch is in the works to completely stamp out the problem by early next year. We’ll be letting you know as soon as that’s available.
We recommend getting this patch on your PC as soon as possible. If national security depends on it, we’d say it’s safe for you to take the plunge, too. | <urn:uuid:36e39f5c-053f-4476-9345-be8bb9160a5e> | CC-MAIN-2022-40 | https://www.komando.com/security-privacy/critical-windows-bug/755106/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00078.warc.gz | en | 0.945249 | 863 | 2.53125 | 3 |
The COVID-19 pandemic has had a widespread impact on policies and practices across both government and industry. Small businesses and large agencies alike have had a few weeks now to experiment with their mass telework and operations plans, and are beginning to understanding how secure their connections are, what their basic cyber hygiene is and what they need to minimize further disruption to their missions.
COVID-19 has not stopped malicious actors. In fact, it is typically at times of strife and uncertainty, like now, that cyber agents and hackers start targeting systems which are not set up for proper and effective cyber protection. Cybersecurity is perhaps even more critical than it was before, as opportunistic attackers target healthcare networks and remote connections. Malicious actors are taking advantage of the situation to exploit insecure virtual private network (VPN) connections and other poorly configured remote security controls.
So, what’s the next step? How can you assess and validate your network in order to ward off malicious actors?
Organizations need to be able to scale their maturity along with the growth of their business and their network environment. A good first step toward this goal is validation through adversary emulation. For smaller applications, this means penetration testing.
A penetration test, also known as a “pen test,” is a simulated cyberattack against your computer system to check for exploitable vulnerabilities. In the context of web application security, penetration testing is commonly used to augment a web application firewall (WAF). Pen testing can involve the attempted breaching of any number of application systems, application programming interfaces (APIs) and frontend/backend servers to uncover vulnerabilities, such as unsanitized inputs that are susceptible to code injection attacks. Insights provided by the penetration test can be used to fine-tune your WAF security policies and patch detected vulnerabilities.
Penetration testing is typically performed using manual or automated technologies to systematically compromise servers, endpoints, web applications, wireless networks, network devices, mobile devices and other potential points of exposure. Once vulnerabilities have been successfully exploited on a particular system, testers may attempt to use the compromised system to launch subsequent exploits at other internal resources – specifically by trying to incrementally achieve higher levels of security clearance and deeper access to electronic assets and information via privilege escalation.
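To give a flavor of what the automated portion of this work looks like, the sketch below performs the most basic reconnaissance step: checking which common TCP ports respond on a target host. The address used is a placeholder from the documentation range; real engagements rely on purpose-built tooling and, critically, written authorization from the system owner.

```python
import socket

# Toy reconnaissance step: check which common ports answer on a host.
# Only run this against systems you own or are explicitly authorized to test.
TARGET = "192.0.2.10"          # placeholder address from the documentation range
COMMON_PORTS = [22, 80, 443, 445, 3389]

for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the TCP handshake succeeds (port open).
        state = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        print(f"{TARGET}:{port} {state}")
```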
Information about any security vulnerabilities successfully exploited through penetration testing is typically aggregated and presented to you and your network system managers to help you make strategic conclusions and prioritize related remediation efforts. The fundamental purpose of penetration testing is to measure the feasibility of systems or end-user compromise and evaluate any related consequences such incidents may have on the involved resources or operations.
Adjusting to this new global paradigm can be a challenge. We get it, and we are here to provide the tools, tips, and information you need to help you and your team meet that challenge and ensure your systems are validated and prepped for attempted intrusions. You don’t have to do everything all at once, but developing a relationship with a provider now will help you make regular improvements to your environment over time and prepare it to scale as your company grows.
Call to schedule your penetration test with us today and take advantage of a complimentary rescan after remediation of the initial test findings.
Our team of experts can and will work with the personnel at your organization to measure the effectiveness of your cyber defense and pen test your environment, so you know where the weaknesses are and how to mitigate them. | <urn:uuid:295ab0eb-750f-4436-9901-535c2619b518> | CC-MAIN-2022-40 | https://ardalyst.com/in-uncertain-times-pen-testing-supports-your-companys-growth/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00278.warc.gz | en | 0.953642 | 698 | 2.625 | 3 |
In 2008, researchers were touting the defeat of the Storm worm — the most notorious example of the malware category that self-propagates through remote exploits, email, network shares, removable drives, file-sharing or instant messaging applications. Some would argue that despite the name, Storm was not a traditional worm at all, since it required a human-computer interaction to spread.
Last year saw only two worms in the 20 most prevalent malware families, with trojans becoming the outright leader of malware categories, making up 46 percent of all malware, versus worms at 14 percent, according to the IBM ISS X-Force 2008 Trend & Risk Report. It seemed that the worm’s impact, although still present, was relegated to causing smaller-scale hindrances versus its previously significant role in causing larger organizational disruptions — a role that for much of 2008 seemed to be taken on by trojans and newer Web site attacks such as SQL injection and clickjacking.
Fast forward to 2009, however, and the worm has made a clear statement that it is back, smarter, more mature, and not going away any time soon. The Waledac worm, virtually identical to the Storm worm and likely formed by the same controllers, has found new ways to avoid detection and recently took over Storm’s annual February ritual of sending malicious Valentine’s Day e-mail greetings to grow a botnet. Even more significant, the new Conficker worm has spread to unprecedented levels, estimated to have infected anywhere between 2 million and 10 million computers, with no clear discovery yet of how these computers’ resources may be used by cybercriminals.
With Microsoft offering a US$250,000 bounty to find the makers of Conficker, and with the German military, British and French Air Forces, and entire hospitals infected, Conficker poses a severe threat to vital organizational functions. The best way to understand how to protect against this advanced worm — and the inevitable worms that will be developed based off its “success” — is to understand how Conficker has spread thus far.
Something Old …
In October 2008, Microsoft announced an out-of-cycle patch for a vulnerability (CVE-2008-4250) in its commonly deployed Server Service software that should have raised eyebrows in IT departments. For one, the vulnerability did not require any user interaction to be exploited, meaning that any unpatched computer running the software would be infected if exposed to an attack. Second, another worm, Gimmiv, had already been exploiting the vulnerability in limited capacity before Microsoft’s announcement. Add in the fact that similar vulnerabilities in the past had led to rapidly propagating worm outbreaks (e.g. 2003’s Blaster worm and 2006’s Sdbot), and it is no surprise that the Server Service vulnerability received the highest rankings on both the Microsoft Exploitability Index and the Common Vulnerability Scoring System (CVSS).
Even with this clear warning, Conficker was able to quietly spread in the months following this announcement because nearly a third of enterprises did not patch their systems with the Microsoft update. This outcome offers two lessons:
- Organizations must put measures in place outside of their normal protocol so that they can quickly update their systems as Microsoft issues highly critical patches.
- Because patching on a macro scale is nearly impossible, organizations must implement other proactive measures to protect themselves before these vulnerabilities are even announced.
These measures include strict firewall policies and the implementation of intrusion prevention systems (IPS) to recognize and block the primary infection vector of this malware before it even enters a company’s network and infects its computer systems.
Something New …
Despite its peak size, Conficker’s initial growth was very slow compared to past worms targeting similar vulnerabilities. The Blaster worm, for example, peaked within 8 hours, while Conficker didn’t start dominating headlines until this January. While it would be easy to say that the initial slow spread of Conficker is a clear demonstration of the innovative next-generation security technology that has been developed over the past five years and the successful use of best practices at a majority of enterprises, the reality is that Conficker had a few other tricks up its sleeves.
Recognizing that a worm’s rapid propagation activity could speed discovery and cause organizations to invoke radical measures to stop it in its tracks, the designers of Conficker used clever and complex algorithms to make sure that its scanning and infection activity did not raise alarms. The worm went as far as detecting the bandwidth available to its victims and adjusting its propagation rate to stay “under the radar” of many intrusion detection and behavioral anomaly systems.
Something Borrowed …
Like other multi-headed threats from the past, such as Code Red, Conficker did not stop at exploiting systems through their critical software vulnerabilities. The creators of Conficker also built in alternative attack vectors that would allow the worm to grow where other worms might die off. These creative additions include a forceful password-breaking capability that targets servers near infected computers; an ability to spread to any shared networks or hard drives; and the power to copy itself to any device inserted into a USB port, whether it is a flash drive, MP3 player or digital camera.
For organizations to defend against these secondary vector attacks, it is important that they consistently update antivirus (now that most security software has the ability to identify Conficker) so that infected computers can be identified and isolated from access to shared networks or connected USB devices until they are cleaned. Organizations should also evaluate their access management policies to determine who has access to any shared networks where the Conficker worm could be introduced outside of company computers.
Additionally, organizations should strongly consider strengthening their enterprise-wide password policies, ensuring that passwords are complex; use a variety of numbers, letters and symbols; avoid dictionary words in any language; and do not repeat themselves across systems or contain variants of another password.
What Will It Do?
For all the talk of the Conficker worm, researchers still have not witnessed any malicious attacks on its behalf, and that raises the question of what its organizers plan to do with the millions of computers they’ve taken over. There is a wide variety of attack possibilities when the collective power of these computers is tapped, including the delivery of millions of spam emails, denial-of-service attacks against organizations or governments via a botnet, or the theft of corporate data and cyber extortion. Some researchers say that Conficker has a design flaw and cannot execute these attacks; however, new variants of Conficker, such as Conficker B++, have emerged with the capability to download software that makes the worm more capable of controlling its infected machines.
Will Conficker’s attacks come about before it is eradicated? If they do, will the same next-generation security technologies that slowed Conficker’s initial growth be able to protect your organization if it becomes a direct or collateral target? While it’s impossible to predict with certainty, it is likely that any Conficker attack activity will include denial-of-service attacks and additional propagation attempts. By implementing advanced firewalls and IPS, organizations can reduce the likelihood of additional computers being compromised, protect against denial-of-service attacks, maintain network uptime and avoid potentially costly disruptions to their business.
Thus far in 2009, we’ve seen that the worm has found a new way to survive, like many forms of malware do, by marrying a new twist to an old idea. Hackers will always be creating new ways to exploit companies’ systems, and therefore, the best protection is not only to quickly react to patches as with Microsoft’s October Server Service vulnerability, but also to be proactive with a defense-in-depth strategy that includes a variety of technologies such as IPS, firewalls and antivirus combined with the proliferation and diligent application of user education. The Conficker worm and the new worms of the future will thrive on those organizations that do not follow these practices.
Mike Paquette is chief strategy officer at Top Layer Security, a provider of intrusion prevention systems. | <urn:uuid:f4af1af8-c6ab-4798-812a-11889a12163b> | CC-MAIN-2022-40 | https://www.crmbuyer.com/story/the-worm-returns-protecting-yourself-from-conficker-66403.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00278.warc.gz | en | 0.953872 | 1,659 | 2.703125 | 3 |
Recent information released by Gartner revealed that there could be over one million drones in the sky over the next 6 years. Throughout the last several months of the COVID-19 pandemic, drones have been used more and more in order to limit human-to-human contact while maintaining the ability to deliver supplies and medications all over the world. Drones are also proven to be more cost effective and can travel at a greater speed than on-ground deliveries.
Drones have played a part in the response to the coronavirus pandemic and this could speed their longer-term adoption in a wider range of areas. City leaders have a key role to play in adoption and deployment, Pedro Pacheco, Senior Director Analyst, Gartner, told Cities Today.
During the COVID-19 crisis, drones have been used to deliver medication and test samples in remote locations in Ghana, Rwanda, Chile and Scotland. From today, drones will deliver personal protective equipment and supplies to frontline teams in Charlotte, North Carolina, after the Federal Aviation Administration (FAA) granted a waiver to not-for-profit Novant Health. The initiative is part of the North Carolina Department of Transportation’s (NCDOT’s) Unmanned Aircraft System Integration Pilot Program (IPP). Unmanned aerial vehicles have also been used in several cities around the world to monitor compliance with virus-related safety measures as well as to spray disinfectant in India and China.
These uses could demonstrate how drones can enable faster transportation of goods and how, along with other robot deliveries, they could disrupt transport and mobility beyond COVID-19, Pacheco said.
Last year, DHL launched drone operations to tackle last-mile delivery challenges in urban areas of China. DHL claims the service reduces delivery time from 40 to eight minutes for an eight-kilometer distance and can save costs of up to 80 percent per delivery, with reduced energy consumption and a lower carbon footprint compared with road transportation.
“Autonomous drones offer lower cost per mile and higher speed than vans in last-mile deliveries,” said Pacheco. “When they deliver parcels, their operational costs are at least 70 percent lower than a van delivery service.” The estimates are based on several studies, assume a level of scale and include a safety co-efficient to make the figures more conservative, he said.
With a number of emerging applications for drones in cities, there are several issues for city planners and officials to consider. Pacheco notes that regulation remains one of the main roadblocks to the adoption of drone technology.
“In the US and China there have been fast-track approvals to use drones for COVID-19-related purposes,” he commented. “Even if these work on a regime of exception, they do open the door for a lot more in the future. This is an opportunity to show regulators, organizations and even citizens that drones, including delivery drones, are a very useful solution for several critical missions, which can only accelerate future adoption.”
Cities also need to address the privacy issues related to drones. A Paris court recently suspended the use of drone surveillance to monitor compliance with COVID-19 measures, citing privacy concerns. The Westport Police Department in Connecticut also dropped plans to pilot drones to enforce social distancing and detect COVID-19 symptoms following concerns from citizens and civil liberties groups.
Privacy and no-fly zones “should be captured by cities or governments centrally and enforced onto drone operators,” Pacheco commented.
Cities also have a role to play in security and making sure drones and their cargo are not victims of vandalism or theft, and officials will need to consider making space available for drone package pick-up and drop-off points, Pacheco added.
(Source: The Next Web) | <urn:uuid:95c32e3a-5e16-44cb-9917-e49c1ba0cd82> | CC-MAIN-2022-40 | https://domainnewsafrica.com/million-plus-delivery-drones-expected-to-fly-by-2026/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00478.warc.gz | en | 0.940892 | 792 | 2.875 | 3 |
How does DMARC work?
DMARC (Domain-based Message Authentication, Reporting and Conformance) relies on the established SPF and DKIM standards for email authentication. It also piggybacks on the well-established DNS (Domain Name System).
In general terms, the process of DMARC validation works like this:
- A domain administrator publishes the policy defining its email authentication practices and how receiving mail servers should handle mail that violates this policy. This DMARC policy is listed as part of the domain’s overall DNS records.
- When an inbound mail server receives an incoming email, it uses DNS to look up the DMARC policy for the domain contained in the message’s From (RFC 5322) header. The inbound server then evaluates the message for three key factors:
- Does the message’s DKIM signature validate?
- Or, did the message come from IP addresses allowed by the sending domain’s SPF records?
- And, do the headers in the message show proper “domain alignment”?
- With this information, the server is ready to apply the sending domain’s DMARC policy to decide whether to accept, reject, or otherwise flag the email message.
- After using DMARC policy to determine the proper disposition for the message, the receiving mail server will report the outcome to the sending domain owner via the aggregate and failure reporting addresses (the rua and ruf URIs) published in the domain’s DMARC record. A minimal lookup sketch follows below.
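Because the policy is simply a TXT record published at _dmarc.<domain>, it is easy to inspect. The sketch below uses the third-party dnspython package to fetch and parse a DMARC policy; the domain is an example and the tags returned will vary from sender to sender.

```python
import dns.resolver  # third-party package: dnspython

def dmarc_policy(domain: str) -> dict:
    """Fetch and parse the DMARC record published at _dmarc.<domain>."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            # e.g. "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
            return dict(
                tag.strip().split("=", 1)
                for tag in record.split(";")
                if "=" in tag
            )
    raise LookupError(f"No DMARC record found for {domain}")

print(dmarc_policy("example.com"))  # e.g. {'v': 'DMARC1', 'p': 'none', ...}
```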
Passwords are the first defense to protecting private information. It is important to implement strong passwords to make it difficult for outsiders to breach that wall and gain access to sensitive information. Having a weak password leaves not only the user at risk, but when dealing with a company managing any private data, the company is susceptible to hacking which can cost thousands of dollars or even millions of dollars. Over 83% of Americans have weak passwords and 53% of Americans use the same passwords for multiple accounts.
Here we discuss the common mistakes used when creating a password and provide some tips on creating a strong password.
Most Common Mistakes in Passwords
Many people include personal information that can be easily found on social media as passwords. In 2019, the UK’s National Cyber Security Center revealed that the top 3 most hacked passwords were “123456”, “123456789”, and “qwerty.”
Below is a list of common mistakes used in weak passwords:
- 16% of Americans use their name or a family members name
- 15% use a pet’s name
- 11% of people use their birthday
- 8% use words related to a hobby of theirs
- 5% use part of their address
- 4% of Americans use the name of their favorite book or movie
- 3% use celebrity names
- 3% use the name of the website the password is for
How to Tighten up Security Through Password Management
- Follow a password policy. More complex passwords should be used especially by privileged users and executives. Frequent password changes and the use of passphrases should be implemented. This will make it hard for hackers to sell password and username lists to other people who wish to breach your data.
- Use passphrases instead of passwords. Unlike a password which is only one word, a passphrase is a string of words used to gain access to a system. It is commonly known to be much harder to guess and with password cracking algorithms being less effective after 10 characters, passphrases tend to be more secure. Passphrases can contain complex rules, they are sensitive to punctuation, symbols, numbers and capitalization. Major operating systems such as MAC OS, Windows, and Linux allow passphrases to be up to 127 characters long. Passphrases really do give a user the ability to make their password very difficult to break.
- Implement your password policies into your systems. To help users remember passwords, a secure, encrypted password manager designed for business may be used. The system administrator should implement rules requiring passwords to be changed every 90 days and passphrases every 180 days. Password managers are also great to keep track of passwords to ensure they are not reused. Here are some more additional steps that can be taken to improve password security:
- Configure the minimum character length for passwords to be 10, and 15 for passphrases.
- Enable complexity requirements for both passwords and passphrases.
- Reset admin passwords every 180 days.
- Use strong admin passphrases for all domain admin accounts.
Hackers have found many methods to breach your data. You should take the utmost precaution to enhance your security. In another blog post, we talked about the advantages of using single sign-on to help consolidate password management for your organization. Implementing a strong password management process is one of the first steps you can take to practice good cybersecurity. Take the extra time to set up complex passwords and passphrases and implement a strong password policy on your systems. These steps are also aligned with the National Institute of Standards and Technology (NIST) SP 800-63-3 guidelines for passwords. | <urn:uuid:9a8b5764-46cd-4729-b418-ecbcf55e303b> | CC-MAIN-2022-40 | https://blog.24by7security.com/foresight-2020-passphrases | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00478.warc.gz | en | 0.927431 | 769 | 3.609375 | 4 |
Imagine this – you visit your local art museum for the first time in over a year. You’re excited to be back in the physical building! You get to be in the same physical space as the art! You make your way to one of your favorite art pieces in the museum, but when you finally arrive, you find something odd. Next to your favorite art piece is a small camera pointing at you and everyone else viewing your favorite art piece.
Is this to make sure people are wearing masks? Social distancing? Or is it something more?
Museum-goers in Italy are already facing this reality with the inclusion of the ShareArt system in several Italian museums. The system aims to track how long museum visitors spend at the museum piece, creating data to inform exhibition layout and scheduling decisions. In addition, there is interest in having the system capture and analyze facial expressions as mask mandates fall to the wayside. While this project aims to guide museums in making their collections more visible and accessible for museum visitors, it also brings up new and perennial concerns around privacy.
Tracking Bodies, Tracking Data
Libraries and museums are no strangers to counting the number of people who come into a building or attend an event. Door counters installed on entrance/exit gates are a common sight in many places, as well as the occasional staff with a clicker manually counting heads in one space at a specific time. The data produced by a door counter or a manual clicker counts heads or people in an area usually is relegated to the count and the time of collection. This data can get very granular – for instance, a door counter can measure how many people enter the building in the span of an hour, or a staff person can count how many people are in a space at regular intervals in a day. This type of data collection, if nothing else is collected alongside the count and time collected, is considered a lower risk in terms of data privacy. Aggregating count data can also protect privacy if the door or event count data is combined with other data sets that share data points such as time or location.
Patron privacy risk exponentially increases when you introduce cameras or other methods of collecting personal data in door or space counts. Door or space counters with webcams or other cameras capture a person’s distinct physical traits, such as body shape and face. This updated door counter mechanism is a little different than a security camera – it captures an individual patron’s movements in the library space. With this capture comes the legal gray area of if audio/visual recordings of patron use of the library is protected data under individual state library privacy laws, which then creates additional privacy risks to patrons.
Performing for an Audience
One good point on Twitter about the ShareArt implementation is that people change their behavior when they know they are being watched. This isn’t a new observation – various fields grapple with how the act of being observed changes behavior, from panopticon metaphors to the Hawthorn Effect. If a project is supposed to collect data on user behavior in a specific space, the visible act of measurement can influence the behavioral data being collected. And if the act of measurement affected the collected data, how effective will the data be in meeting the business case of using behavioral data to improve physical spaces?
Libraries know that the act of surveilling patron use of library resources can impact the use of resources, including curtailing intellectual activities in the library. Privacy lowers the risk of consequences that might result from people knowing a patron’s intellectual pursuits at the library, such as checking out materials around specific topics around health, sexuality, politics, or beliefs. Suppose patrons know or suspect that their library use is tracked and shared with others. In that case, patrons will most likely start self-censoring their intellectual pursuits at the library.
The desire to optimize the layout of the physical library space for patron use is not new. There are several less privacy-invasive ways already in use by the average library to count how many people move through or are in a particular space, such as the humble handheld tally clicker or the infrared beam door counter sensors. Advancements in people counting and tracking technology, such as ShareArt, boast a more accurate count than their less invasive counterparts but underplay potential privacy risks with the increased collection of personal data. We come back to the first stage of the data lifecycle – why are we collecting the data we are collecting? What is the actual, demonstrated business need to track smartphone wifi signals, record and store camera footage, or even use thermal imaging to count how many people enter or use a physical space at a particular time? We might find that the privacy costs outweigh the potentially flawed personal data being collected using these more invasive physical tracking methods in the name of serving the patron. | <urn:uuid:f19f7021-7fb5-4331-a55c-7174bcb16056> | CC-MAIN-2022-40 | https://ldhconsultingservices.com/2021/07/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00478.warc.gz | en | 0.940723 | 977 | 2.625 | 3 |
There are many ways to authenticate without a password. Software-based approaches typically utilize a mobile app on a smartphone alongside a biometric authenticator such as iOS Touch ID. Hardware-based methods include smart cards and physical security tokens. Native or built-in authenticators such as Windows Hello may also be classified as hardware-based approaches as they require specialized devices and biometric sensors.
Practitioners have many different options to implement a passwordless experience, each focused on reducing hard and soft costs, as well as to improve productivity of both users and IT operations teams. Below, we analyze the pros and cons of several technology options for buyers to hopefully simplify some of the decisions in the shift to passwordless authentication.
FIDO2 Mobile and Web Login
This category includes the use of smartphones for accessing mobile, web and desktop applications. Specifically, they cover the use of FIDO authentication when made possible with a mobile/web app or SDK.
Most common is the use of mobile authenticators such as Apple Face ID and the many different biometric authenticators available on smartphones today across iOS and Android.
It may also include the use of built-in biometrics, also known as "platform authenticators" such as Windows Hello and MacOS Touch ID.
It's important to note that simply utilizing the biometric authenticator on a device is not sufficient for going passwordless. Smartphones typically link a biometric to a user password but do not eliminate the actual password. That's why organizations adopt passwordless software solutions in order to bridge the gap between a person and the login experience. These solutions combine the use of a biometric smartphone with public-key encryption and open standards such as FIDO2.
FIDO-enabled services work by asking users to select one of several available authenticators, to then generate a public/private key pair. The public key is shared with the desired service, which then sends an encrypted challenge. The user digitally signs that challenge and verifies their identity. This methodology is often referred to as "True Passwordless" — where the user experience is familiar, but the underlying architecture is very different.
Organizations looking to migrate to a passwordless user experience should seek a partner that provides true passwordless multi-factor authentication, and can secure mobile, web, and desktop. HYPR is an example of a passwordless platform that is focused on this level of cross-platform functionality.
What are some advantages of using a FIDO2 mobile and web platform?
- Rapid time to value by focusing on devices your users already possess (e.g. smartphones, laptops)
- Excellent commercial vendor options available for buyers
- Provide integration of common passwordless methods including FIDO2 and biometrics
- Provide SDK support to enable extensibility for local systems and applications
- Interoperable with authentication plugins from commercial identity vendors
- Enable expert support partnership for enterprise authentication teams
If you're unsure if you're really truly definitely passwordless - Check our guide Am I Using True Passwordless MFA?
See a demo of a mobile-to-web login powered by a FIDO2 authentication:
FIDO2 Security Tokens
A popular passwordless method is the use of a hardware security token. The general idea is the same as using a smartphone only the client authenticator is a key fob, USB token, or a hardware dongle. YubiKey is a good example of a FIDO2 passwordless experience.
The underlying flow is very similar to other FIDO authentications, relying on a PKI handshake where clients provide a locally generated public key to a server as the basis for a subsequent authentication challenge. In some high-assurance environments such as public sector or financial services there may be a requirement to use a hardware token to generate a private key for this PKI handshake.
An organization would typically select and deploy their preferred FIDO hardware token to their users. However, the interoperability of the standard means that most users can utilize any FIDO-Certified token for this authentication. This is a key benefit of FIDO2 tokens. Unfortunately mass adoption has been slow, with enterprises citing additional hardware costs, logistics of distributing tokens, and user experience concerns as key factors.
For maximum control and visibility an organization might choose to deploy a platform approach to manage, provision, and deploy FIDO2 security tokens. By leveraging an authentication platform such as HYPR, organizations can leverage smartphones and security tokens as interoperable authenticators for their passwordless initiatives.
What are the advantages of using FIDO2 security token for passwordless authentication?
- Hardware-based security keys achieve a higher level of assurance, especially when used in conjunction with MFA
- Defined by open specifications that encourage use of strong authentication
- Open Standards managed by a large community of FIDO consortium vendor participants such as Microsoft, Google, HYPR, and others
- Reliably evolving from FIDO to FIDO2 as computing technology progresses
- Security protection is built on a mature public key technology foundation
See a demo of YubiKey, a FIDO2 token, for desktop login:
Windows Hello Passwordless
Microsoft has created a biometric authentication method for Windows 10 called Windows Hello that allows users to validate their identity with a biometric sensor – thus removing the need for a login password.
Windows Hello is Microsoft's paswordless login method for personal use and comes with most Windows 10 machines.
Windows Hello for Business (WHfB) is the edition used for enterprise access.
Both versions support multiple biometric modalities such as fingerprint, facial, iris, or retina scan.
Setting up Windows Hello is straightforward and usually happens upon a workstation's initial boot. Users set up the process in the sign-in options under their account settings by registering their biometric. Once done, they can access Microsoft accounts and applications without having to enter a password.
When enabled, Windows Hello offers the user multiple options at the login screen:
A key feature of Windows Hello is the use of cutting-edge biometric sensors such as Intel RealSense cameras. The technology works via 3D structured light alongside anti-spoofing methods to reduce the likelihood of impersonation. The system works for users with Microsoft accounts as well as other services (non-Microsoft) that support FIDO authentication.
As one would expect, Windows Hello was designed for both enterprise and consumer use – and it appears to be popular in both environments. Microsoft recently stated that more than 150 million people were using passwordless authentication on Windows each month.
Enterprises often use Windows Hello alongside passwordless authenticators such as a mobile app and security token. Windows Hello provides an excellent choice as a primary or secondary authenticator, especially in cases when users lose or forget their smartphone.
What are some advantages of Windows Hello for Passwordless Authentication?
- Integrated into the Microsoft Windows 10 operating system
- Designed to support FIDO specifications for open authentication
- Supports use of biometric validation with facial, iris, and fingerprint recognition
- Dramatically reduces need for password to access operating system and applications
- Includes anti-spoofing methods to improve accuracy and avoid fraud
Is Windows Hello multi-factor authentication (MFA)?
This question is often asked by businesses as it remains unclear if the use of Windows Hello is inherently "multi-factor." While it offers a strong passwordless authentication, the Windows Hello authentication does take place locally on one device. Enterprise security teams have hesitated to refer to this as a traditional MFA as it does not require an additional device or an out-of-band authentication. For this reason organizations often use Windows Hello as a component of their authentication strategy, but rarely is it the sole primary authenticator.
A smart card offers credit-card sized convenience for electronic identification, authentication, and authorization to resources. They are sometimes contactless and can include functions such as data storage and application processing. The most common use of smart cards is for personal identification, national population identification, financial services transaction enablement, and even mobile device processing (SIM cards are a form of smart card).
Smart cards have been used for many years to enforce additional step-up or strong authentication. They are particularly popular in the public sector. They are common in federal agencies, and in high-assurance environments.
Smart cards work via an embedded integrated circuit that operates like a small computer with a microprocessor and memory. Most people think of smart cards in the context of their associated readers through with direct physical contact is made. Many smart cards work through a remote interface with contactless operation via radio signaling. Such design greatly extends the potential use case options for smart cards.
What are some advantages of Smart Cards for Passwordless Authentication ?
- Offers credit-card sized convenience for users
- Well-suited for personal identification systems including national databases
- Extensible to support adjunct storage and processing (such as in a SIM card)
- Includes near field contactless operation through radio interface
- Supports many familiar use cases for business and consumer users
How do User Authentication Methods Compare?
2FA, MFA, Soft Tokens, Hard Tokens, Passwordless, Smart Cards – your users have more ways to log in than ever before. With so many authentication methods available, how do various approaches compare to one another? See how HYPR stacks up against alternative passwordless login methods.
How are Authentication Methods Attacked?
Find out how user authentication methods vary in terms of security level. The NIST Authentication Attack Matrix heat-maps known security threats against various modalities. It is offered to arm executives and security practitioners with added knowledge on which to base critical authentication sourcing decisions. | <urn:uuid:af48237f-8a2f-4003-859b-0fdc7c4dafb1> | CC-MAIN-2022-40 | https://www.hypr.com/passwordless-security-guide/passwordless-login-methods | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00478.warc.gz | en | 0.920139 | 1,999 | 2.609375 | 3 |
Most everything we use contains batteries of some kind, whether it’s alkaline, nickel metal hydride (NMH), or Lithium Ion, but some are more dangerous than others. There are car batteries and batteries for toys, but the most dangerous of all are the ones found in our electronics. Smartphones, laptops and tablets typically use Lithium Ion batteries, which are known to cause fires when damaged.
Lithium Ion batteries have been referred to as “mini bombs,” especially when they’re carried around in smartphones. LIBs are sensitive to high temperatures and highly flammable, and though most are safe unless they’re damaged, accidents are always a possibility. According to IonEnergy, “Lithium-ion cells like all chemistries undergo self-discharge. Elevated self discharge can cause temperatures to rise which if uncontrolled can lead to a Thermal Runaway also known as ‘venting with flame.’ If, however, due to some damage to the cell, impurities penetrate into the cell, a major electrical short can develop. This can cause a sizable current to flow between the positive and negative plates. During a thermal runaway, the heat generated by a failed cell can move to the next cell, causing it to become thermally unstable as well.”
Many factors can lead to impurities penetrating the cell such as manufacturing defects, design flaws, and abnormal/improper usage, however, one of the most common factors is damage to the battery. Think of how many times we drop our smartphones, even if it’s just a few feet off the bed. How many times has it bounced from the bed into the wall or bedside table? Now imagine a small explosion as a result. Though most battery fires occur in materials recovery facilities (MRF), LIBs can be extremely dangerous anywhere when damaged, especially when not disposed of correctly.
There are several methods of electronic waste disposal that include batteries, the safest being recycling via a professional disposition service. IT asset disposition enterprises like HOBI International collect old electronics and ensure that they are safely, and properly disposed of. Other methods like landfills and incineration can be more harmful to the environment, as well as the surrounding residents if the batteries’ chemicals leak into the soil or contaminate the air. For in-house professional battery removal there are precautions and safety measures to take to prevent fires. Recycling prevents any harmful chemicals from entering the atmosphere, and mitigates potential fires in landfills. | <urn:uuid:6f2119f4-8115-4f45-aad2-532049173c83> | CC-MAIN-2022-40 | https://hobi.com/dangers-of-libs-mitigating-potential-fire-risk-via-battery-recycling/dangers-of-libs-mitigating-potential-fire-risk-via-battery-recycling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00478.warc.gz | en | 0.948043 | 518 | 3.546875 | 4 |
The past decade has seen a burst of algorithms and applications in machine learning especially deep learning. Behind the burst of these deep learning algorithms and applications are a wide variety of deep learning tools and frameworks.
They are the scaffolding of the machine learning revolution: the widespread adoption of deep learning frameworks like TensorFlow and PyTorch enabled many ML practitioners to more easily assemble models using well-suited domain-specific languages and a rich collection of building blocks.
Looking back at the evolution of deep learning frameworks we can clearly see a tightly coupled relationship between deep learning frameworks and deep learning algorithms. These virtuous cycle of interdependency propels a rapid development of deep learning frameworks and tools into the future.
Stone Age (early 2000s)
The concept of neural networks have been around for a while. Before the early 2000s, there were a handful of tools that can be used to describe and develop neural networks. These tools include MATLAB, OpenNN, and Torch etc. They are either not tailored specifically for neural network model development or having complex user APIs and lack of GPU support. During this time, ML practitioners had to do a lot of heavy lifting when using these primitive deep learning frameworks.
Bronze Age (~2012)
In 2012, Alex Krizhevsky et al. from the University of Toronto proposed a deep neural network architecture later known as AlexNet that achieved the state-of-the-art accuracy on ImageNet dataset and outperformed the second-place contestant by a large margin. This outstanding result sparked the excitement in deep neural networks and since then various deep neural network models kept setting higher and higher record in the accuracy of ImageNet dataset.
Around this time, some early days deep learning frameworks such as Caffe, Chainer and Theano came into being. Using these frameworks, users could conveniently built complex deep neural network models such as CNN, RNN, and LSTM etc. In addition, multi-GPU training was supported in these frameworks which significantly reduced the time to train these models and enabled training large models that were not able to fit into a single GPU memory earlier. Among these frameworks, Caffe and Theano used a declarative programming style while Chainer adopted the imperative programming style. These two distinct programming styles also set two different development paths for the deep learning frameworks that were yet to come. | <urn:uuid:c7e2cad0-e050-4990-91f7-8862fee6c512> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/12/16/deep-learning-frameworks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00478.warc.gz | en | 0.952365 | 481 | 3.15625 | 3 |
Few industries are going through such extraordinary growth as technology, and this can be a fantastic area to work in for a vast number of different reasons. Technology is changing and shaping the world around us and is being used in all kinds of creative, valuable, and innovative ways. There are many excellent career options for those that have an interest in technology. So, if you feel that this is an industry that is right for you and you are looking for a rewarding career path, then read on to discover the main benefits of a career in technology.
- Many Different Areas to Consider
As mentioned, there are many different uses of technology in today’s day and age and a wide number of areas that you could consider for a long and rewarding career. A few good options include:
- Software development
- Web design
- Data analysis
- Job Opportunities
As an industry that continues to evolve at a rapid date, you will also find that there are many job opportunities in the tech industry, and this will only continue as it plays a more significant and more critical role in many areas of life. When there are lots of job opportunities, it means that it should not be too challenging to find work, plus you will benefit from job security, which can be a concern in some industries. Not only this, but this also means that often you can find work anywhere around the world, which can provide flexibility.
- High Salaries
As an industry that is changing the world and so important to many areas of life, you will find that the tech roleis in demand and can be very well paid when you begin to climb the ladder. Those that want to command a high salary will undoubtedly have the opportunity to do this with a career in tech if you are willing to put the work in and climb the ladder.
- Improve the World
Technology has many uses and can be used to improve the world in many ways. It can provide great job satisfaction when you know that you are doing good in the world, whether this is helping a business to be successful, fighting the battle against cybercrime, or creating essential new products for the healthcare industry.
- Be Creative
In many cases, a career in technology also allows you to be creative and to think outside the box while also using technical skills or knowledge. There are not many careers that allow you to do both, which allow many people in this industry to enjoy their work, feel challenged, and to be able to express themselves.
- Solve Challenges
Following on from this, you will find that when you work in technology, you will always be finding ways to solve specific challenges. Solving challenges is an excellent way to keep your mind active and engaged, and it can be immensely rewarding once you have found a solution that will benefit the world.
- Rapid Advancement
As the tech industry grows and becomes more competitive, there is the opportunity to rapidly advance your career if you are willing to put the work in. Usually, this will require further studying and earning a new degree, such as an online master’s degree in computer science, which is a highly valuable qualification which could help you to quickly climb the ladder and start earning a large salary. This degree is often used to progress in data science, software development, or cybersecurity and will prepare you to excel in these areas.
- You Don’t Need an Academic Degree to Get Started
While you will undoubtedly want an academic degree to climb the ladder and command a high salary, but you do not need one to get started in this industry. Those that develop critical skills like programming and web design can easily find entry-level work, and it is then a case of finding the best way to advance your career from here.
- Varied Working Habits
Typically, those that work in tech have varied working habits, which means that you are not usually tied to a desk 9-5. Those that work in tech often work remotely or in a freelance capacity so you can benefit from working from home, but often you will also be collaborating with others so it can provide the best of both worlds and keep you on your toes at all times. This also often allows for an excellent work-life balance, which is essential for those that value their free time and personal life.
- Continued Change
Since technology continues to evolve and change at a rapid rate, it means that this is an industry that is always moving forward, so you will always have work to do and be challenged. Additionally, you will find that the skills that you develop with a career in tech will be transferable, so people often change jobs with ease once they feel that they have gone as far they can play in one particular position. You could also often easily change entire industries with specific skills, whether this is FinTech, healthcare, business, cybersecurity, or any other area related to technology.
- Good Workplace Culture
As an industry that is pioneering and influential, it is no surprise that you find that tech companies often have the best culture, and employees enjoy coming into work each day. You hear of many of the tech giants having progressive and relaxed workplace environments with a laid-back yet productive atmosphere, which is ideal for those that do not want to work in the typical 9-5 role. Additionally, this also means that you often form strong connections with co-workers, and this can be a highly social industry to work in.
As you can clearly see, there are many reasons to pursue a career in technology, and it should allow for a happy, lucrative, and rewarding career no matter which path you decide to follow. As an industry that is so important to all areas of life in today’s day and age, this is undoubtedly an excellent industry to start working in and will only become more critical in the years to come. | <urn:uuid:09dc7901-b345-4d8b-b62a-f6cf16e7c6ea> | CC-MAIN-2022-40 | https://gbhackers.com/the-benefits-of-a-career-in-the-tech-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00478.warc.gz | en | 0.973044 | 1,192 | 2.515625 | 3 |
Unfortunately, until recently, people suffering from pain that was not clearly diagnosed would sometimes be treated as if that pain was imaginary. The pain is in their heads; it is psychosomatic; the patient doesn’t have the internal fortitude; any of these platitudes was used as an excuse for poor understanding of the physiology surrounding pain.
We now understand that pain is both highly subjective because of its ties to personal history and our emotions, we understand that the intrinsic biology of pain is as unique as a person’s fingerprints. The understanding that pain is indeed be in our heads, our brains to be precise, creates an opportunity for discussion and enlightenment to acknowledge how unique our pain is and how we can individualize therapy to patients.
The brain is akin to a computer
Just as computers process information, we know that our brain is the nerve center that informs the body that it is in pain, how severe that pain is and how we should respond to the pain. Using computers as a metaphor, we input data on a keyboard that is transmitted to a central processing unit when the computing occurs.
Likewise in our body, we receive an input from a sensory nerve and then another nerve brings the sensation along the spinal cord to the areas of the brain that registers the sensation telling us, “Ow! That hurts!” These pathways create a memory to remind us not to perform that act again because it caused an unpleasant sensory event. We now know that the brain has the ability to create new neural pathways for unpleasant and maladaptive events. In essence, we can train our brains! The ability to retrain out brain can be used to treat conditions related to pain as well, thinks like depression, anxiety, PTSD and even sleep disorders.
Retraining the brain
The ability to retrain the brain has created, for the first time, a new tool in the arsenal of pain relief – one that truly is efficacious, non-pharmacologic, low-risk and is long-lasting. The use of behavioral health guided virtual reality technology has created a platform that is achieves these goals. In a virtual reality experience, the brain is fully engaged in an immersive reality that is pleasant, exciting, and interesting.
The ability to immerse and redirect our cognitive direction leads to an almost immediate decrease in the sensation of pain and other untoward symptoms. Just as a teenager forgets about their hunger or fatigue while playing video games, patients begin to forget about their depression, their anxiety, or their pain as they experience a guided virtual environment. As the brain begins to experience these events with chronicity it then begins to form new neural pathways around maladaptive signs that are less distressing and more normative as the brain pursues homeostasis.
The outcomes are real
People using this approach report a 70 percent improvement in pain relief. Many are able to stop taking potentially addictive pain medications, or to significantly cut down on their medication dose. Freed from the side effects of these drugs, they are able to be present, and to engage in life again.
So perhaps the fact that pain is in our head is truly a good thing? Instead of taking pills to dull the pain, injections to numb the pain or surgery to remove the pain we can focus on the process of pain itself and allow the brain to redirect and course correct…with a little virtual guidance, of course!
For more information on these exciting new developments, visit www.harvardmedtech.com. | <urn:uuid:dd638fc5-fcf3-4c07-b44c-9f56415216c6> | CC-MAIN-2022-40 | https://coruzant.com/health-tech/pain-maybe-the-pain-being-in-our-heads-really-is-a-good-thing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00678.warc.gz | en | 0.955232 | 707 | 2.984375 | 3 |
Microsoft brings users a great operating system for personal computers, and while it has built-in security features they are not enough to fully protect yourself. Therefore, you need to improve the security level of your computer, which will make it more secure against viruses, malware, other malicious programs and even hackers. Here are the top seven methods you can use to protect your Windows computer:
- Install all Windows Updates:
To keep your computer is up to date, you have to download and install Windows updates frequently. This will help your PC fix bugs, improve the stability which makes it run smoother, and help your computer to be safe against attackers who are trying to exploit the security holes of old apps.
2. Turn on Windows Firewall:
The firewall is a handy built-in application on your Windows PC, which helps users to manage all incoming and outgoing connections. Enabling the Firewall will prevent unauthorized connections from accessing your computer as well as block unwanted outgoing connections to the Internet. You can also create customized rules or adjust existing rules to improve the security level to follow the way you want.
3. Install Antivirus Software:
If you don’t have any antivirus software on your computer, we would recommend selecting one of these antivirus softwares to protect your computer from viruses, malware, as well as other malicious applications.
4. Use a VPN when Connecting to a Public Wi-Fi:
This will help you to surf the Internet more securely, especially when connecting to a public wireless network, when you are out and about somewhere such as the airport or a hotel. When connecting through a VPN, all data packages will be encrypted and sent through a private connection tunnel, which helps to protect your personal information and sensitive data from being monitored or even exploited. Click here for everything you need to know about VPNs.
5. Use Google Chrome Browser:
If you don’t have it on your Windows computer, we would recommend downloading and using it as your default web browser. The main point is that this browser will show a warning when you visit a dangerous website, which can harm your computer. Therefore, you will know exactly which sites to avoid visiting to protect your Windows PC from being infected by bad apps.
6. Turn on BitLocker Encryption:
Using a strong password to log into your Windows PC can only help to protect it but a strong password only goes so far. We would recommend enabling BitLocker to encrypt and protect your data from unauthorized users. This application is very useful, especially when your computer is stolen as all sensitive data will be encrypted and can’t be viewed without a valid passcode. Using a Microsoft.com account to log into your computer is also beneficial because this type of account will help secure your PC as it’s unable to be reset with a few commands. Click here to see how to create a strong password the right way.
7. Don’t Download Email Attachment from a Stranger sender:
If you get an email with an attachment from someone that you’re not expecting, don’t try to download and view it, even if it’s a PDF file. Nowadays, phishing schemes are so common and it’s common for hackers to implement viruses or trojans into another file and send it through an email attachment. When a user downloads and opens it, that virus/trojan will be launched and infect your Windows PC, so be careful. Click here to see our guide to the various types of Malware. | <urn:uuid:c791c669-4b05-40d0-b324-feb6ea0c1550> | CC-MAIN-2022-40 | https://www.bvainc.com/2021/10/01/top-7-best-ways-to-protect-your-windows-computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00678.warc.gz | en | 0.929622 | 726 | 2.515625 | 3 |
Hackers are constantly throwing in new and clever phishing attacks that threaten email users’ security. KnowBe4, one of the top security attentiveness and simulated phishing platform contributors recently issued the top 10 phishing email subject lines from this year’s second quarter. Please note, the attacks used most often contain email subject lines that relate to a user’s passwords and security warnings.
An estimated 1 out of 3 people will open a phishing email each day. This tricky way of gathering people’s personal and financial information is getting bigger, despite all the warnings from technology experts.
What is Phishing?
Phishing is a technique that hackers practice to steal personal information, like credit card info or login authorizations. The hacker replicates an existing login page from an online service such as Dropbox, Apple, Gmail or your financial institution. This made-up website holds a code that delivers all the personal data you submit directly to the hacker. To lure you to the bogus website, hackers send a believable email to you. Quite often, the email sent to you will ask you to log in to your bank account because your bank has exposed a transaction that you did not authorize.
Hackers can make these emails look and sound real and their exploits have been very successful. They often use fear. The email will make it sound like you need to take action NOW! So without really checking, the victim clicks the bad link and continues to the bogus landing page where they give the cyber thief their log-in and password information.
Why is Phishing a Concern?
It is reported that consumers, businesses, and organizations will lose an estimated $9 billion in 2018 globally. With so much personal information tied to finances now shared online, hackers use phishing in order to illegally steal your money.
The Anti-Phishing Working Group (APWG) latest quarterly release reported:
- Over 11,000 phishing domains were created in the last quarter alone.
- The number of phishing sites rose 46% over the previous quarter.
- The practice of using SSL certificates on phishing sites continues to rise to lure users into believing a site is legitimate.
Is Phishing Just a Risk for Personal Users?
Because they store a lot of files in the cloud, Phishing is also a risk for all kinds of companies: Digital design companies, financial institutions, security companies, etc. According to hackmageddon.com, there were 868 reported company security breaches or cyber-attacks in 2017.
What do Hackers need to be successful?
There are generally three things hackers do to gain access to your information:
- Build an email account to send emails
- Buy a domain and set up a fake website
- Think of a tech company that is used often to mask itself as a legit website (Dropbox, Amazon, eBay, etc.)
What Can I Do to Avoid Phishing?
It has become increasingly difficult to guard yourself against phishing. As hard as Apple, Google, and other tech companies have worked to filter them out, hackers are always devising new ways to phish. However, here are some tips on spotting phishing emails:
- Try to avoid clicking on buttons and/or links in emails.
- Begin using password managers. A password manager aids the user in creating and retrieving complex passwords and storing the passwords in an encrypted database. Therefore, if hackers get one of your passwords, they can’t use it on any of your other accounts.
- Don’t put total faith in the green lock icon in your address bar. This only ensures that it is a private channel but does not inform you about who you’re communicating with.
- Allow 2FA (two-factor authentication). Two-factor verification is an extra layer of safekeeping otherwise known as “multi-factor authentication.” 2FA requires a password and username, and also something that only the user knows (mother’s maiden name) or has (passcode texted to another device, such as a cell phone).
- Be extra cautious if the browser plugin of your password manager doesn’t show your login credentials automatically.
- Be quick to report suspicious emails to your friends and colleagues. Organizations who make it easy for their employees to report attacks will see a significant decrease in cyber-attacks. The quicker an IT department can respond to a threat, it will minimize the threat potential damage inflicted on people.
Ironically, the trend for most of these phishing emails are warnings about security alerts.
Here are the top 10 from Q2:
- Password Check Required Immediately (15 percent).
- Security Alert (12 percent).
- Change of Password Required Immediately (11 percent).
- A Delivery Attempt was made (10 percent).
- Urgent press release to all employees (10 percent).
- De-activation of [[email]] in Process (10 percent).
- Revised Vacation & Sick Time Policy (9 percent).
- UPS Label Delivery, 1ZBE312TNY00015011 (9 percent).
- Staff Review 2017 (7 percent).
- Company Policies-Updates to our Fraternization Policy (7 percent). | <urn:uuid:f2ec3847-91e1-44a2-ab54-74402cd14d08> | CC-MAIN-2022-40 | https://www.hammett-tech.com/what-are-the-top-10-phishing-email-subject-lines-from-q2-2018/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00678.warc.gz | en | 0.922292 | 1,068 | 2.78125 | 3 |
Making the Transition from IPv4 to IPv6 represents an important crossroads for growing technology companies. Leverage our team’s depth of expertise to avoid the pitfalls that are inherent to legacy solutions. TCP/IP is the technology that devices use to interact online. What allows each device to get online and communicate is that each one has an unique IP address. IP addresses enable each device to interact with each other over the Global Internet. From desktops, to laptops, to cell phones, to airplanes, to IP enabled washers and dryers – most things will be connected online. This means we need a lot more addresses than are available today.
At the inception of the Internet, IP version 4 (IPv4) was and is currently the most widespread protocol used to communicate. By their binary nature, IP addresses are a finite resource and the original founders of the Internet established, at the time, 2^32 unique IP Addresses or ~ 4.3 Billion addresses. While 4.3 Billion might seem like a vast number, the growing amount of Internet participation has exhausted this supply – in fact, it has been predicted that by 2020, there will be more than 7 Internet-enabled devices for every man, woman, and child on planet earth. In February, 2011, the keeper of the free address pool, the Internet Assigned Numbers Authority (IANA) fully exhausted and allocated all of the IPv4 addresses.
To continue the operation of the Internet, Internet Protocol version 6 (IPv6) was created. The address space created in IPv6 is vast – 2^128 or more than 170 undecillion addresses – and unlikely to be depleted in the next 50 years. Everything online must transition to include both IPv6 and IPv4 and eventually transition entirely to the new IPv6 protocol.
IPv6 as created in the mid ‘90s as a result of engineering efforts to keep the Internet growing. It is an entirely new protocol that is not “backwards compatible” with IPv4. However, both protocols can run simultaneously over the same “wires”. This means that there will be a progressive transition (picking up pace from this point forward) from IPv4 to IPv6 commencing with devices that support both protocols (also known as dual stacking). Eventually, IPv4 will cease to be supported and in the end, all IPv4 only devices will no longer be able to communicate with the IPv6 enabled Internet.
Thankfully, the transition to IPv6 has been underway for a while now. For example, all US Government public-facing servers are slated to be IPv6 compatible by September of 2012, and internal US Federal systems must be IPv6 ready by 2014. Companies, starting with service providers like ATT & Comcast are well underway in their conversions. Furthermore, 256 out of 306 Top Level Domains (TLDs) (.com or .net or .nl or .biz) are already enabled for IPv6.
How the Internet is “Inter” connected
To understand how we will be affected, it is helpful to understand how the Internet is actually “inter-connected”. The Internet is literally a “web” of networks all connected to each other – from our home network that has 2 or 3 computers to ISPs to online companies like Amazon &eBay.
In the middle of this diagram, the “Internet” is a collection of all the world’s networks interconnected together so that we, an end-user, can get from point A to point B (or “routed”) across all of these networks. In the end, this means that everyone online and everyone who wants to be online will be undergoing the upgrade to IPv6 starting with getting a new IPv6 address.
At the end of the day, the biggest and most noticeable difference between IPv4 and IPv6 are the actual IP addresses being used. IPv4 has a 32-bit string of numbers that often looks like the following:
This “address” is a part of a pool of addresses managed by IANA as described earlier. As this address pool has been depleted, all new requests for addresses will only be able to get a v6 address. IPv6 addresses are quite a bit more complex – they are 128-bit addresses:
There are many advantages to this more complex address schema in addition to the fact that now every device will have it’s own unique identifier. Ironically, the longer address will actually help to improve end-user experience online as the Internet architecture will see improvements with respect to traffic congestion, application specificity, security, and much more.
We have established that every Internet-enable device must have a unique IP address. Now what does this mean for the various constituencies accessing the Internet?
For most end-users at home, this transition will happen automatically and will be mostly unnoticeable. They will get their current and updated addresses from their ISP; businesses will have their IT departments configure their own networks so that their customers (the business) will automatically get their addresses, etc. Therefore those most concerned about this transformation are those that actually manage portions of the Internet: service providers, I/PaaS providers, online content & application service providers, and businesses that run their own networks.
As we see from the chart below, most end-users and small businesses will really only be responsible for ensuring that they have purchased IPv6 enabled devices including: computers, wireless access points, smart phones, printers and game consoles. Most devices purchased after 2007 are in fact IPv6 enabled. For example, Microsoft has been IPv6 enabled since version Windows XP-SP1 as well as commensurate Apple OSs.
The heavy lifting will be shouldered by the ISPs, I/PaaS, Content/ASPs and businesses that manage their own networks.
There are approximately 66,000 registered Autonomous Systems (AS). These “networks” are run by ISPs, I/PaaS, ASP/Content as well as government & education organizations. All of these “networks” imply a level of self administration, hence “Autonomous,” and will require their Network Administrators to follow this simple review:
Assess the network for IPv4 only devices, dual stacked devices (IPv4 & IPv6), as well as IPv6 only devices (not many of these yet).
- Layout an IPv6 network architecture starting with an Address Schema (which entails sub-netting).
- Determine your “stop gap” measures for IPv4 only devices. There are many “translation” scenarios that can be employed temporarily to ease burden of next step, however, one should note that like 8 track tapes used for playing music, using IPv4 only will impact your Internet experience and over time cease to operate.
- Provide a rip/replace plan for those things not capable of supporting IPv6.
- Commence upgrade.
These are certainly not trivial steps in transitioning to v6, however again, these are exclusive to service providers – those directly involved in managing networks. It does not preclude end users or SMBs from being aware of this change and ensuring their own devices are compatible.
So hopefully this section has given a snap-shot of the “Internet Infrastructure Ecosystem” and how each “vertical” will be affected by this transition. Furthermore, while not intended to cry wolf nor claim the Internet will die, for those who are involved in the upgrade of their own networks this has catalyzed them to commence the transition. Now the next logical question – when do you really need to do this?
IPv6 Transition – when do we really need to start?
The transition to IPv6 is well under way and has been fueled by the IANA announcement andAPNIC announcement (RIPE & ARIN will be next to run out and both should be out in 2011). We also saw a massive IPv6 “World Day” on June 8th, 2011, that tested our IPv6 readiness. So how does this translate into when you have to get yourself, your business or your organization ready?
While everyone should be aware that this transition is underway, the “services providers” are really the ones impacted in the near term as it is their jobs to provide Internet access or access to Internet infrastructure, which has to be IPv6 moving forward. Given the lack of backwards compatibility, this will require some education, hardware and software upgrades, and re-thinking about how to layout a network. This is due to the fact that the IPv4 mind-set was one of “scarce resources” (we will run out of addresses). In an IPv6 world, you have nearly unlimited resources and can plan your network IP Address plan very differently.
ISPs, I/PaaS, ASP/content providers should be in the midst of transition and if they are not, now is the time. Enterprises will have to assess their own network needs but is not of immediate urgency. And, finally, SMBs and end-users will really only have to track their own ISP’s steps to upgrade to IPv6 as well as be aware of existing and future technology purchases.
At the end of the day the entire Internet should run more smoothly and securely on IPv6.
The steps those undertaking this transition will need to make are also a significant opportunity to automate many rote network processes. The general steps, and where automation can play a significant role are as follows:
In subsequent articles we will be diving into software tools to help service providers and enterprises in this transition and what some of the emerging best practices will be in the areas of IPv6 Automation, IPv6 Security, and IPv6 as it relates to Asset Tracking. | <urn:uuid:464de361-9a97-4789-885c-1f3002035dc2> | CC-MAIN-2022-40 | https://www.6connect.com/resources/from-ipv4-to-ipv6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00678.warc.gz | en | 0.9477 | 2,037 | 3.0625 | 3 |
In order to relieve pressure on teachers, the Education Secretary wants the IT and education industries to unite, cooperate and collaborate more readily.
Damian Hinds last week urged the technology sector to provide innovations badly needed in education. Stating the requirement for cutting-edge technology in educational settings from primary to higher, he noted the potential of AI and robotics in particular.
Speaking at the World Education Forum, he urged tech industry figures and organisations from grassroots level, such as startups, up to multinationals like Microsoft and Apple to contribute to education. This would enable staff and students alike to learn and grow through technology. Whether through tools or other teaching and educational innovations, the need to improve teaching practice is central to improving education across the UK and the world.
Close collaboration and ready information
Hinds noted that while schools, universities and colleges have the power to choose which technology tools and programmes best suit their needs and budgets, they lack the knowledge of the industry and its products to do so. By collaborating with education providers and governments, technology companies can build worthwhile, sustainable solutions which can support learners and educators for years to come.
For Hinds, five areas must be addressed in the tech industry:
- The improvement of teaching practices
- Enhancing processes for assessment
- Making training practices more effective and useful
- Updating administration processes to lessen their burden and confusion
- Offering support for those who aren't actively pursuing or engaging in education
In addressing these areas, educators will be freed up to concentrate on the basics of teaching children, young people and adults, and not constrained by the administration tasks that keep them from the classroom and working for longer hours than necessary.
Teaching is, and can be, used in revolutionary ways, Hinds noted - citing the abilities to explore the planet from the classroom via virtual reality and mapping, as well as being able to programme robots and other artificial intelligence devices. If technology companies and ambassadors were able to widen the training of teachers and share best practice, then education would be drastically improved upon.
If you want to find out more about how technology can enhance the education industry, why not download our free eBook, Transforming the Education Sector with the Cloud, now. | <urn:uuid:d8beb6ed-dadf-4a9b-b68e-3273bd348119> | CC-MAIN-2022-40 | https://vuzion.cloud/latest-news/education-secretary-calls-for-tech-and-education-industries-to-collaborate | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00678.warc.gz | en | 0.954999 | 448 | 2.828125 | 3 |
Real-Time Data Capture Required for a Global Ageing Population
We are in the midst of a longevity revolution. In the 1890s, the chance of surviving beyond age 65 was less than 50% even in countries such as Sweden that already enjoyed a relatively high life expectancy. Today, in those same high-life-expectancy countries, the chance of surviving beyond 65 is more than 90%. This is a testament to our socioeconomic and healthcare development. Population ageing, population growth, international migration and urbanisation are considered the four megatrends that will have a lasting impact on the sustainable development of our world.
This, however, is a double-edged sword. The phenomenal increase in life expectancy, coupled with a low fertility rate, has resulted in population ageing across the world. Projections by the World Population Prospects 2019 (United Nations, 2019) predicts that 6 in 11 people will be above 65 years by 2050. This is a significant shift when you consider that only one in 11 people were above 65 in 2019.
Another interesting fact is that the world’s population hasn’t been ageing at the same rate. Developed nations have had more time to acclimatise themselves to the ageing phenomenon. For example, it took 100 years for France’s population of 65+ to go from 7% to 14%. Most of the lesser developed countries do not have this luxury of time. Brazil, for example, witnessed the same demographic ageing of its population within a record two decades. This means that many countries around the world will have to adapt to this phenomenon within a short span of time. In short, they may grow old before they become rich enough to take care of their elderly.
Ensuring healthy ageing is at the centre of the effort to counter the effects of population ageing. Healthy ageing is not just the absence of disease. It is the maintenance of the individual’s functional ability even in old age. This can be achieved only through a global movement to promote lifelong health and preventive care. This movement is further challenged with the shift in epidemiology brought on by the transition from high to low mortality and fertility, and socioeconomic development. One of the major epidemiologic trends of the current century is the rise of chronic and degenerative diseases in countries throughout the world. It is projected that in the coming years, noncommunicable diseases such as heart disease, cancer and diabetes will be the leading cause of death and disability rather than infectious and parasitic diseases. Health and long-term care systems should be ready to overcome this changing epidemiology while also focussing on age-appropriate integrated care for the ageing population.
Longevity has been an aspirational goal for mankind from prehistoric times. We’ve spent billions of dollars and many human lifetimes researching ways and means to increase our lifespan. We, however, have now reached a juncture where we need to ensure that we are able to sustain the quality of life of our ageing population without being a burden to the whole. The answers to some of these crucial questions will shed some light on how this mega trend will affect our sustainable development.
Will we witness increased and extended periods of social engagement and productivity from our ageing populations in the future? How will ageing affect social and health care infrastructures and costs? What can we do to improve the health and productivity of our ageing population? How will the accelerated ageing in lesser developed countries affect their socioeconomic status and what can we do to alleviate the situation?
We don’t have all the answers now as this is an unprecedented phenomenon. What we can do is hypothesise and project possible scenarios and solutions based on research and data collected in the here and now.
Digital transformation has revolutionized the way we collect data and analyse it. Apart from the traditional methods of data collection, gerontologists and geriatricians can access real-time data about the health and lifestyle of the elderly through information and communication technology (ICT) devices such as smartphones and wearables. Multiple streams of data can be collected from users. The data can be self-reported survey data as well as data on physical activity. Devices like smartphones, smartwatches, and other wearables, which can be used to collect real-time data for analysis, are defined as mHealth (mobile health) technologies by the World Health Organisation. Real-time data collected and analysed in such a way gives a clear view of the current healthcare delivery systems while providing an understanding of the resources that would be needed in the future. Such data repositories can help organizations become more agile and hyper-productive, in addition to being able to predict future trends, which in turn will enable advance planning and preparation for the care of the elderly.* This will also help us ensure that care for the elderly does not affect the overall socioeconomic development of the emerging population.
*For organizations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organizational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism, will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organizations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organizations that are innovating collaboratively for the future. | <urn:uuid:018c46fb-5f86-446c-a05d-3814afa0d781> | CC-MAIN-2022-40 | https://www.infosysbpm.com/blogs/master-data-management/real-time-data-capture-required-for-a-global-ageing-population.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00678.warc.gz | en | 0.944319 | 1,126 | 2.953125 | 3 |
Any kind of virus is scary. The idea of the technology you use turning on you is unsettling at best. As we come to rely more on computers, smartphones, tablets and the cloud, a single cyber attack can be devastating.
And yet, there is one form of cyber attack that stands out. Ransomware is singularly chilling. When this malware finds its way onto your device, it demands payment . . . or you lose your files. Forever.
While ransomware may seem like a new form of cyber attack, it’s actually been around for a while. In fact, the first known ransomware attack happened in the 1980s.
Attack Number One
It was 1989, well before email or Instagram. The average PC user wasn’t logging into the internet, so the delivery method of that first ransomware attack may seem low-tech by today’s standards. It came on floppy disks.
20,000 of them.
The disks were distributed to users in 90 different countries, each labeled as a product of the PC Cyborg Corporation. No such company exists, but no one was counting on name recognition to get recipients to use the disks. They were counting on the content.
The disks included software designed to detail a person’s risk of contracting AIDS. In those days, AIDS was both terrifying and mysterious. New information was welcome, especially if it promised some measure of protection. The attack played on a common fear.
The software included a legitimate risk assessment tool, as well as a virus. After the user rebooted their computer a set number of times, they would be prompted to turn on their printer. At that point, a literal ransom note would print, along with instructions for paying the ransom (or “licensing fee”) in exchange for decryption software.
It was a deviously creative plan, and it set the stage for modern ransomware.
The Modern Threat
Today’s ransomware is fundamentally the same as that first attack, though there are some notable differences. The delivery method, for example, has changed. We’ll cover that in more detail in a bit.
Keeping your organization safe may seem like a tall order. There are so many clever ways a cyber criminal can infiltrate your network. Not only that, but ransomware attacks are alarmingly common.
And yet, the best cybersecurity is really just strict adherence to some basic strategies. In other words, it seems complex, but it’s not.
If you’re serious about protecting your company – and you should be – there’s a two-pronged approach that will stop most ransomware dead in its tracks. You need solid employee education, and you need the right technical tools.
The vast majority of cyber attacks rely on a single potential weakness in your network – the user. This is particularly true for ransomware.
Ransomware can only find its way into your system if it’s invited. Without an open door, it can’t touch you. The trick is to make sure your people know how to avoid inadvertently inviting ransomware onto your network.
Let’s look at three key areas.
Phishing emails are the modern-day equivalent of the same strategy the AIDS Trojan used. Even if you’re not familiar with the term “phishing,” you’re likely aware of this type of attack. The user receives an email with a link. Click that link and malware makes its way onto your system.
The thing about phishing emails is that they only work if the user clicks on the link, opting to download something. If the recipient doesn’t do that, nothing happens. Unfortunately, about one-third of all phishing emails work. Innocent users take the bait, clicking on malicious links.
The success of phishing comes down to a lack of employee education. If your people know and understand the danger of suspicious downloads, they’ll be far less likely to fall for them.
Email isn’t the only delivery vehicle for phishing.
Here’s a common scenario. Attackers create fake social media accounts on sites like Facebook and Twitter. The newest variation is a fake account that appears to represent the customer service department of a trusted company. Attackers then watch for complaints from real customers, promptly messaging them with “fixes” . . . which are, of course, loaded with dangerous links.
Make sure your employees know of this tactic. If you or any member of your staff is having issues with a product or service, make sure you initiate conversation with the vendor. Don’t trust anyone who initiates conversation with you without first verifying the authenticity of the account.
Remarkably, there are still a lot of folks out there using painfully ineffective passwords. In a recent survey, a surprising number of users were found to be using the password “123456.” That’s not just an invitation for cyber attack. That’s a neon sign with a laser light show and door prizes.
Instruct your employees to use strong passwords, and encourage them to change them often.
In addition to employee education, there are some things you can do on the technical side of your network to protect your company from ransomware attacks. Like employee education, these aren’t particularly difficult to execute. But don’t be fooled by their relative simplicity.
These are crucial steps to keeping your network safe.
Software Updates & Upgrades
In June of 2017, the Petya ransomware virus made worldwide headlines, infecting an estimated 16,500 machines. Ready for the painful twist? Microsoft had already released patches addressing the vulnerabilities Petya exploited back in May.
Too many companies have a casual, relaxed attitude about updates and upgrades. Yes, it’s inconvenient to reboot your machine so the OS can update. Yes, it’s expensive to upgrade from the old version of a program to the new (current) version. And yes, it’s extremely important to do both anyway.
Software developers do their best to outpace cyber criminals. When they find holes in their products, they address them. But if you don’t update and upgrade appropriately, you’ll remain vulnerable.
Backups & Business Continuity
Even thorough security measures aren’t a guarantee that you won’t fall victim to a ransomware attack. After all, it just takes one employee clicking on a malicious link. Just one out-of-date program. It can happen, even if you’re cautious.
Because the threat is very real, your protection should include a worst-case-scenario plan.
Ransomware is engineered to hold your data hostage. That can ruin a business – unless you have recent backups and a solid business continuity plan. If you’re prepared, even a successful attack won’t unravel your company’s stability.
A word of caution here, though. Business continuity isn’t something we advise doing on your own. But, that’s a perfect lead-in to our final technical tool . . .
A cybersecurity partner should be a part of your ransomware defense plan. Particularly if you don’t have an internal IT department. There’s no substitution for expertise. Working with the pros makes protection much easier to manage.
A well-qualified cybersecurity partner can even handle employee education on your behalf.
CCS Technology Can Help
Ransomware is a serious threat. That’s why we recommend a serious, proactive response. The individual parts aren’t all that complex, but each piece is important.
If you’re looking for ways to shore up potential security holes in your network, the experts at CCS Technology are here to help. We have years of experience helping small businesses just like yours. We know what it takes to stop ransomware.
Plus, we’re just a phone call away. Let us know how we can help you. | <urn:uuid:329e1387-c147-4d7a-bbff-3b6f09a2d753> | CC-MAIN-2022-40 | https://www.ccstechnologygroup.com/ransomware-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00678.warc.gz | en | 0.938963 | 1,665 | 3.09375 | 3 |
The world is now experiencing a time of conflict unlike anything seen in years, but because this is a modern-day, 21st-century clash, the war is not just being fought on a physical battlefield but also a digital one. As the invasion of Ukraine began official state hackers from both Russia and the West engaged in digital warfare, as did independent organizations such as “Anonymous,” a rogue hacking collective that has decided to stand against Russian cyber warfare.
Unfortunately, the worlds of military, corporate and civilian government data spaces are now considered fair game for state-level cyberattacks, funded and driven by government agencies. Now some companies are taking a stand.
Google, one of the premier technology companies globally for data management and analytics, is now providing security keys to groups and individuals to help protect them against cyberattack activity. It is distributing 10,000 Titan security keys to individuals like journalists, whistleblowers, and government officials – people with sensitive or confidential data – to encrypt and protect systems against intrusion and spying.
A Titan security key is an added level of encryption and access control. It comes as two devices: a USB device shaped like a key, and a Bluetooth device that functions similarly for wireless connections. This adds an extra level of multifactor authentication to an account. In addition to inputting a password to access data on, for example, a Gmail account, an account linked to a Titan security key would require the presence of the USB key plugged into the computer or a confirmed connection to the Bluetooth version for verification.
In other words, like older multifactor authentication methods that require a one-time code sent to a phone as an added layer of security, a Titan key system means that even if a password has been stolen through phishing or other means, the data can’t be accessed without the Titan security key to open the second “lock” on the data.
Multifactor Authentication Is The Best Defense
What Google and many other technology companies are discouraging is the use of only a single password to gain access to data and control of systems. A single password means that everything from identity theft, spying on confidential data, and outright seizure of control of systems is possible once that password is discovered.
Now that government agencies like Russia’s GRU are going after many systems and data, security is more important than ever. If you’re still relying on a legacy password security technology and want to upgrade your network to modern identity and authentication security technology, (including the new global standard of key-pair biometrics), look at Nok Nok products for secure, password-free cyber authentication solutions. The largest global financial brands depend on Nok Nok’s modern auth platform for improving and protecting customer trust. | <urn:uuid:0889c545-d360-4641-b177-d0e0b9ee736a> | CC-MAIN-2022-40 | https://noknok.com/multifactor-authentication-google-provides-digital-security-keys/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00078.warc.gz | en | 0.937425 | 562 | 2.96875 | 3 |
Also tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Studio Code (VS Code) from Microsoft. Also, whatever language you learn, stash your code at GitHub.
Let's talk in general terms. Here are some types of languages.
- Development languages. Scripting languages have grown up into real programming languages, but for the most part, "software development" means languages designed for that task like C, C++, Java, C#, Rust, Go, or Swift.
- Domain-specific languages. The language Lua is built into nmap, snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or Matlab. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.
As an experienced developer, here are my comments on the various languages, sorted in alphabetic order.
bash (and other Unix shells)
You have to learn some bash for dealing with the command-line. But it's also a fairly complete programming language. Peruse the scripts in an average Linux distribution, especially some of the older ones, and you'll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it's called bash/Linux.
In the Unix world, there are lots of other related shells that aren't bash, which have slightly different syntax. A good example is BusyBox which has "ash". I mention this because my bash skills are rather poor partly because I originally learned "csh" and get my syntax variants confused.
This is the development language I use the most, simply because I'm an old-time "systems" developer. What "systems programming" means is simply that you have manual control over memory, which gives you about 4x performance and better "scalability" (performance doesn't degrade as much as problems get bigger). It's the language of the operating system kernel, as well as many libraries within an operating system.
But if you don't want manual control over memory, then you don't want to use it. Its lack of memory protection, leading to security problems, makes it almost obsolete.
None of the benefits of modern languages like Rust, Java, and C#, but all of the problems of C. It's an obsolete, legacy language to be avoided.
This is Microsoft's personal variant of Java designed to be better than Java. It's an excellent development language, for command-line utilities, back-end services, applications on the desktop (even Linux), and mobile apps. If you are working in a Windows environment at all, it's an excellent choice. If you can at all use C# instead of C++, do so. Also, in the Microsoft world, there is still a lot of VisualBasic. OMG avoid that like the plague that it is, burn in a fire burn burn burn, and use C# instead.
Once a corporation reaches a certain size, it develops its own programming language. For Google, their most important language is Go.
Go is a fine language in general, but its main purpose is scalable network programs using goroutines. This does asynchronous user-mode programming in a way that's most convenient for the programmer. Since Google is all about scalable network services, Go is a perfect fit for them.
I do a lot of scalable network stuff in C, because I'm an oldtimer. If that's something you're interested in, you should probably choose Go over C.
This gets a bad reputation because it was once designed for browsers, but has so many security flaws that it can't be used in browsers. You still find in-browser apps that use Java, even in infosec products (like consoles), but it's horrible for that. If you do this, you are bad and should feel bad.
But browsers aside, it's a great development language for command-line utilities, back-end services, apps on desktops, and apps on phones. If you want to write an app that runs on macOS, Windows, and on a Raspberry Pi running Linux, then this is an excellent choice.
BTW, "JSON" is also a language, or at least a data format, in its own right. So you have to learn that, too.
Lua is small and embeddable, so you find it built into security tools like nmap, snort, and Wireshark. You also see it as the scripting language in popular games. Like Go, it has extremely efficient coroutines, so you see it in the nginx web server, "OpenResty", for backend scripting of applications.
Perl was the primary web scripting language for building apps on servers in the 1990s, before PHP came along. Thus, it's a popular legacy language, but not a lot of new stuff is done in this language.
PHP, however, is obsolete for writing new web apps. There are so many unavoidable security problems that you should avoid using it to create new apps. Also, scalability is still difficult. Use NodeJS, OpenResty/Lua, or Ruby instead.
The same comments above that apply to bash also apply to PowerShell, except that PowerShell is Windows.
Windows has two command-lines, the older CMD/BAT command-line, and the newer PowerShell. Anything complex uses PowerShell these days. For pentesting, there are lots of fairly complete tools for doing interesting things from the command-line written in the PowerShell programming language.
Thus, if Windows is in your field, and it almost certainly is, then PowerShell needs to be part of your toolkit.
This has become one of the most popular languages, driven by universities which use it heavily as the teaching language for programming concepts. Anything academic, like machine learning, will have great libraries for Python.
A lot of hacker command-line tools are written in Python. Since such tools are often buggy and poorly documented, you'll end up having to reading the code a lot to figure out what is going wrong. Learning to program in Python means being able to contribute to those tools.
I personally hate the language because of the schism between v2/v3, and having to constantly struggle with that. Every language has a problem with evolution and backwards compatibility, but this v2 vs v3 issue with Python seems particularly troublesome.
Also, Python is slow. That shouldn't matter in this age of JITs everywhere and things like Webassembly, but somehow whenever you have an annoyingly slow tool, it's Python that's at fault.
Note that whenever I read reviews of programming languages, I see praise for Python's syntax. This is nonsense. After a short while, the syntax of all programming languages becomes quirky and weird. Most languages these days are multi-paradigm, a combination of imperative, object-oriented, and functional. Most all are JITted. "Syntax" is the least reason to choose a language. Instead, it's the choice of support/libraries (which are great for Python), or specific features like tight "systems" memory control (like Rust) or scalable coroutines (like Go). Seriously, stop praising the "elegant" and "simple" syntax of languages.
Like SQL for database queries, regular expressions aren't a programming language as such, but still a language you need to learn. They are patterns that match data. For example, if you want to find all social security numbers in a text file, you look for that pattern of digits and dashes. Such pattern matching is so common that it's built into most tools, and is a feature of most scripting languages.
One thing to remember from an infosec point of view is that they are highly insecure. Hackers craft content to incorrectly match patterns, evade patterns, or cause "algorithmic complexity" attacks that cause simple regexes to explode with excessive computation.
You have to learn regexes well enough to be familiar with the basics, but the syntax can get unreasonably complex, so few master the full regex syntax.
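To make the social-security-number example above concrete, here is a small sketch in Python (the dashed pattern and the sample text are invented for illustration; real data will need more variants than this):

```python
import re

# Three digits, two digits, four digits, separated by dashes (e.g. 123-45-6789).
# The \b word boundaries stop the pattern matching inside longer runs of
# letters and digits.
ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

sample = "Jane Doe, SSN 123-45-6789, ticket 987-65-4321x, serial A123-45-6789"
print(ssn_pattern.findall(sample))
# ['123-45-6789'] -- the second candidate fails the trailing \b because of the
# 'x', and the third fails the leading \b because of the 'A'.
```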
Ruby is a great language for writing web apps that makes security easier than with PHP, though like all web apps it still has some issues.
In infosec, the major reason to learn Ruby is Metasploit.
Rust is Mozilla's replacement language for C and especially C++. It's supports tight control over memory structures for "systems" programming, but is memory safe so doesn't have all those vulnerabilities. One of these days I'll stop programming in C and use Rust instead.
SQL, "structure query language", isn't a programming language as such, but it's still a language of some sort. It's something that you unavoidably have to learn.
One of the reasons to learn a programming language is to process data. You can do that within a programming language, but an alternative is to shove the data into a database then write queries off that database. I have a server at home just for that purpose, with large disks and multicore processors. Instead of storing things as files, and writing scripts to process those files, I stick it in tables, and write SQL queries off those tables.
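A minimal sketch of that files-into-tables workflow, using Python's built-in sqlite3 module (the table layout and the scan records below are made up purely for illustration):

```python
import sqlite3

# Load results into a table once, then ask questions with SQL instead of
# re-parsing flat files with ad-hoc scripts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hosts (ip TEXT, port INTEGER, banner TEXT)")
conn.executemany(
    "INSERT INTO hosts (ip, port, banner) VALUES (?, ?, ?)",
    [
        ("10.0.0.5", 22, "OpenSSH 8.9"),
        ("10.0.0.5", 80, "nginx"),
        ("10.0.0.9", 22, "OpenSSH 7.4"),
    ],
)

# Which hosts expose the most services?
for ip, open_ports in conn.execute(
    "SELECT ip, COUNT(*) FROM hosts GROUP BY ip ORDER BY COUNT(*) DESC"
):
    print(ip, open_ports)
```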
Back in the day, when computers were new, before C++ became the "object oriented" language standard, there was a competing object-oriented version of C known as "Objective C". Because, as everyone knew, object-oriented was the future, NeXT adopted this as their application programming language. Apple bought NeXT, and thus it became Apple's programming language.
But Objective C lost the object-oriented war to C++ and became an orphaned language. Also, it was really stupid, essentially two separate language syntaxes fighting for control of your code.
Therefore, a few years ago, Apple created a replacement called Swift, which is largely based on a variant of Rust. Like Rust, it's an excellent "systems" programming language that has more manual control over memory allocation, but without all the buffer-overflows and memory leaks you see in C.
It's an excellent language, and great when programming in an Apple environment. However, when choosing a "language" that's not particularly Apple focused, just choose Rust instead.
However, there's no One Language to Rule them all. There's good reasons to learn most languages in this list. For some tasks, the support for a certain language is so good it's just best to learn that language to solve that task. With the academic focus on Python, you'll find well-written libraries that solve important tasks for you. If you want to work with a language that other people know, that you can ask questions about, then Python is a great choice.
The exceptions to this are C++ and PHP. They are so obsolete that you should avoid learning them, unless you plan on dealing with legacy. | <urn:uuid:aeb81a1f-0588-43cb-b12e-9fc215259364> | CC-MAIN-2022-40 | https://blog.erratasec.com/2019/04/programming-languages-infosec.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00078.warc.gz | en | 0.950275 | 3,340 | 3.140625 | 3 |
Businesses need to know their Actual Cost to make informed decisions about where to allocate their resources.
Without accurate information, businesses can’t possibly hope to make sound decisions about allocating resources.
The actual cost is the real cost of a product or service after the deductions and adjustments have been made. It’s essential to understand this definition to make sound business decisions that reflect a product’s or service’s actual costs.
This blog post will explore the actual-cost definition and details. We’ll also provide some examples to help illustrate this concept.
Actual Cost Definition
The actual cost is a product or service’s correct, accurate price. It should include all expenses incurred in producing and delivering one item to its final destination, including indirect costs such as administrative overhead, depreciation charges for capital equipment used in production, labor, delivery costs, and other related items.
Actual costing is recommended when each production process is analyzed to determine the production costs at each phase.
Actual costing is essential when determining the production costs because it can give a more accurate estimate than other methods like estimating or break-even analysis.
This concept has been around since 1938, when it was first introduced by professor Jules Mairesse who published his theory on the subject in an article titled “The Economic Role of Accounting.” Today, we use this process with manufacturing and services where there are typically no tangible assets to account for, like consulting firms.
Actual cost formula
The formula for calculating it is as follows.
Actual Cost = Direct Costs + Indirect Costs + Fixed Costs + Variable Costs + Sunken Costs
Below is the meaning of the factors used in the actual cost formula.
- Direct costs: This is the precise cost that is directly related to your processes, such as fixed costs and variable costs
- Indirect costs: These costs are additional or extra costs to support your process, like administrative charges
- Fixed costs: A fixed cost remains constant regardless of the level of business activity – it does not change with changes in output or sales volumes. Examples may include property taxes, equipment rent/lease payments, and labor-related expenses such as a company secretary.
- Variable costs: This is the cost that varies during the process—for example, labor charges.
- Sunken costs: These are costs incurred due to errors during the process
If you are manufacturing products, the actual cost calculation is as follows.
- A. (Number of units of material used) X (Per unit cost) = Actual material cost
- B. (Labor hours used in production) X (Wage paid per hour) = Actual labor cost
- C. Addition of all overhead expenses (electricity, rent, insurance) = Actual overhead cost
- (A + B + C) / (Number of units produced) = Actual production cost per unit
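As a quick illustration of the A + B + C calculation above, here is a short Python sketch (every figure below is invented purely for the example):

```python
# A: actual material cost
units_of_material = 500
cost_per_unit_of_material = 4.00
actual_material_cost = units_of_material * cost_per_unit_of_material      # 2,000

# B: actual labor cost
labor_hours = 120
wage_per_hour = 18.00
actual_labor_cost = labor_hours * wage_per_hour                           # 2,160

# C: actual overhead cost (electricity, rent, insurance)
actual_overhead_cost = sum([900.00, 350.00, 150.00])                      # 1,400

units_produced = 1000
actual_cost_per_unit = (
    actual_material_cost + actual_labor_cost + actual_overhead_cost
) / units_produced

print(f"Actual production cost per unit: ${actual_cost_per_unit:.2f}")    # $5.56
```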
Actual cost example
The actual product cost includes the price it took to make it. So, for example, a manufacturing company estimated $1500 for product repair. But the actual cost was $2000. So the company had a cost variance of $500.
Cost variance is the difference between planned or estimated costs and actual costs.
Here the actual cost is more than the estimated cost. Hence the cost variance is considered an unfavorable variance.
- It helps to calculate fixed costs for different stages of production.
- It is widely used in manufacturing sectors where more raw materials are utilized. It also enhances the inventory system and makes procurement easy.
- It assists in making several outsourcing decisions and also helps in setting up the correct prices for the products.
- It helps streamline procurement as it depicts the cost of all alternative sources of supply and helps choose the most feasible option.
Actual cost uses realistic numbers to ascertain prices and makes decision-making an easier task. However, the disadvantage lies in the overhead expenses, which can never be exact.
Even the labor charges vary, making it more challenging to use this technique than normal costing.
Also, this is useful in industries where the raw materials and other related factors are consistent with significantly fewer changes. It is ideal for standardized products.
Also, the process is time-consuming, requiring several technical skills.
Normal costs or regular costs
Normal costs are the cost that is pre-planned. That means you calculate the cost of the product before production based on the previous data.
What are the differences between Actual costing(AC) and Normal costing (NC)?
| Normal Costing | Actual Costing |
| --- | --- |
| The cost is calculated before the production process. | The cost is calculated after the production process. |
| It includes direct costs and indirect costs. | It consists of the costs of the production process in real time, including any variations in labor charges and raw material prices. |
| Costs are updated only once in a while. | Costs are updated for each batch after calculating the actual expenses. |
| It assumes the cost of the production. | It takes the actual expenses of each batch. |
What is the total fixed cost?
Total fixed costs, also called direct costs, are all the expenses directly associated with a project that you cannot avoid (recurring costs) or allocated in whole or in part to more than one cost object.
Total fixed costs are future cash expenditures for rent, utilities, debt service, insurance premiums, etc. Fixed cost does not include operating expenses such as labor, materials, and supplies.
For example, an organization might claim 1 million dollars in revenue per year, but 50% of that, or $500,000, is committed to cover overhead/fixed expense items that cannot achieve the goal of generating revenue.
What are the disadvantages of actual costing?
Actual costing is disadvantageous because it takes longer to calculate and can be more expensive to implement.
In actual costing, the direct materials, direct labor, and overhead costs incurred in producing a product or service are assigned to that product or service. This approach is more accurate than allocation-based methods, but it’s also more time-consuming and costly to implement because of the need to track actual costs.
The actual cost is a fundamental financial calculation that can help you understand the actual price of your product.
The actual costing definition and examples we’ve provided today should give insight into how this concept works in business. | <urn:uuid:bf0c0e18-41f8-4c7e-8fa4-f9a8783bc8d5> | CC-MAIN-2022-40 | https://www.erp-information.com/actual-cost.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00078.warc.gz | en | 0.936829 | 1,339 | 3.796875 | 4 |
Given a positive integer N, the task is to tell whether it is an anti-prime number or not.
Anti-Prime Numbers (Highly Composite Numbers):
A positive integer that has more divisors than any positive integer smaller than it is known as an anti-prime number (also referred to as a highly composite number).
Following is the list of the first 10 anti-prime numbers together with their prime factorizations:
| Anti-Prime Number | Prime Factorization |
| --- | --- |
| 1 | 1 |
| 2 | 2 |
| 4 | 2^2 |
| 6 | 2 × 3 |
| 12 | 2^2 × 3 |
| 24 | 2^3 × 3 |
| 36 | 2^2 × 3^2 |
| 48 | 2^4 × 3 |
| 60 | 2^2 × 3 × 5 |
| 120 | 2^3 × 3 × 5 |
Input: N = 5040
Output: 5040 is anti-prime
Explanation: There is no positive integer less than 5040 having a number of divisors greater than or equal to the number of divisors of 5040.
Input: N = 72
Output: 72 is not anti-prime
This problem can be solved by counting the number of divisors of the given number, then counting the number of divisors of every number less than it and checking whether any of them has a number of divisors greater than or equal to the number of divisors of N.
Follow the steps below to solve this problem:
- Find how many divisors N has.
- Now iterate from 1 to N-1 and check:
- Does any number less than N have a divisor count greater than or equal to that of N?
- If yes, then N is not an anti-prime number.
- If, in the end, none of the numbers has a divisor count greater than or equal to that of N, then N is an anti-prime number.
Below is the implementation of the above approach.
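The original code listing did not survive here, so the following is a Python sketch of the approach described above:

```python
def count_divisors(n):
    # Count divisors of n in O(sqrt(n)) by pairing each divisor i with n // i.
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 1            # divisor i
            if i != n // i:
                count += 1        # the paired divisor n // i
        i += 1
    return count


def is_anti_prime(n):
    # N is anti-prime if no smaller positive integer has at least as many divisors.
    divisors_of_n = count_divisors(n)
    for smaller in range(1, n):
        if count_divisors(smaller) >= divisors_of_n:
            return False
    return True


if __name__ == "__main__":
    for n in (5040, 72):
        print(f"{n} is {'anti-prime' if is_anti_prime(n) else 'not anti-prime'}")
```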
Time Complexity: O(N^(3/2))
Auxiliary Space: O(1)
(58) The principle of transparency requires that any information addressed to the public or to the data subject be concise, easily accessible and easy to understand, and that clear and plain language and, additionally, where appropriate, visualisation be used. Such information could be provided in electronic form, for example, when addressed to the public, through a website. This is of particular relevance in situations where the proliferation of actors and the technological complexity of practice make it difficult for the data subject to know and understand whether, by whom and for what purpose personal data relating to him or her are being collected, such as in the case of online advertising. Given that children merit specific protection, any information and communication, where processing is addressed to a child, should be in such a clear and plain language that the child can easily understand.
(59) Modalities should be provided for facilitating the exercise of the data subject's rights under this Regulation, including mechanisms to request and, if applicable, obtain, free of charge, in particular, access to and rectification or erasure of personal data and the exercise of the right to object. The controller should also provide means for requests to be made electronically, especially where personal data are processed by electronic means. The controller should be obliged to respond to requests from the data subject without undue delay and at the latest within one month and to give reasons where the controller does not intend to comply with any such requests.
(64) The controller should use all reasonable measures to verify the identity of a data subject who requests access, in particular in the context of online services and online identifiers. A controller should not retain personal data for the sole purpose of being able to react to potential requests. | <urn:uuid:c3fdb140-e888-4291-b963-2400a6e6e2fa> | CC-MAIN-2022-40 | https://gdpr-text.com/zh/read/article-12/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00078.warc.gz | en | 0.734538 | 1,332 | 2.75 | 3 |
Despite being a concept born fifty years ago, virtualization has advanced and can satisfy complex applications currently being developed. Half of all servers run on Virtual Machines (VMs), and the IDC predicts that close to 70% of entire computer workloads will run on VMs by 2024. As virtualization components increase and the virtualized environment expands, the main concern becomes how to maintain safe security levels and integrity. Below is a brief look into some of the differences, issues, challenges, and risks caused by virtualization. This paper also provides some recommendations to ensure that the network is secure to the required degree.
Security benefits due to virtualization
The introduction of virtualization to the environment will lead to the following security benefits:
- It is possible for a properly configured network to share systems without necessarily having to share vital data or information across the systems. This flexibility provided by a virtual environment is one of its core security benefits.
- Virtualized environments use a centralized storage system that prevents critical data loss in case of a stolen device or when the system is maliciously compromised.
- VMs and applications can be properly isolated to minimize the chances of multiple attacks in case of exposure to a threat.
- Virtualization improves physical security by reducing the number of hardware in an environment. Reduced hardware in a virtualized environment implies fewer data centers.
- Server virtualization allows servers to return to revert to their default state in case of an intrusion. This enhances incident handling since an event can be monitored right from before the attack and during an attack.
- Hypervisor software is simple and relatively small in size. Therefore, there is a smaller attack surface on the hypervisor itself. The smaller the attack surface, the smaller the potential for vulnerabilities.
- Network and system administrators have a higher level of access control. This can improve the efficiency of the system by separating duties. For instance, someone may be assigned to control VMs within the network’s perimeters, while someone else may be assigned to deal with VMs in the DMZ. The system can be further integrated such that individual administrators specifically deal with Linux servers while others deal with the Windows servers.
Notice that I have frequently used phrases like “properly configured” or “if set up appropriately”. This is to emphasize the complexity of virtualization. Therefore, it must be appropriately secured to gain the stated benefits.
Security challenges and risks
We can now proceed to some of the challenges, risks, and other relevant issues that influence virtualization.
Sharing of files between Hosts and Guests
- A compromised guest can remotely access host files and modify or make changes to them when file sharing is used. The malicious guest may modify the directories used to transfer files.
- When an API is used for programming, or when guests and hosts use clipboard sharing to share files, there is a higher chance of substantial bugs in this area compromising the entire infrastructure.
- VMs attached to hypervisors are affected when the ‘host’ hypervisor is also compromised. The default configuration of a hypervisor is not efficient enough to provide absolute protection against threats and attacks.
- As much as hypervisors are small, provide a relatively smaller exposure surface area, and virtually control everything, they also endanger the system by providing a single point of failure. An attack on a single hypervisor can put the whole environment in danger.
- Because hypervisors control almost everything, administrators can adjust and share security credentials at their will. The administrators have keys to the kingdom, which makes it difficult to know who did what.
- Current configurations or any modifications are lost when snapshots are reverted. For instance, if you modified the security policy, it implies that the platforms may become accessible. To make it worse, audit logs are also likely to get lost; hence, no records of changes can be traced. Without all these, it can be challenging to meet the expected compliance requirements.
- Like physical hard drives, snapshots and images can contain PII (Personally Identifiable Information) and passwords. New images or snapshots may be a cause for concern, and any previously stored snapshots that had undetected malware can be loaded at a later date to cause havoc.
- iSCSI and Fibre Channel are susceptible to man-in-the-middle attacks since they are clear text protocols. Attackers can also use sniffing tools to monitor or track storage traffic, which they can use in the future at their convenience.
Administrator access and separation of duties
- In an ideal physical network, network administrators exclusively handle network management while server admins deal with the management of servers. Security personnel have a role that involves both of the two admins. However, in a virtualized environment, network and server management can both be delegated from the same management platform. This poses a novel challenge for separation of duties that will work effectively. In most cases, virtualization systems grant full access to all virtual infrastructure activities; this is especially dangerous when the default settings were never changed and the system is hacked.
- A combination of VM clock drift and other normal clock drifts can make tasks run early or late. This makes the logs lose any elements of accuracy in them. With inaccurate tracking, there will be insufficient data if the need for forensic investigation arises in the future.
- For multiple VMs running on the same host, they are isolated such that one cannot be used to attack another. Despite the degree of isolation, the partitions share various resources such as CPU, memory, and bandwidth. Therefore, if a partition consumes an extremely high amount of one, several, or all of these resources due to a threat, say a virus, other partitions are likely to experience a denial of service attack.
- For VLANs to be used, VM traffic must be routed from the host to a firewall. The process may lead to latency or complex networking that can lower the performance of the entire network.
- Communication between various VMs is not secured and cannot be inspected on a VLAN. And if the VMs are on the same VLAN, then malware spreads like a wild bush fire, and the spread from one VM to another cannot be stopped.
Virtualization common attacks
Below are some of the three common attacks known with virtualization:
Denial of Service Attack (DoS)
In case of a successful denial of service attack here, hypervisors are likely to be completely shut down and a backdoor created by the black hats to access the system at their will.
Host Traffic Interception
Loopholes or weakness points present in the hypervisor can allow for tracking of files, paging, system calls, monitoring memory, and tracking disk activities.
VM Hopping
If a security vulnerability such as a hole exists in a hypervisor, a user can almost seamlessly hop over from one VM to another. Unauthorized users from a different VM can then manipulate or steal valuable information.
Traditional security approaches to virtualization
Most of the current security challenges encountered in virtualization can be partly addressed by applying existing technology, people, and process. The main setback is their incapability to protect the virtual fabric composed of virtual switches, hypervisors, and management systems. Below is a look into some of the traditional approaches used to provide security to virtualization and some of their shortcomings.
Some security personnel route traffic between the standard system firewalls and VMs to monitor and log traffic and send feedback back to the VMs. Virtualization being a new technology, firewalls do not provide a well-tailored infrastructure to accommodate its security-related issues. Firewalls came along well before virtualization was incorporated and adopted within data centers and enterprises. Therefore, the pre-installed management systems cannot handle current security threats to virtualization, as they seem too complex for the system. Such setbacks can lead to the deployment of manual administration, which comes with errors due to the human factor.
Reducing the number of VMs assigned to physical NICs/per Host
This method reduces the number of VMs to be placed on a host and assigns a physical NIC to every VM. This is one of the most efficient means to secure the firm, though it does not allow the organization to enjoy the ROI related to virtualization and other cost benefits.
Detection of Network-Based Intrusions
When there are multiple VMs residing on a host, these devices do not work well. This is mainly because the IPS/IDS systems cannot efficiently monitor the network traffic between the VMs. Data can also not be accessed when the application is moved.
VLANs are extensively used both for environments with a good degree of virtualization and for those without any form of virtualization. As the number of VLANs expands, it gets harder to manage the resulting complexities related to access control lists. Consequently, it also becomes difficult to manage compatibility between the virtualized and non-virtualized aspects of the environment.
The use of an agent-based anti-virus approach entails loading a complete copy of the anti-virus software on each VM. It is a secure method but requires a large amount of financial input to load copies of the anti-virus across all the VMs in the environment. The software is large and therefore increases hardware utilization. As a result, it negatively impacts memory, CPU, and storage, and decreases performance.
A large percentage of firms still rely on traditional mechanisms for their network security despite the above-mentioned drawbacks. Virtualized environments are highly dynamic and change rapidly with advancements in technology and IT infrastructure. To get the best protection for such an unpredictable environment, it’s recommendable to use the good aspects of the current security approach in addition to the below-listed recommendations for a virtualized environment.
Best practices and recommendations for a secure virtualized environment
- Eliminate loopholes into the system by disconnecting any inactive NIC.
- Secure the host platform that connects guests and hypervisors to a physical network by setting up logging and time synchronization, putting controls in place to regulate users and groups, and setting file permissions.
- Use authentication and encryption on each packet to secure IP communications between two hosts.
- Eliminate the use of default self-signed certificates to avoid possible interference by man-in-the-middle attacks.
- Strategically place virtual switches into a promiscuous mode for traffic tracking purposes and allow the filtering of MAC addresses to prevent possible MAC spoofing attacks.
- Ensure that all traffic is encrypted, including traffic between hypervisor and host (using SSL), between clients and hosts, and between hypervisor and management systems.
- Have a proper change control so that the main site and the backup sites are kept as identical as possible.
- PEN test and auditing should be separately done for your DR site and the main site but with the same frequency and significance.
- Logging and other records sourced from the DR site should be treated with the same importance as those from your primary site.
- Ensure that your production firewall is active and has a good security posture at the disaster recovery site. Conduct regular audits at the main site if the firewall is disabled or until an event occurs.
- Replicas of valuable data or information should be encrypted and appropriately stored.
- Create a unique storage matrix
Separation of duties and Administrator access
- Server administrators should be provided, specifically, with credentials of the respective servers they are in charge of.
- Admins should be given the power to create new VMs but not to modify already existing VMs.
- Every guest OS should be assigned a unique authentication unless there is a compelling reason for two or more guest OS to use the same credentials.
- Contrary to common thought, security personnel have found that the larger the virtualized environment, the easier it is to allocate responsibilities across functions. An admin can’t carry out the entire management process singlehandedly.
Below are four effective measures that can be used to eliminate unauthorized and unsecured virtualization in an environment.
Clearly outline acceptable use policy.
Define the required approvals and the exact conditions under which virtualization software can be installed.
Reduce the ratio of VMs to Users
Not every user will require VMs on their desktop. Restrict installation of freely available software on corporate laptops and desktops.
Implement security policies that support virtualization
Ensure that your system does not have security policies that conflict with the existing virtualization platforms.
Have a library of Secure VM builds
Set up a repository of VM builds for keeping security software, patches, and configuration settings that users can easily access for use or re-use if need be.
Virtual Machine Security
- Management networks connected to hypervisors should not be used to store VMs.
- Avoid using processor-intensive screensavers on physical servers; they overwhelm the processor capacity needed to serve the VMs.
- Only create VMs as per the requirement. Unused VMs in the environment can form potential entry points for black hats.
- The kernel or host resources, such as storage networks, should be easily accessed by VMs.
- Disable all unused ports, such as USB ports present on VMs.
- Encrypt data being conveyed between the Host and VM.
- Traffic segmentation can be achieved by employing VLANs within a single VM switch.
- Have a comprehensive plan in place on how to plan, deploy, patch, and back up the VMs.
- Place workloads of different trust levels in different physical servers or security domains.
- Dormant VMs should be routinely checked or have restricted access.
- Enable SSH, SSL, and/or IPsec protocols to secure communication between host and management systems. This is elemental in eliminating any chances of man-in-the-middle attacks, loss of data, or eavesdropping.
- To avoid double-checking reports or analysis, installing a single unifying security policy and management system for both virtual and physical environments is necessary.
- Database servers and management servers should be distinctly separated.
- Restrict access to the management server. It should not be accessible from every workstation.
- Install new updates and patches as they are released. Sound patch management helps to mitigate hypervisor vulnerabilities.
- Eliminate unwanted services like file sharing
- Hypervisor logs should be analyzed consistently to weed out any weak points from the system.
- Employ the use of a multi-factor authentication process for the hypervisor functionalities.
- The management interface of the hypervisor should not be exposed to the LAN.
- Remote access management should be performed by a small set of authorized management system IP addresses.
- There should be a strong password policy for every remote access. For high-risk or attack-prone environments, two-factor authentication or one-time passwords are preferred.
- Any data or information being sent to management systems should be encrypted.
- No root accounts should be used for backups.
- Disk backups are equally as important in the virtualized environment as they are in the traditional one.
- Perform a full system backup once a week and frequent or daily backups of the OS and data
- Encrypt all data sent to a disaster recovery site over the network.
Virtualization is a dynamic and rapidly growing technology that has presented new challenges to most security firms. Therefore, existing mechanisms and the process cannot effectively provide security to the virtual environment and all its components. This is because virtualization is a hybrid of a physically centered network and a new logical or virtual environment. To ensure a strong security posture, additional protection and considerations must efficiently be put in place. The firm needs to plan and have prior preparations on how to handle the security perspective of the new virtual infrastructure and all its components. Virtualization security should be a priority and not an afterthought. | <urn:uuid:41d6d59f-d868-4249-bd54-2a953e3c15b4> | CC-MAIN-2022-40 | https://cyberexperts.com/virtualization-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00279.warc.gz | en | 0.918415 | 3,334 | 2.90625 | 3 |
Logical access, in computer security, is often defined as interactions with hardware through remote access. This type of access generally features identification, authentication and authorization protocols. Logical access is often needed for remote access of hardware and is often contrasted with the term “physical access”, which refers to interactions (such as a lock and key) with hardware in the physical environment where equipment is stored and used.

Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, secure smart cards or tokens, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level.

Government logical access security often differs from business logical access security, as federal agencies may have specific guidelines for controlling logical access. Users may be required to hold security clearances or go through other screening procedures that complement secure password or biometric functions. This is all part of protecting the data kept on a specific hardware setup.
Sustainability is at the centre of public discussion. At COP26, world leaders agreed that greenhouse gas emissions need to fall by 45 per cent by 2030 and that net zero must be achieved by 2050.
To achieve these goals, businesses throughout the nation must continue to minimise and reverse the negative effects of global warming. This can be achieved through actively reducing, reusing, and recycling wasted goods, including waste electric and electronic equipment (WEEE).
Here we will explore the importance of reducing, reusing, and recycling electrical items.
How can you reduce and reuse WEEE?
First and foremost, businesses should think about reducing and reusing electrical waste. If the waste hierarchy is taken into account, elimination comes before anything else. This means thinking ahead and purchasing sustainable equipment with an Electronic Product Environmental Assessment Tool (EPEAT) badge. EPEAT registered products must meet environmental performance criteria that address: materials selection, design for product longevity, reuse and recycling, energy conservation, end-of-life management and corporate performance. It may also mean buying protective gear and prolonging the life of electrical items.
In addition to this, companies can consider the benefits of reusing electrical equipment. If business owners decide to upgrade their computers, they may wish to donate the replaced equipment rather than throwing it away. Not only will this prolong the life of such technologies, but it will also enable charities or organisations to pass these on to people who will benefit from them.
How can you recycle WEEE waste?
In 2014, the UK government introduced the Waste Electric and Electronic Equipment (WEEE) regulation. This has ensured that disposing of any electrical or white goods waste, from televisions to fridges and camcorders, is a legal requirement for consumers and businesses alike.
WEEE recycling is best left to the professionals, as the correct disposal of electrical equipment is of utmost importance.
In the UK alone, we produce two million tonnes of electric and white goods waste annually. Large items, such as fridges, account for over 40 per cent of this. If poorly managed, this waste can accumulate, release harmful gases and further damage our ecosystem.
This is where WEEE recycling comes in: a system that ensures all electrical and white goods items are reused wherever possible. Certain materials such as metals and glass can be recovered, and these can then be recycled and placed back into the circular economy.
What are the dangers of incorrect waste disposal?
The UK produces an inordinate amount of electrical waste. In 2019, this amounted to 1.6 million tonnes, or 23.9kg of waste per person. To ensure the nation disposes of this waste correctly, customers and corporations need to understand the dangers of failing to do so.
Electrical waste has a slow decomposition period
Technology was built to last. We wouldn’t want our mobile phones decomposing in our hands, after all. In fact, electrical waste can take 2 million years to decompose naturally, compared to just 2 weeks for paper. As a result, waste in landfills is not an option for electrical and white products.
This means that incorrectly placing electrical or white goods in landfills can fast become an issue. The sites would no doubt overflow with the latest gadgets, and if the nation continues to amass a large amount of electrical waste, the chances of this happening grow more and more likely.
WEEE waste can contaminate soil and water
In addition to lengthy decomposition timeframes, electrical waste can also contaminate the ground they are buried in. This is because they contain hazardous chemicals, such as mercury.
Electrical appliances over 20 years old can contain asbestos, although it isn’t as commonplace today. Mercury can also be found in mobile phones made before 2006. These chemicals can be harmful to the ecosystem surrounding landfills and contaminate local water sources.
A final thought
Reducing, reusing, and recycling electrical equipment has never been easier. Businesses can access a wealth of information regarding the sustainability of electronics, giving everyone the chance to think with the environment in mind. Once companies decide that an electronic device is of no use, they can always donate these to other places and spaces that need them more. One of the key objectives of the WEEE directive is to reduce the amount of electrical products entering landfill sites in the UK, which is set to be achieved by placing an obligation on producers and manufacturers to provide free and accessible recycling services to their customers.
So what about recycling? In the UK, less than 40 per cent of our electrical waste is collected and recycled by professional and governmental bodies. Therefore, to ensure we build a brighter future for younger generations, sustainable waste management within the private sector is more important than ever.
Whether you’re a small business or a multi-national corporation, you can benefit from partnering with a sustainable waste management company that provides WEEE recycling and the potential to increase social value. As well as this, you can use your status to influence those around you. How will you encourage WEEE recycling? | <urn:uuid:2da138f8-43f0-4c8c-b0e2-78177dcc239c> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/opinions/reducing-your-weee/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00279.warc.gz | en | 0.930242 | 1,034 | 3.46875 | 3 |
Indonesia is one of the most rapidly growing economies in South East Asia. Pundits have identified Technology, Media and Telecommunication (TMT) as among the sectors that are powering this growth. As is commonly the case, the rapid growth has not been followed by robust development on the regulatory side, particularly in the case of specific rules regarding personal data protection.
State of Indonesia’s Personal Data Protection
In order to address this issue, as well as providing umbrella legislation, the Indonesian Government, through the Minister of Communication and Information Technology (“MOCIT”), has taken the initiative to submit a Personal Data Protection Bill (the “PDP Bill”) for further deliberation in the Indonesian parliament.
In addition, in a move which has been termed ‘an interim measure’ pending the enactment of the PDP Bill, the MOCIT has also drafted a Regulation on Personal Data Protection in the Electronic Systems (the “PDP Regulation”). This should not be taken lightly as most of the personal data traffic and exchanges are occurring in the electronic space.
This article will discuss the definition of personal data under the PDP Bill and Draft PDP Regulation, and how they compare with the definition of a sectoral regulation, as well as identifying other potential implications. Considering the status of both regulations, our analysis will not be exhaustive and is subject to the final form of the proposed regulations.
Personal Data Definition
The PDP Bill defines personal data differently from how it is currently defined under prevailing law. An example of such a law is Law No. 23 of 2006 regarding Citizen Administration, as amended by Law No. 24 of 2013 (the “Citizen Administration Law”); the comparison is as follows:
Citizen Administration Law
“Certain personal data of which the accuracy is kept, treated, and maintained, and of which the confidentiality is protected”
PDP Bill
“Every data regarding the life of a person, whether identified and/or can be identified separately or in combination with other information, either directly or indirectly, through electronic and/or non-electronic systems”
The elucidation of the PDP Bill further elaborates personal data as:
a living person’s personal data, including but not limited to full name, passport number, photo or video, telephone number, electronic mail address, fingerprint sample, DNA profile, and so forth, which can be used in combination to enable the identification of a specific person that can lead to illicit disclosure which may weaken his/her right to privacy
The definition of personal data under the Citizen Administration Law is also used in other legislation, including regulations pertaining to electronic systems and transactions, as well as the Draft PDP Regulation. That definition is viewed as overly generic for the purposes of personal data protection, as it fails to set parameters on what constitutes personal data, causing uncertainty as to which types of data are considered personal and therefore deserve protection.
Should the PDP Bill be adopted, there will be a shift from the definition provided under the Citizen Administration Law to the more specific definition under the PDP Bill. We believe the definition under the PDP Bill will provide better clarity and a greater degree of certainty as to what is considered as personal data.
The definition of personal data under the PDP Bill is also closer to that applied in other jurisdictions. For example, the definition of personal data from the European Union is as follows:
any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity
Sensitive Personal Data
A particular feature of the PDP Bill is the introduction of a new classification of personal data, i.e. sensitive personal data. The PDP Bill defines sensitive personal data as:
personal data that requires special protection, which covers data relating to a person’s religion/beliefs, health, physical and mental condition, sexual matters, personal finance, and other personal data that could potentially harm or be detrimental to the privacy of the data subject
The classification of sensitive personal data is purposely restrictive. It can only be collected, processed and disclosed based on written consent from the person that it relates to, and specifically under the following circumstances:
Protection of the person in question;
Employment, medical, and law enforcement purposes;
Requested by authorized institutions for the purpose of performing its functions based on prevailing laws and regulations; or
Is in the public domain due to actions undertaken by the person in question.
While the ‘sensitive’ classification appears to provide an additional layer of personal data protection, the provisions regarding sensitive private data under the PDP Bill may cause complications and confusion in practice because what is considered as ‘sensitive’ is subjective in nature and may vary from one person to another. For example, for many Indonesians, details regarding their religion or belief is not regarded as sensitive and is even clearly stated in their identity card.
The Government might want to reconsider how sensitive personal data is determined. The right to define this might be better reserved for the individual citizen as opposed to being designated by the State. The fact that the right given to the State to add sensitive personal data is open ended (see definition), may also lead to concerns of State abuse in the future. Every person has the right to decide which of their personal data is treated as private and confidential and therefore prohibited from being processed or disclosed to other parties. We note that this approach is what is currently provided in the Draft PDP Regulation. We believe this is a better approach to deciding on the issue of a person’s right to privacy and the use of personal data; i.e. by handing the right of determination to the individual. | <urn:uuid:ab8cfe24-0102-47ed-bcad-560add36ec96> | CC-MAIN-2022-40 | https://www.cpomagazine.com/data-protection/developments-indonesias-personal-data-protection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00279.warc.gz | en | 0.938227 | 1,221 | 2.625 | 3 |
By now, everyone is familiar with the Internet of Things (IoT) where everything big enough to hold an integrated circuit or CPU has one on board somewhere. They’re in our refrigerators, stoves, thermostats, garage door openers, credit cards, smartphones, and probably not unexpectedly, in our running shoes, powered by our own motion, to accurately report how far we walked, calories burned, and distance travelled.
Some might think this is excessive, but we have not yet begun to computerize all that we can. Currently we make computers that are so small, they almost vanish in a spoonful of salt or sugar.
This particular image is of IBM’s latest creation from March of 2018, actually sitting on a tiny pile of salt. It has all the computing power of the x86 series of computer Central Processing Units (CPUs) of the 1990s. The “die” sitting on the fingertip is actually about 40 of these micro-computers which haven’t been separated yet. Want to see it up close? Here’s a (silent) CGI video tour of just one of these tiny miracles, showing the millions of transistors that allow its existence.
Heading for the Future
The progress we’ve experienced since the first substantive computers were invented has been stunning. Back at the dawn of the computer age we expected computers to solve every problem. The original room-filling machines were fully-expected to shrink substantially. We just didn’t know how much.
Contemporary science writers wrote of tremendous technical leaps forward like this one about the incredible ENIAC computer:
“Where... the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have 1,000 vacuum tubes and perhaps weigh just 1½ tons.”
--Popular Mechanics magazine, March 1949, page 258
Getting cheap jeans may be perfectly satisfactory to some, but when it comes to health and safety, that is a different matter. Counterfeit products could be eliminated by use of these tiny 10¢ computers.
The talk is now of using them to mark everything ever manufactured with its own indisputably unique Blockchain code, thus eliminating the possibility of fakes and knockoffs forever. Edible versions could be printed on malaria pills to prevent useless fakes from making their way to disaster zones; liquid versions could absolutely identify real, original wines made by reputable vintners, free of poisonous automobile antifreeze used to artificially enhance sweetness by criminals invading this profitable market. Individual identification for everything may still be a few years in the future, but we’re getting there.
Industrial Internet of Things (IIoT)
The next obvious step for manufacturing was simply to connect all the various bit-and-pieces involved in the manufacturing and assembly process so they could “talk” to each other, sharing vital process information. This is called machine-to-machine (M2M) communication.
Nowadays, tablet-equipped employees will be instantly notified if the last batch of canned peaches didn’t reach pasteurization temperature in the canning process. They’ll know if cans of paint are being overfilled or underfilled, and be able to alter the process without ever laying hands on the machinery itself.
More importantly, however, through the use of Artificial Intelligence (AI), the machinery could respond to these problems as they arose, providing instant, timely solutions (alter the timing, cooking the peaches a little longer until they reach the proper temperature; changing the fill parameters of the paint line) so that no bad product ever gets created in the first place.
Machine Vision has allowed us to teach AIs to interpret data visually. Say, for example, that undersized sheet stock was going into the laser cutter. Instead of cutting 15 out of 20 parts correctly and having five errors, the AI could simply stop the process and demand the proper size material for the job.
This non-human process management would mean responses faster than any human could provide. The increase in efficiency would be manifold, along with the elimination of waste, and consequently, the monetary gains would be multiplied.
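As a rough illustration of this kind of closed-loop control, the sketch below shows the sort of rule an IIoT controller might apply to the two examples above; the sensor limits, values, and action names are invented for illustration.

```python
# Hypothetical closed-loop process checks: limits, values, and actions are illustrative only.
PASTEURIZATION_MIN_C = 72.0      # minimum acceptable batch temperature
SHEET_MIN_WIDTH_MM = 1200.0      # minimum acceptable sheet stock width

def check_batch(temperature_c: float) -> str:
    """Extend cook time until the batch reaches pasteurization temperature."""
    if temperature_c < PASTEURIZATION_MIN_C:
        return "extend_cook_time"
    return "release_batch"

def check_sheet(width_mm: float) -> str:
    """Stop the laser cutter rather than produce scrap from undersized stock."""
    if width_mm < SHEET_MIN_WIDTH_MM:
        return "halt_line_and_request_material"
    return "cut"

if __name__ == "__main__":
    print(check_batch(70.5))    # -> extend_cook_time
    print(check_sheet(1180.0))  # -> halt_line_and_request_material
```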
The most important thing to remember is keeping your employees informed. Their natural fear is that they will be replaced, but this is completely unrealistic. When you increase productivity and increase profits you don’t need fewer people—you need more!
Granted, tasks will change; you’ll need people to monitor, train, and program the machines so the AIs can take over these mundane tasks. The machines are perfectly suited for the uncreative, unrewarding, mind-numbing tasks that are found throughout the manufacturing sector.
People, on the other hand, excel at creative tasks. Your HR Department needs to get out the message that current employees will be retrained for these new requirements at company expense. Current employees are far too valuable to lose, because they already understand the company culture, procedures, and the unique methods or technology that makes a business possible.
The greatest resistance to this level of change comes not from upper management, but from the employees-on-the-floor who are not kept up to date and informed of how they fit into the new venture. It doesn’t matter whether you are the janitor or the CEO. Everyone needs to feel they have a place, are valued, and have security in their job.
We’re all looking forward to the day when we have a personal AI like Tony Stark’s (Ironman) electronic butler, Jarvis…and it is going to happen, eventually, because the technology is inevitable. In the meantime, the future is in our hands right now.
IIoT is not some passing fancy. It is already in use by the most progressive manufacturers. Whether you see it as a steam engine or a modern maglev, this train is leaving the station right now. Get on board, or your competitors will stampede right past you, leaving you in the dust cloud of historical failures.
Taylor Welsh is a content writer for AX Control. | <urn:uuid:34ff9499-5e04-4b59-b685-8798db1ac488> | CC-MAIN-2022-40 | https://www.mbtmag.com/best-practices/blog/13247980/iot-becomes-iiot-and-it-is-changing-everything | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00279.warc.gz | en | 0.94403 | 1,275 | 3.0625 | 3 |
Implementing a unified messaging system in an environment with traditional telephone systems requires different service servers and service gateways which communicate over a network. The server uses the respective gateway to connect to the various communication systems and ensures messages are processed, unified and made available in a central location. Incoming analogue messages are first digitised. In the case of printed information, this may be done using optical character recognition (OCR). Voice messages are converted to sound files or text files.
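As a simplified illustration of how messages from different channels can be unified into one central store, the sketch below normalises incoming items into a single record format; the channel names and fields are assumptions for the example rather than the interface of any real product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedMessage:
    channel: str       # e.g. "voice", "fax", "email" -- illustrative channel names
    sender: str
    body_text: str     # voice messages arrive here after speech-to-text, faxes after OCR
    received_at: str

def normalise(channel: str, sender: str, body_text: str) -> UnifiedMessage:
    """Convert an incoming message from any channel into the common record format."""
    return UnifiedMessage(
        channel=channel,
        sender=sender,
        body_text=body_text,
        received_at=datetime.now(timezone.utc).isoformat(),
    )

# A central "inbox" holding entries from several channels in one place.
inbox = [
    normalise("voice", "+49 30 1234567", "Please call back about the contract."),
    normalise("fax", "+49 30 7654321", "Order confirmation, 12 units."),
]
```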
Since the introduction of Voice over IP telephony and merging speech and data networks, unified messaging systems can conveniently be made available through the cloud. Basic unified messaging functions are standard in many cloud telephone systems hosted online. Users only need an internet connection to access all of the information made available by the unified messaging system over the cloud. Specific hardware such as gateways or servers are no longer required at the business site. | <urn:uuid:34e39f46-6338-4648-a93d-c7e4243970d7> | CC-MAIN-2022-40 | https://www.nfon.com/en/get-started/cloud-telephony/lexicon/knowledge-base-detail/unified-messaging-system | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00279.warc.gz | en | 0.900864 | 178 | 2.75 | 3 |
Cybersecurity is a young and immature field, but it cannot remain so for much longer. We are at a point in time when it is clear that the future will be dramatically different just on basis of technologies that are already in the pipeline. Faster communications, faster computers, mobility with smarter devices, cloud computing, massive data stores, and many other technology trends are not science fiction but reality already being played out.
However, there is no clarity but just uncertainty about what will eventually emerge in the next five, 10 or 15 years. As technologists, it behooves us to develop some foundational principles as we look ahead to cybersecurity in this very challenging environment. There are two principles that I believe are intrinsic to the future of cybersecurity.
Security vs. Productivity
The first principle is that security cannot hold back productivity. Technology that makes us more productive will get deployed and used even if it makes information less secure.
This simple-sounding principle has profound implications. In a fast-changing world with new technologies constantly emerging, we will always be susceptible to compromise and leakage of information in new ways. Therefore, cybersecurity cannot just be about protecting information, but it must address the bigger goal of protecting the overall mission and purpose of the organization.
For example, the compromise of a few user accounts in an online banking application is quite manageable in dollar terms, since the cost savings of having most of the customers online outweigh these losses. The much bigger concern should be loss of confidence in the bank by a large fraction of the customers who may walk to another bank, or those who may walk because levels of security create inconvenience. These concerns, among many others, balance out, leading to different offerings in the marketplace.
This leads us to the notion that eventually the marketplace decides how much security versus risk society will tolerate in cyberspace. The analogy is to mechanisms such as stock values that determine the “true” worth of a company. However, the information on which security-risk tradeoff is assessed is much less mature than the information that drives the stock market.
Risk to the individual may be underestimated by naive consumers who believe the “bank will take care of security.” We have seen safety become an important factor in the automobile market — but only after significant effort by consumer advocates. Similar evolution is likely to occur in cybersecurity.
The marketplace, however, does not solve problems of systemic risk. Financial markets are historically prone to periodic meltdown, the most recent of which we are currently living through.
While the consumer has many choices of automobiles in the market, we are faced with dependence on oil and possible damage to the environment that is not sustainable. In cybersecurity, the market may be the best means to decide security versus risk issues at the individual level, but it may expose us to systemic risks, leaving the system vulnerable to organized and well-funded attackers on behalf of nation states or terrorists.
How do we, as a society, address this problem? Who will be the regulators or defenders against such systemic attacks? One of the big challenges to our society is to figure out answers to these challenging questions that are deployable within existing political and social structures.
Do we wait for a 9/11 in cyberspace before we take these questions seriously? Can we be proactive in addressing these threats? To summarize, the proliferation of new compelling cybertechnologies drives us to market-based resolution of security-risk tradeoffs, but it leaves us increasingly vulnerable to systemic risks.
The second principle is that cyber and physical space will be increasingly entangled to the point where our activities and their impacts will seamlessly transition from one to the other.
Cyberspace came into existence as a means to support our activities in physical space. Data maintained in cyberspace reflects physical reality and helps us control it. This is especially evident in applications such as inventory control, supply chain management and online retail.
Cyberspace is also a container of information and knowledge, and a facilitator for their creation and dissemination. With the coming proliferation of sensors — be they stationary or mobile — cyberspace will increasingly capture data about the physical and social world. This will enable new applications and services that we can hardly dream of today.
By the first principle, there is no question that these will get deployed, regardless of privacy and security risks to the information. The productivity gains will drive adoption. With this tight integration, attacks in cyberspace will more readily spill over into physical space.
The U.S. military, as often is the case, is the first to explicitly recognize this, and it has declared cyberspace to be a war-fighting domain at the same level as land, sea, air and space. This has profound implications for cybersecurity.
Cybersecurity can no longer be delegated to the IT folks or the network administrators; it must be dealt with holistically in the context of the overall mission and objectives of the enterprise across cyberspace and physical space.
Our field is about to get a lot more complex than most of us are used to. We will need cybersecurity professionals with the depth of technical expertise we expect today and even more.
We will also need these professionals to have the understanding of their organization’s purpose and mission, and how to relate to these as part of the cybersecurity function.
Ravi Sandhu is executive director and chief scientist of the Institute for Cyber Security, UTSA. | <urn:uuid:608728af-b27c-46a3-8109-71303c4cc924> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/guiding-cybersecurity-principles-for-a-swiftly-changing-world-68717.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00279.warc.gz | en | 0.951174 | 1,104 | 2.59375 | 3 |
Employees may be a company’s primary liability when it comes to cybersecurity, but they can also be a critical asset. Yes, human error accounts for 84% of data breaches; yet employees are also the reason many questionable links that would lead to ransomware infections never get clicked. Often the difference between a liability and an asset comes down to creating a company culture for security.
Companies that view employees as the problem tend to use fear as a motivator. Others see cybersecurity as a collaborative effort that makes for stronger defenses. While fear may be a short-term fix, it rarely solves the ongoing problem. When companies scare employees with cybersecurity statistics that point to employees as the problem, the employees become paralyzed and conflicted. As a result, they do nothing or as little as possible to avoid punishment.
Instead of viewing people as the problem, companies should see them as their last line of defense. When phishing emails make it through firewalls, virus scans, and spam filters, it’s employees that prevent them from turning into security compromises. That’s why it’s crucial that companies invest in creating a company culture for security that enables employees to successfully defend against cybercriminals.
Establish Security Groups
Companies should select individuals to become citizen security experts. These employees are trained in cybersecurity best practices, including detecting and reporting a possible compromise attempt. The individuals should come from different groups throughout the company, so they can return to their cohort to serve as a security resource.
These citizen experts receive updated training to stay current on all cybersecurity threats. They serve as a resource to the rest of their group. Letting employees ask people they work with about a questionable email is less intimidating than calling someone in IT. The trained staff can share the latest scams and phishing attempts as “stories” rather than examples of how employees failed their cybersecurity responsibilities.
Organizations often create security policies that restrict employee behaviors without removing obstacles that make it difficult to comply. For example, companies may prohibit files from being saved to an external device such as a flash drive. The IT department wants to reduce the possibility of a virus being spread from computer to computer.
The policy is sound, but how are employees going to adhere to it? Sometimes, it’s faster to transfer a file using a flash drive than downloading it from a server. When those instances happen, provide employees with options such as encrypted memory sticks or file compression software to reduce the size of a file.
With employees working remotely, VPNs have become a more secure way to connect employees to the office. However, installing VPN clients can be a challenge for some employees. Employees are overwhelmed when asked to:
- Find the VPN client on the server
- Download the correct version for their computer
- Install the software
- Configure the client
Instead, have the IT department install and configure the software remotely or provide a webinar that takes employees step-by-step through the process.
Giving employees resources that help them execute security policies easily and effectively increases the odds of the policies being followed. For example, clearly documented procedures for responding to cyber incidents should be accessible to everyone. Put them on the intranet or in a knowledge base and let people know they exist. In a highly regulated industry, failure to follow procedures when responding to a possible breach can have a negative impact, resulting in fines and penalties.
A friction point in most companies is passwords. Employees hate to change them and become frustrated when forced to create new ones every few months. People often increment the numbers used in a password, or they simply write them on sticky notes. When they create passwords that are easy to remember, they are most likely creating ones that hackers already know.
Consider providing password managers. Use those citizen experts to help those in their group install and use the software. Once people are comfortable with the software, they are more likely to use it, and the more they use it, they will see how it saves time without sacrificing security.
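When a password manager generates credentials, it is doing something similar to the short sketch below, which uses Python's standard library to produce long, random, unique passwords; the length and character set are illustrative choices.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # a different, hard-to-guess value on every run
```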
Constructing a hardened cybersecurity culture requires collaboration and cooperation. Creating a company culture for security is about more than issuing policies. It’s listening to employees to understand what obstacles stand in the way of strong security hygiene. It balances security protocols with organizational constraints to ensure resources are available. Finding a cybersecurity partner can also help build a cybersecurity culture. At Cask Government Services, we are dedicated to making the workplace secure. Contact us for more information on how we can help. | <urn:uuid:42a3606d-2907-4a35-a257-7f4524be1d4d> | CC-MAIN-2022-40 | https://caskgov.com/your-guide-to-creating-a-company-culture-for-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00279.warc.gz | en | 0.953274 | 917 | 2.640625 | 3 |
12 Mind-Blowing Data Center Facts You Need to Know
Everyone knows that data centers are vital to global connectivity. Our content – from cat videos to financial transactions – is stored and distributed from these data centers 24/7 and we expect on-demand, high quality, and real-time access to it whenever and wherever we need it. But just how big has the data center monster become? Here are 12 fascinating facts about data centers that just may blow your mind.
1) There are over 7,500 data centers worldwide, with over 2,600 in the top 20 global cities alone, and data center construction will grow 21% per year through 2018.
2) By 2020, at least 1/3 of all data will pass through the cloud.
3) The Natural Resources Defense Council (NRDC) estimates that data centers consume up to 3% of all global electricity production.
4) With just over 300 locations (337 to be exact), London, England has the largest concentration of data centers in any given city across the globe.
5) California has the largest concentration of data centers in the U.S. with just over 300 locations.
6) The average data center consumes over 100x the power of a large commercial office building, while a large data center uses the electricity equivalent of a small U.S. town.
7) The largest concentration of data centers in a U.S. city is within the New York-New Jersey metropolitan area (approximately 306 centers).
8) Data centers are increasingly using in-flight wire speed encryption, which keeps your data fully protected from the moment it leaves one data center to the moment it arrives at another.
9) The largest data center in the world (Langfang, China) is 6.3 million square feet—nearly the size of the Pentagon.
10) As much as 40% of the total operational costs for a data center come from the energy needed to power and cool the massive amounts of equipment data centers require. (see our data center power calculator here)
11) Google recently announced plans to build 12 new cloud-focused data centers over a 1.5-year period.
12) By 2020, nearly 8% of all new data centers will be powered by green energy.
For more data center facts and figures, check out the sources below:
- Big Data and Data Center Fun Facts
- Test Your Data Center Interconnect IQ
- Big Data: 20 Mind-Boggling Facts Everyone Must Read
- Leveraging Rich APIs for a New DCI Operational Paradigm | <urn:uuid:4f195ebd-877d-4f89-be11-693c2ef7c73a> | CC-MAIN-2022-40 | https://www.ciena.com/insights/articles/Twelve-Mind-blowing-Data-Center-Facts-You-Need-to-Know.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00279.warc.gz | en | 0.888037 | 529 | 2.625 | 3 |
Blockchain, synonymous with financial transactions and cryptocurrencies, has stormed other sectors of industry too.
The latest technological entry into the healthcare sector, the blockchain module has stirred up quite a controversy.
Is it really the answer to the woes plaguing the healthcare industry? Can healthcare insurance, security, health records, and the drug supply chain all be managed by blockchain?
How does a blockchain work?
As we all know, in a blockchain, each and every transaction is recorded and stored. It is a decentralised register of ownership. So, every device that is connected with the blockchain stores a copy of this block in encrypted form.
It is a time stamped record of digital events in the chronological order that is shared on a peer-to-peer network.
The network participants or the nodes have the copy of the blockchain. They are authorised to validate the digital transactions. Any transaction has to be validated by the majority of the members if it is to be added permanently to the shared ledger.
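As a rough, minimal sketch of the hash-linking and timestamping described above (consensus and peer-to-peer networking are omitted), each block records a hash of its predecessor, so altering an earlier entry breaks every later hash; the record fields are invented for illustration.

```python
import hashlib
import json
import time

def make_block(data: dict, previous_hash: str) -> dict:
    """Create a timestamped block whose hash covers its contents and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Illustrative chain of healthcare events; patient and event values are placeholders.
chain = [make_block({"event": "genesis"}, previous_hash="0" * 64)]
chain.append(make_block({"patient": "P-001", "event": "lab result recorded"}, chain[-1]["hash"]))
chain.append(make_block({"patient": "P-001", "event": "prescription issued"}, chain[-1]["hash"]))

# Tampering with an earlier block invalidates every hash that follows it.
```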
How a blockchain will work for the healthcare sector?
In healthcare, blockchain offers easy access, security, integrity, privacy, and scalability.
The data will be shared and stored with all the authorised providers in a secure and standardised way. Since the data is encrypted, there would be no fear of data tampering.
All the dealings and records right from the patient history, results, treatments, medication, and healthcare insurance will be transparent and easily accessible.
Let us look into what aspects the blockchain technology can provide solutions and how it can be implemented.
1. Blockchain in Storing of PHI (Patient Health Information)
The main challenge of healthcare providers is to manage the voluminous health data on a regular basis. As the volume keeps increasing year by year, the medical organisations have to process and store the health information and see to it that it is protected.
The data includes Patient Health Information (PHI), health records, medical insurance claims, payment invoices, and other things.
The blockchain technology will prove beneficial to maintaining the health records as it provides security. With strong encryption techniques and decentralisation, data cannot be altered in the blockchain.
Moreover, the healthcare units have to adhere to the standards of HIPAA and other regulatory boards. The blockchain module helps medical organisations to verify the PHI integrity and ensure regulatory compliance. The data remains unaltered and safe with a timestamp verification which in turn reduces the medical audit expenses.
Another issue regarding the healthcare is the existence of duplicate or erroneous records. But in a blockchain, each party has a record linked back to the original thus removing any chances of duplicates or errors. The records can be updated without duplicating information by the concerned authorities.
2. Blockchain in Drug Supply Manageability
Drug counterfeits have rapidly made it to the shelves in developing countries. The medical businesses lose up to $250 billion annually due to the counterfeit drug racket. The fake drugs may prove to be dangerous to the health of the patient.
Using blockchain technology, the drug movement right from the manufacturer until the distributor and finally to the drug store is all recorded. Thus, the path is traceable thus bringing in transparency in drug supply chain.
It makes it easy to detect fraudulent drug transactions along the path.
3. Blockchain in Clinical Trials
In clinical trials, there is a lot of data involved in the test results, the statistics, the materials required, and the trials and so on.
There are many vested interests who want to secure the data and the results. Also, collaboration and management of the clinical trials become tough since each scientist is working on a specific task.
With the blockchain technology, the data remains secure. Since the data is timestamped from all the system nodes, the proof-of-existence can be readily available. So, any third-party cannot patent the drug wrongfully taking credit for it.
Also, the data cannot be modified since the data in a blockchain is stored in a secure way.
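The proof-of-existence idea can be illustrated with a plain hash plus timestamp: only the fingerprint of a trial document is anchored, so the content stays private while its existence at a point in time can later be verified. This is a simplified sketch, not the protocol of any specific platform.

```python
import hashlib
from datetime import datetime, timezone

def proof_of_existence(path: str) -> dict:
    """Fingerprint a trial document without revealing its contents."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "timestamp": datetime.now(timezone.utc).isoformat()}

# The returned record would be written to the shared ledger;
# the document itself never leaves the lab.
```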
4. Blockchain in Health Insurance
The health insurance is generally plagued by a lack of trust between the insurance providers and the consumers.
The consumers suffer due to the processing delays. Also, when the patients change the plan, doctors or the insurance providers, the data will not be readily and completely available.
The health insurance providers also have the challenging task of managing the patient data. Any improper documentation and storing give rise to process mismanagement. This results in a problem of trust. Also, the sharing of information amongst various stakeholders like the insurance providers, policyholders, doctors, hospitals, tax authorities, and the regulators becomes cumbersome and limited due to the poor sharing processes.
The blockchain helps in removing all these problems. Data can be easily accessed and shared in blockchain among the various stakeholders in the network. Since the data is in a distributed ledger, there is no risk of loss of data or improper documentation. This will not only reduce the processing time but also the processing cost will be reduced.
Smart contracts will make the process claims faster and easier. This will help in establishing a trust factor between the health insurance providers and the consumers.
The road ahead
Though in its nascent stage, blockchain has a great potential provided the medical organisations are actively willing to explore and participate in its immense possibilities.
The road ahead is certainly not going to be smooth. It may take certain years for the stakeholders to be fully aware and satisfied with the blockchain technology. But the start has definitely begun.
Ankit Patel, Project/Marketing Manager, XongoLab Technologies (opens in new tab)
Image Credit: Zapp2Photo / Shutterstock | <urn:uuid:f7c706e6-10ee-41eb-beda-ff9e910a4a4f> | CC-MAIN-2022-40 | https://www.itproportal.com/features/the-top-healthcare-areas-impacted-by-blockchain/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00279.warc.gz | en | 0.928395 | 1,143 | 2.765625 | 3 |
The main purpose of using home & garden pesticides is to eliminate insects that could have a harmful effect on human health. Growing incidence of insect-borne diseases is one of the key factors responsible for the growth of the home & garden pesticides market ecosystem. West Nile Virus is a cause of fatal mosquito-borne diseases, and it has become a major health concern in several countries across regions such as North America, Europe, and Asia Pacific, among others. It can cause neurological diseases in humans that ultimately lead to death. Further, there are no vaccines available for humans, as of now.
Health officials in the U.S. have warned the public about the arrival of the West Nile virus in the country. In 2018, the country witnessed more than 2,600 cases of West Nile viruses, of which about 200 cases occurred in the state of California. In order to prevent mosquito bites, the Center for Disease Control and Prevention (CDC) has urged people to use insecticide products recommended by the Environmental Protection Agency (EPA). Further, bed bug infestation is on the rise in major cities in the US, such as Baltimore, Washington D.C., Chicago, Los Angeles, Dallas, and New York, to name a few. These factors have resulted in increased sales of home & garden pesticides in the country, and the impact of this trend is expected to increase further in the coming years.
Get a Sample Copy of Report @ https://www.alltheresearch.com/sample-request/404
In the base year 2018, the U.S. and Germany were the major developed countries contributing to the market demand for home & garden pesticides, collectively contributing about XX% market share of the global market demand. Gardening is a key factor that is augmenting market growth of home & garden pesticides in the U.S. The residents of the country have shown a greater interest towards indoor planting as most of them live in apartments. The trend of gardening has become extremely popular among millennials, as they have embraced this practice as one of their prime hobbies.
Hanging gardens are another popular trend in the U.S. Further, the onset of global warming has resulted in dramatic changes in temperature levels in the U.S. According to NASA, droughts in the South West U.S. and long spells of heatwaves are to intensify in the coming years, and summer temperatures are projected to rise constantly. Precipitation levels are expected to increase, along with a strong forecast for category 4 & 5 hurricanes. These factors are expected to pave the way for insect infestation in the country, which in turn, will propel the market growth of home & garden pesticides.
The onset of global warming has also affected Europe. Soaring temperatures during summertime has created a record in countries such as Netherlands, the U.K., Germany, and Belgium, among others. The U.K. witnessed a temperature of 38.7 degrees in July 2019, which is unlike any previous happenings. Further, Netherlands, Belgium, and Germany also recorded temperatures exceeding 40 degrees. Such abnormal temperatures are an ideal condition for the proliferation of insects along with insect-borne diseases, thus creating opportunities for the growth of the home & garden pesticides market in Europe. However, the market is expected to witness a minor slowdown due to several bans imposed on synthetic pesticides by the EU.
The trend of using home and garden products has witnessed an upward growth trajectory over the past few years, and this growth is most prominent among millennials. Further, this trend is also noticeable among baby boomers, as approximately 40% baby boomers consider gardening as a leisure activity. Consumers living in urban areas where there is usually less space in and around their homes have opted for small space gardening, which typically includes hanging gardens and herb gardens. These are some of the factors that have augmented the demand for home & garden products, which in turn, have stimulated the demand for home & garden pesticides.
India has witnessed a favorable increase in both per-capita income and per-capita expenditure, and the trend is likely to continue in the coming years as well. Owing to the growth in per capita income and expenditure, the demand for home & garden products has witnessed an upward growth trajectory and is likely to remain the same in the coming years as well. Further, due to rising pollution levels in the country, especially in the metropolitan cities, consumers have become extremely conscious about their health and wellbeing. This has led people to opt for outdoor and indoor gardening, which has given rise to the demand for home & garden pesticides. | <urn:uuid:438cc9bd-ab70-440c-ae9c-8ff8106e0f51> | CC-MAIN-2022-40 | https://www.alltheresearch.com/press-release/evaluating-home-and-garden-pesticides-market-ecosystem-with-global-and-regional-aspects | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00479.warc.gz | en | 0.963881 | 921 | 2.859375 | 3 |
A new IoT botnet branded RapperBot has been active in the wild since mid-June.
“This family borrows heavily from the original Mirai source code, but what separates it from other IoT malware families is its built-in capability to brute force credentials and gain access to SSH servers instead of Telnet as implemented in Mirai,” a report from FortiGuard states.
The name comes from a URL to a YouTube rap music video embedded in the code of earlier versions. Once recruited into the botnet, infected devices target Linux-based SSH servers, with a suspected 3,500 unique IP addresses involved in the brute-force attempts.
To reduce the impact of brute force attacks, there are several simple steps that can be taken. MFA adds an extra layer of security: even if the password is successfully obtained, MFA will prevent further access. Implementing a lock-out after a defined number of unsuccessful attempts reduces the number of guesses an attacker can make within a short time frame, and this can also be linked to alerts that flag the unsuccessful attempts.
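A minimal sketch of such a lock-out stage is shown below: failed logins are counted per account and further attempts are refused once a threshold is reached within a time window. The threshold and window are example values only.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # example threshold
WINDOW_SECONDS = 15 * 60  # example time window

failed_attempts = defaultdict(list)   # account name -> timestamps of failures

def record_failure(account: str) -> None:
    failed_attempts[account].append(time.time())

def is_locked_out(account: str) -> bool:
    """True if the account has exceeded the failure threshold inside the window."""
    cutoff = time.time() - WINDOW_SECONDS
    recent = [t for t in failed_attempts[account] if t >= cutoff]
    failed_attempts[account] = recent
    return len(recent) >= MAX_ATTEMPTS   # at this point, also raise an alert
```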
To make it harder to brute force a password, it is essential to ensure the use of complex passwords and keep them unique to every account. The use of password managers can help with this as they can generate unique and complicated passwords and store them securely, so you do not have to remember every single password. | <urn:uuid:6b73eb6f-1bb1-44e3-b2de-c9f268e32f36> | CC-MAIN-2022-40 | https://neuways.com/neu-cyber-threats-11th-august-2022/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00479.warc.gz | en | 0.940308 | 289 | 2.53125 | 3 |
The abbreviation SSID stands for Service Set Identifier. This is the unique name that identifies a wireless network. It is in the packet header when a data packet is transmitted. The devices on the Wi-Fi network use this identifier for communications via the network. The name is up to 32 alphanumeric characters in length and is case sensitive. A company in one physical location may have several WLANs, and that means the business will also have wireless access points using different SSIDs.
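The two practical consequences of that definition — the 32-character limit and case sensitivity — can be shown with a tiny sketch; it follows the article's simplification that names are plain text.

```python
def is_valid_ssid(name: str) -> bool:
    """Accept names of 1-32 characters, per the length limit described above."""
    return 1 <= len(name) <= 32

def same_network(ssid_a: str, ssid_b: str) -> bool:
    """SSIDs are case sensitive, so 'OfficeWiFi' and 'officewifi' are different names."""
    return ssid_a == ssid_b

print(is_valid_ssid("Warehouse-Floor-2-Guest"))   # True
print(same_network("OfficeWiFi", "officewifi"))   # False
```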
Beaconing or SSID Broadcasting
A wireless network access point will broadcast its availability using its service set identifier. This SSID broadcast displays the Wi-Fi network name, though access to the network can be limited through security measures. The broadcasting of an SSID may also be called beaconing. Password-protected wireless networks broadcast the SSID but require the correct password for network access. SSID broadcasting can also be disabled when required.
A Service Set Identifier Is the WLAN Name
The wireless network name, the Service Set Identifier, is the reason that data is delivered to the proper destination. The SSID differs from a router name. When looking for a wireless network, users will usually see the router or station name, which should not be confused with the 32-character SSID. Every packet that moves over a WLAN has the network service set identifier. Without this identification, data delivery might not reach the correct destination when multiple wireless networks are operating in one area. Comms Express representatives are prepared to explain the specific features on popular wireless equipment. | <urn:uuid:bc1336dc-6730-48c6-a0bf-d12e5d8a53e7> | CC-MAIN-2022-40 | https://www.comms-express.com/infozone/article/ssid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00479.warc.gz | en | 0.90159 | 315 | 3.359375 | 3 |
At the heart of any electronic device is a processor (or processing unit) that controls it. There are several flavors of processors, each designed with different purposes in mind. The following are the most prevalent types:
- CPU (central processing unit)
- GPU (graphical processing unit)
- FPGA (field-programmable gate array)
- ASIC (application-specific integrated circuit)
A CPU is basically the standard, vanilla-flavored processor. It can have one or several processor cores, each of which processes information sequentially. It is used in ordinary computers and is the default option for any application that doesn’t require high bandwidth or very efficient use of resources.
GPUs were created to handle large amounts of graphical throughput, although they are used for other applications with a similar need to process a lot of data simultaneously. A GPU consists of thousands of processor cores to enable significant parallel processing. However, in applications that cannot provide a near-constant stream of high-bandwidth data, too many cores sit idle, resulting in an inefficient solution that wastes power and can cause high latency.
FPGAs are by definition the most versatile processors. At its simplest, these are processors that can be programmed at the hardware level to fit the application’s need. In fact, FPGAs can be reprogrammed in the field as well (hence, the “field-programmable” part of their name). This is a big advantage in any application where the need for changes are anticipated – for example, in new applications where there are evolving standards and requirements. Like GPUs, FPGAs also employ parallel processing, but without the penalty for periods of low throughput.
ASICs are the opposite of FPGAs in terms of programmability. These are fully optimized for a particular application and cannot be changed once produced. For applications with large enough scale and relatively unchanging requirements, this lack of flexibility and the high cost of design and production of ASICs is deemed worthwhile because the end result is a processor that is tailor-made for the intended use.
When it comes to processors of any kind – whether CPUs, GPUs, FPGAs, or ASICs – there are several ways to quantify their performance. At the risk of oversimplification, let’s examine just four:
- Memory capacity – how much information can be stored on the chip
- Compute – how well (fast) information can be processed within the chip
- I/O bandwidth – how quickly information can be ported in and out of the chip
- Memory bandwidth – how quickly the chip can read and write memory
While all of these features are fundamental and exist in all types of processor, different applications have different priorities, which can be served better by placing a higher focus on one aspect or another.
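One rough way to see which of these limits matters for a given workload is to compare its arithmetic intensity (operations per byte moved) with the ratio of a chip's compute rate to its memory bandwidth, in the spirit of a roofline model; the device figures below are placeholders rather than the specifications of any particular part.

```python
def bound_by(ops: float, bytes_moved: float, peak_flops: float, mem_bw: float) -> str:
    """Classify a workload as compute-bound or memory-bandwidth-bound (roofline-style)."""
    arithmetic_intensity = ops / bytes_moved   # useful work per byte of memory traffic
    machine_balance = peak_flops / mem_bw      # operations the chip can sustain per byte
    return "compute-bound" if arithmetic_intensity > machine_balance else "memory-bandwidth-bound"

# Placeholder device figures: 10 TFLOP/s of compute, 400 GB/s of memory bandwidth.
print(bound_by(ops=2e9, bytes_moved=8e9, peak_flops=10e12, mem_bw=400e9))
# -> memory-bandwidth-bound
```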
In applications that require a lot of compute power, such as weather modeling, simulations, and semiconductor design, high compute CPUs or GPUs work best. In other cases, there may be minimal computational requirements, but large memory capacity is needed.
FPGAs in Networking Applications
When it comes to edge networking and other low latency applications, FPGAs are the most attractive option. The parallel processing trait of FPGAs enables them to handle the complex networking functionalities of a telecom network with competitive performance, while allowing for future changes since FPGAs can be reprogrammed in the field.
Timothy Prickett Morgan made a similar argument on The Next Platform in his in-depth overview of the upcoming Xilinx Versal high-bandwidth memory (HBM) device: He explained that “many latency sensitive workloads in the networking, aerospace and defense, telecom, and financial services industries simply cannot get the job done [without HBM devices].” He also quoted a Xilinx senior product line manager who pointed out that CPU-based HBM devices do not include a hardware switch, which means they are obligated to cannibalize some of the internal software to achieve this. (A switch is necessary to connect all ports to all sections of the internal memory.) Using an FPGA means that some of the hardware logic can be designated as a complete working switch without relying on software. This lowers the power consumption and latency of the device.
At Ethernity, we are stalwart proponents of the idea that FPGAs are the perfect building block for networking cards and appliances. For over 18 years, we have developed and improved upon our proprietary FPGA flow processor technology to fully harness the power of FPGAs for networking. In tests we ran comparing software running on white-box servers (CPUs) to our FPGA-based accelerated solutions such as the ACE-NIC100 SmartNIC, we found that the FPGA-based solution takes up less overall space because it requires fewer cores, uses significantly less power leading to much lower operational costs, and provides deterministic 100Gbps performance with less than 3 microseconds of latency.
Thus, it remains true that FPGAs offer the performance of ASICs with the flexibility of software running on CPUs. | <urn:uuid:5597d90d-6833-409a-ba39-d15aaa360fdb> | CC-MAIN-2022-40 | https://ethernitynet.com/advantage-using-fpgas-telecom-edge-networking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00479.warc.gz | en | 0.938033 | 1,061 | 4.125 | 4 |
Smart business owners know that you lock the doors and activate the security system when you leave your physical location for the night. You take these steps to ensure that burglars can’t access your inventory, money, computer, and business network. In the same way, you need to always protect your digital location—your website— from criminals. With a website security certificate, you can add a layer of protection to prevent hackers from accessing your sensitive data.
At EZComputer Solutions, we’ve been helping small businesses like yours protect their proprietary information with our advanced cybersecurity solutions. One way we do that is by ensuring that your website has an SSL certificate. This security measure not only helps keep hackers out of your network but can also improve your search rankings with Google.
Get started today with cybersecurity services from EZComputer Solutions to ensure that your business and your customers are protected from cybercrime.
What Does SSL Stand for?
SSL stands for secure socket layer. It’s also sometimes called a TLS certificate, an HTTPS certificate, or an SSL server certificate. No matter what you call it, it all means that a website is secure, authentic, and legitimate.
What is an SSL Certificate?
An SSL certificate is a digital stamp of approval from an industry-trusted third-party certificate authority (CA). This certificate is basically a digital file with information from the CA that verifies a website is secure and uses an encrypted connection.
You’ll know secure websites from unsafe ones by looking at the web address at the top of your browser. Secure sites start with HTTPS versus HTTP and show a closed padlock icon. Sites without a current validation certificate may show a red error icon or say “Not Secure.” A website with a proper SSL certificate demonstrates to its users and customers that it is a legitimate website, asserts the identity of the website, and reassures the customer that their personal information is protected through an encrypted connection.
SSL works by creating public and private keys that encrypt data. Hackers can’t get to the data because they don’t have the proper key for decryption. A video from HubSpot explains this exchange well.
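For readers who want to look at a certificate directly, Python's standard library can retrieve one over a TLS connection and report when it expires; the host name below is only an example.

```python
import socket
import ssl

def cert_expiry(host: str, port: int = 443) -> str:
    """Fetch the server certificate over TLS and return its 'notAfter' expiry string."""
    context = ssl.create_default_context()          # validates the certificate chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert["notAfter"]

print(cert_expiry("example.com"))   # e.g. 'Jan 15 23:59:59 2026 GMT'
```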
Why Does My Website Need a Security Certificate?
Your website needs a security certificate for several reasons. The most important of which is cybersecurity. One of the most common cyberattacks is a man-in-the-middle attack. Without a website security certificate, your website is vulnerable to a cybercriminal intercepting and stealing the data passed from your users to your web server–data like email addresses, passwords, credit card information.
Having a website security certificate and robust security protocols in place from a managed IT service provider, like EZComputer Solutions, can help protect your business from these costly attacks.
Establishes an Authentic Connection
Another reason you need a security certificate is to build trust between your business and your customers. Data privacy is a primary concern among consumers. If you operate an eCommerce website, customers want to know that their name, address, phone number, email address, and credit card information are safe to give to you. Without the SSL certificate, your customers will get a warning not to use your site because it’s not secure.
Makes Your Site Reachable
If customers navigate to your website without an SSL certificate, or if a certificate is revoked or expired, the padlock icon will either be open or have an X through it, depending on the web browser. Browsers like Google Chrome and Internet Explorer will also display a page saying there’s a problem with the site if the website security certificate is not valid. Customers will likely then click off your website, never to return because of security issues.
Impacts Your Ranking in Google
A fourth reason you need a website security certificate is that it is a ranking factor for Google. In 2014, Google announced that, as part of its goal of making the internet safer, having a domain validated through a CA and a web address that begins with HTTPS would be a ranking factor. In practice, this means that if two web pages are otherwise equal and one is secured while the other isn’t, the secured page will rank higher.
What Types of SSL Certificates Are There?
There are three main types of SSL certificates, depending on the level of security your business requires. The lowest level is generally less expensive than the highest level. If users exchange private information with your company via your website, you likely need a higher website security certificate. Choose from these types of SSL certificates:
- Domain Validated (DV) Certificates: Best for blogs or small companies that don’t exchange customer information.
- Organization Validated (OV) Certificates: An OV certificate will work well if you have lead-capture capabilities.
- Extended Validated (EV) Certificates: Ideal for a business that conducts financial transactions online.
How Much Do SSL Certificates Cost?
SSL certificates range from free to thousands of dollars. For larger websites and businesses, the cost of encrypting vast amounts of data is high. However, small business owners may use a free or low-cost option. Our web design company, EZMarketing, installs an SSL certificate on every website we build. Contact EZMarketing today if you need a new, secure website.
Protect Your Business Today with Cybersecurity Services from EZComputer Solutions!
Cybersecurity is not a topic to be taken lightly. An SSL-secured website is the first step toward ensuring that the data shared between your company and your customers stays private. You also need robust cybersecurity services from a reputable IT company, like EZComputer Solutions.
Our cybersecurity solutions keep a watchful eye on your business communications 24/7, so cyberattacks are quickly discovered and resolved. We can help you get a website security certificate to keep your website safe, establish trust with your clients, make your site reachable, and rank in Google. Get started today to see how EZComputer Solutions can help your business. | <urn:uuid:a515b819-6ace-4d24-af99-737846922e49> | CC-MAIN-2022-40 | https://www.ezcomputersolutions.com/blog/website-security-certificate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00479.warc.gz | en | 0.924733 | 1,236 | 2.65625 | 3 |
When we talk about using “Big Data”, we normally mean using “Social Media Data”. In fact, it’s so common to talk about big data as narrowly referring to data generated by social media platforms that the two terms have become almost synonymous in both tech and the social sciences.
The tools available for working with big data also reflect this problematic elision. When we talk about gathering big data sets, we talk about the best social media scraping tools. Comparing the security of big data platforms typically means comparing social media security protocols. And the marketing insights that big data analysis is (rightly) valued for are often social-media specific as well, such as the observation that visual content and social media are well matched.
The problem, as we will explain in this article, is that social media is not “real life.” Despite this, because of the ease of working with social media data, it has become the almost exclusive source of data for big data analysts. This is a problem because when we think we are dealing with information on our customers, too often we are still actually dealing with meaningless data points.
Our Online Lives
There is a cliché that our lives are now lived entirely online. Like most cliches, it contains a grain of truth: our communication, and the way in which we interact with brands, has been hugely affected by the digital revolution. Nonetheless, there are many reasons to believe that the “people” on social media are not real people at all.
One is that, as much recent research has found, the views that we express on social media are not representative of the spectrum of opinions we actually hold. Related to this fact is another – that the articles, groups, people, and brands that we interact with online only represent a tiny fraction of the information we are exposed to. We shouldn’t forget, after all, that traditional news media still has a much greater reach, both in terms of demographics and geographical area, than even the biggest social media platforms.
As a result, trying to make predictions on consumer behavior based on social media data is difficult even if it’s assumed that all users are acting in good faith. In addition, it is almost impossible if they are not. A growing percentage of social media users now use pseudonyms, or simply fake accounts, in order to preserve their online anonymity. Doing so defeats the ability of big data analysts to generate any value from social media data.
How “Big” is “Big”?
Why, then, has social media data become so closely associated with the idea of big data?
Well, there is one obvious reason. Social media data is certainly “big.” Or at least it is by some metrics. Social media platforms tend to advertise how much data they have in terms of the number of petabytes currently sitting on their servers: by 2014 Facebook advertised that its data warehouse held more than 300 petabytes and grew at a rate of 4 petabytes per day.
This makes them extremely attractive for researchers and analysts looking for data that is truly “big.” For comparison, the entirety of the New York Times’ total output from 1945 to 2005 consisted of just 5.9 million articles totaling a scant 2.9 billion words, whereas a month of the Twitter Decahose in 2012 contained 2.8 TB of data, including 112.7 GB of text containing over 14.3 billion words.
The problem with measurements like this is that they don’t capture how meaningful such datasets are, or even how much of this data is unique. In the NYT, an article appears only once, and might contribute 1000 words to the total data available for analysis. In comparison, a single 10-word Tweet, shared 100,000 times, will be counted as a million words of data. Analyzing this dataset will certainly allow a big data analyst to claim they are working with a large dataset, but it’s likely that their conclusions will be of a fairly mundane type: a particular tweet was shared a lot.
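A toy calculation makes the point. The snippet below (Python, with invented numbers matching the example above) shows how a single widely shared post inflates a raw word count without adding new information:

```python
# Hypothetical miniature "dataset": one 10-word post shared 100,000 times.
post = "breaking news everyone is talking about this one weird trick"
shares = 100_000

raw_tokens = post.split() * shares      # what a naive size metric counts
unique_tokens = set(raw_tokens)         # what is actually new information

print(f"Raw word count:    {len(raw_tokens):,}")     # 1,000,000
print(f"Unique word count: {len(unique_tokens):,}")  # 10
print(f"Redundancy:        {len(raw_tokens) // len(unique_tokens):,}x")
```

A million “words” of data; ten words of information.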
Data vs. Information
In proposing a way to overcome this difficulty, it’s worth revisiting a distinction that you probably last heard as a freshman – between “data” and “information.” Without getting into the technical aspects of information entropy and related fields, we can say that “information” is generally unique, useful, and insightful, and data may not be.
This distinction is particularly appropriate when it comes to analyzing social media datasets, because many of them are largely made up of fairly meaningless data. You might be able to see, for instance, how many times a particular news article has been shared, and by whom, but in a world where the majority of links shared on social media are never even read by the person sharing them, merely blindly forwarded on by title alone, this doesn’t mean much.
Unfortunately, genuinely meaningful information is sometimes hard for big data analysts to come by. In order to improve the ability of our big data systems to make accurate predictions about consumer behavior, we desperately need to widen our scope beyond social media data. Data on healthcare interactions, for instance, or text-processing techniques that can actually extract meaning from social media posts, would help.
In fact, it’s tempting to conclude that big data won’t really come of age until it can start comparing social media datasets with those generated from other sources. Our dependence on these datasets is explicable – after all, social media platforms make their money from making them accessible – but is problematic nonetheless. In other words, if data can indeed improve creativity, we need to start getting creative with our sources as well as the way we process big data. | <urn:uuid:45adc7c7-c6b4-4ced-84af-ae666b6c0ea8> | CC-MAIN-2022-40 | https://www.crayondata.com/big-data-collection-beyond-social-mining/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00479.warc.gz | en | 0.954657 | 1,194 | 2.734375 | 3 |
Railways are the bedrock of the modern world. For nearly 150 years, rails have provided the mobility needed to drive global economic growth and support social development. Today the mission of rail is more significant than ever as railways feature prominently in efforts to combat climate change through decarbonization and efficiently transport growing populations and their goods. The recent panel discussion Exploring the North American Rail and Transit Markets at Hitachi Social Innovation Forum 2021 highlighted how the transformation of rail and transit infrastructure is evolving.
Urgent Rail-Related Challenges on Many Fronts
Paving the road to Mobility 4.0 is ultimately at the heart of the societal transformation required to progress toward a more sustainable future.
Changes in society, such as growing populations, increased urbanization, and climate change are having a profound impact on the way we think about mobility, the domain of moving large numbers of people and things around the planet. A host of new initiatives have now become far more urgent. These include creating faster, cheaper intercity transport, reducing reliance on cars in city centers, smart solutions to manage the flow of large numbers of people in densely populated areas, and rethinking how to power rolling stock that makes up the rail system. These initiatives are critical to help us meet global decarbonization and sustainability goals.
A better, more efficient rail system is an important part of the solution for many of these challenges, and progress is already being made on this important task. According to the American Public Transportation Association 2020 Fact Book, public subways currently produce 73% less CO2 emissions than private cars for a given commute. Providing green transportation alternatives and increased access to public transportation results in increased ridership and reduces carbon emissions significantly.
Improving privately owned infrastructure such as rail freight systems will also be crucial to making progress on climate change. Freight rails are three to four times more fuel-efficient than trucks. Private business in the U.S. spends more than $25 billion per year maintaining and upgrading its freight railroad infrastructure, which in turn creates 1.5 million jobs, $200 billion in economic output, and over $26 billion in tax revenue.
Companies that make eco-friendly decisions promoting decarbonization will have an impact in the coming century and beyond, adding critical momentum toward a cleaner, more sustainable future. Forward momentum is also seen as governments, investors, and large firms apply pressure to promote greener business initiatives. A rise in demand for more sustainability from consumers is also increasing expectations for progress out of concern for the future of our world.
In the United States, the proposed bipartisan infrastructure bill reflects this momentum, with provisions that include:

- $85 billion to modernize and expand existing transit
- $80 billion for Amtrak to repair, modernize and expand service
- $35 billion for decarbonization initiatives
- $25 billion for regionally beneficial projects
- $20 billion programs to reconnect neighborhoods
- $50 billion National Science Foundation grants for basic research.
The bi-partisan infrastructure bill is critical to the future of our country and the mobility of Americans. We are confident the stakeholders will agree on a final form that will benefit all and enable the above.
Hitachi’s Role in Improving Rail and Mobility Infrastructure
The importance of rails has the world’s attention, and Hitachi Rail is well-positioned to play a leading role in meeting many of these challenges.
First, Hitachi Rail has been around for 140 years. It has the scale to make a difference on hundreds of large projects. Hitachi Rail has more than 12,000 employees in 38 countries with 11 globally distributed manufacturing sites. More than 18 billion journeys are completed each year using Hitachi Rail technology.
Hitachi Rail’s scale is amplified by the benefits it receives from the R&D investments made by other Hitachi divisions and the leading-edge capabilities it leverages from units like Hitachi Vantara, which provides solutions for data management and advanced analytics.
Hitachi Rail is a global leader ready to tackle global problems.
Second, Hitachi Rail is not only large, but it is also fully vertically integrated, able to take a project and all its components from the cradle to the grave. Hitachi Rail manufactures a full portfolio of vehicles for any given transportation situation that includes rolling stock, commuter trains, autonomous trains, streetcars, high-speed trains, inner-city systems, and monorail.
The same is true for train control systems. Hitachi Rail makes systems for localized train control, centralized train control, state-of-the-art communication-based train control systems, European standard ERTMS systems, and more traditional track circuit-based systems. These control systems have what is needed to support GOA 3 and GOA 4 autonomous operation, which is increasingly in demand.
Hitachi Rail’s portfolio also includes the systems needed to support and enhance mobility, including solutions in smart ticketing, human flow applications, and cybersecurity. Hitachi Vantara is an important partner in providing these systems.
As a result of its large collection of solutions, Hitachi Rail can provide an integrated, complete solution with next-generation digital systems and an integrated supply chain to serve the transit market. Once installed, Hitachi Rail can then operate, service, and maintain these systems, supporting large complex projects from initial concept to end of life. Hitachi Rail is one of only a few companies in the entire world that have this level of capability.
Third, Hitachi Rail is a leader in research and innovation for the mobility and rail domains, providing insights that are charting the path forward to the future of mobility. That future is based on five pillars, including a focus on reducing hardware infrastructure, providing related services, moving systems to the cloud, expanding the role of autonomous trains, and making maintenance far easier. All of this is supported by smart factories that themselves embrace these properties. A continuing stream of acquisitions expands Hitachi Rail’s portfolio with new innovative capabilities. An example of this is Hyperdrive, a U.K. battery manufacturer that is supporting Hitachi Rail as it develops next-generation solutions that transition diesel locomotives to alternative clean power sources. Another example is the recent acquisition of Hitachi ABB Power Grids, now Hitachi Energy, which provides next-generation power grid technology.
Fourth, Hitachi Rail has made having a meaningful impact on decarbonization and climate change a tenet of its corporate mission. Hitachi Rail is a world leader in developing next-generation power solutions for transit and freight that reduce carbon emissions through the use of smart power grids and on-board energy storage. Hitachi Rail initiatives to replace diesel locomotives and make existing traction power grid systems more efficient to operate go a long way toward improving our environment: these initiatives make systems more energy efficient, use less carbon fuel, and are an important part of the Hitachi Decarbonization Strategy.
The practical impacts of Hitachi Rail’s scale and breadth can be seen in current projects around the world such as:
- Sustainable Mobility: Designing, building, operating, and maintaining the first fully automated, driverless urban transit systems in Honolulu, Panama, Lima and other places around the globe.
- Revitalized Metro Systems: New metro cars and CBTC system for the Miami Metro Rail and the Baltimore Metro Subway Link based on state-of-the-art technologies. More recently, we were awarded the largest resignalling contract in North America with Bay Area Rapid Transit (BART) to provide a next-generation CBTC system for its entire network.
- Connecting Cities: The Intercity Express Programme in the U.K. increases capacity, improves reliability, and reduces environmental impact across the cities served by Great Western and East Coast main lines by introducing vehicles as a service to train operating companies.
- Trains of the Future: Next-generation high-speed commuter trains using Shinkansen and European technology are 30% more efficient and are constructed using 95% recyclable material. In addition to our new vehicle contracts with Miami MetroRail and the Baltimore Metro, we were recently awarded a contract with Washington Metropolitan Area Transit Authority (WMATA) to deliver their next-generation 8000 Series vehicles.
- Next-Generation Solutions: Fully autonomous trains for Rio Tinto, supported by AI, on-demand operations, and asset management tools with predictive maintenance systems. A great example of this is the monorail system we are delivering in Panama. This next-generation autonomous solution connects the city across the canal.
As an industry leader, Hitachi Rail, globally and in North America continues to grow our business by delivering next-generation, turnkey-systems solutions for the rail transportation market. We offer a full line of solutions, including systems, vehicles, operations and maintenance. These innovative products are focused on efficiently moving people and goods by rail across the city and the nation, and they are an important part of the Hitachi Rail Decarbonization Strategy.
Allan Immel is Turnkey System Business Development Lead for North America at Hitachi Rail. | <urn:uuid:5223974f-c8bc-445a-a625-71e4173051e4> | CC-MAIN-2022-40 | https://www.hitachivantara.com/blog/how-rail-transit-investment-leads-way-to-greener-planet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00479.warc.gz | en | 0.930257 | 1,831 | 2.96875 | 3 |
As the COVID-19 pandemic continues to spread across the globe, many people are worried that the current vaccination may not be sufficient to curb the spread of the virus. While the successful development of effective vaccines has been one of the greatest weapons against the pandemic, Delta and other new variants of the virus are still making victims in the US and abroad. With clusters of people still unvaccinated throughout the country, and concerns about waning vaccine immunity on the rise, booster shots are emerging as a way of increasing immune response and providing better protection against the virus.
However, multiple questions are posed when considering boosters. Is the immune protection offered by the normal vaccines really dropping? And if so, are boosters the only option available? While social distancing measures are indeed relevant, vaccines are now considered the best option for dealing with the COVID-19 pandemic. Can the same be said for the booster doses? Is mixing COVID-19 vaccines and boosters recommended or should the same vaccine act as the third dose? American scientists and policy-makers are struggling to provide the public with answers, while biopharma companies are working to provide both boosters and cures.
Waning vaccine immunity
According to the Centers for Disease Control and Prevention (CDC) getting the vaccine protects people from falling ill or becoming indisposed with COVID-19. Vaccines are considered the best method available for protecting the public and ensuring communities are safe. However, some reports indicate that the immune protection offered by initial doses of the vaccine decreases after several months, although protection against severe illness, hospitalization, and death may remain high for a longer period of time. According to the CDC, as of October 20, 57.1% of the total US population was fully vaccinated, while only 5.9% had also received a booster shot.
While studies support that fully vaccinated individuals should still abide by social distancing measures, they also indicate that booster shots have now become necessary for those with a greater risk of developing a severe reaction to the virus. Therefore, the Food and Drug Administration (FDA) has authorized booster shots for the Moderna and Johnson & Johnson COVID-19 vaccines, targeted at older and high-risk individuals (after Pfizer’s was previously authorized on September 22). “The available data suggest waning immunity in some populations who are fully vaccinated. The availability of these authorized boosters is important for continued protection against COVID-19 disease,” said Acting FDA Commissioner Janet Woodcock.
Mixing and matching COVID vaccines
While many studies point to fading immunity, others indicate that mixing and matching COVID-19 vaccines may be a better option than administering the same type of shot. According to a report from the National Institutes of Health (NIH), those vaccinated with one of the three coronavirus shots authorized in the US could enjoy better results from receiving a different booster dose than their first. That is why the FDA also authorized the use of “mix and match” booster doses for currently available COVID-19 vaccines. According to the federal agency, the mixing and matching method has positive effects that ultimately outweigh the potential risks.
The FDA admits that both healthcare providers and COVID-19 vaccine recipients will have more questions about booster doses and the new method of administering them. The agency says that the individual fact sheets provided for each available vaccine will ultimately help healthcare providers make the right decision. However, the decision made by the FDA in October will not only change the future of the fight against COVID in the US, but it will also influence other countries around the world to make a similar decision. According to the Financial Times, this influence could ultimately increase demand for Pfizer and Moderna’s mRNA vaccines in locations that have previously used viral vector vaccines, like AstraZeneca.
As the COVID-19 pandemic continues to play an important role on the world’s stage, the development of vaccines and cures is more important than ever, and so is providing both healthcare providers and vaccine recipients with clear instructions on how to administer them. For now, booster shots are limited to people over 65 years old, to those between 18 and 64 who are more likely to experience worse outcomes from COVID-19, and to those with frequent institutional or occupational exposure. However, with immune protection decreasing for others in the following months, the federal agency might soon extend its recommendations.
Finding renewable energy sources for huge data centers is a daunting challenge. It’s a far more complex issue than reflected in recent headlines, in which the environmental group Greenpeace International has bashed Facebook over its power choices for a new data center the social network is building in Oregon.
In its stinging critique of Facebook's power sourcing, Greenpeace asserts that "the only truly green data centers are the ones running on renewable energy." Given that stance, one might expect Greenpeace's hosting operations to be housed in a "truly green data center" powered entirely by 100 percent renewable energy.
You'd be wrong. Although Greenpeace has taken steps to account for the carbon impact of much of its IT infrastructure, some of its servers are housed in data centers powered primarily by coal and nuclear power.
RECs, Offsets and Wind-Sourced Power (Mostly)
Greenpeace hosts its main web site in a Global Switch data center in Amsterdam. Gary Cook, a Climate Policy Advisor for the Greenpeace CoolIT Campaign, says Greenpeace chose the site because Global Switch bought renewable energy certificates (RECs) to offset the carbon output of its data center facility.
“We’re definitely trying to run the greenest operation we can,” said Cook. “We’re buying RECs because we want to put our money where our mouth is.” The organization’s U.S. operations include about 30 servers housed in its Washington D.C. office, which is supported by wind power purchased from West Virginia, Cook said.
But Greenpeace also has a number of servers in a colocation center in northern Virginia. “They’re using whatever the grid mix is in Virginia,” said Cook, who added that the colo deal was arranged about five years ago. “At that point in time, there weren’t providers that met our requirements (for renewable energy). We’re in the process of reworking some of our IT infrastructure, and we’ll clean that up.”
Most data centers in northern Virginia are supplied by Dominion Virginia Power, which gets 46 percent of its production from coal, 41 percent from nuclear, 8 percent from natural gas, and just 4 percent of its power from renewable generation.
Greenpeace criticized Facebook for its decision to locate its new data center in Prineville, Oregon, where the facility will receive utility power from PacifiCorp, which relies on coal for the majority of its power generation.
A Higher Standard?
Is Greenpeace holding Facebook to a higher standard than it applies to its own Internet operations? Cook says the data center industry’s largest power users have a higher obligation to use renewable energy to power their servers.
“We’re drawing attention to Facebook’s practices because Facebook has a much bigger energy choice to make because of the size of its data centers,” said Cook. “Ultimately, we need to be driving more and more toward renewable energy sources and use less coal. These are really important investments.
"We don’t want these data centers to unintentionally increase demand for coal," Cook said. "If you’re building data centers in places that will lead to increased demand for coal-based electricity, that’s a problem. We’re trying to challenge the data center sector to provide IT solutions with as low a carbon output as possible."
Are Offsets Enough?
What if Facebook simply bought renewable energy certificates to offset the carbon overhead from its utility provider, as Greenpeace has done for its hosting operation?
"If you offset those emissions with RECs, that’s better than doing nothing, but you’ve still increased demand for a load center that’s dependent on coal," Cook said. "A lot of these companies are trying to do the right thing. They need to be transparent about the carbon problem. We’re really looking for stronger leadership."
Is Power Sourcing The Right Focus?
We have previously noted the growing interest in using renewable power in data centers, and the growing prominence of utility energy sourcing in site selection issues. Cook made it clear that Greenpeace intends to remain engaged on the issue of data centers using renewable energy.
It's a development foreseen by Nick Carr back in November 2006. "As soon as activists, and the public in general, begin to understand how much electricity is wasted by computing and communication systems – and the consequences of that waste for the environment and in particular global warming – they’ll begin demanding that the makers and users of information technology improve efficiency dramatically," Nick wrote. "Greenpeace and its rainbow warriors will soon storm the data center – your data center."
That moment has arrived. But is Greenpeace storming the wrong data center? In seeking to shame Facebook for difficult choices about utility energy sourcing, is Greenpeace targeting a company that should be seen as an ally, not an enemy?
'As Efficiently As Possible'
That's Facebook's view, which was outlined in a detailed response to Greenpeace. In building its own data center, Facebook is investing more than $180 million in improved energy efficiency. The company says the new facility in Prineville will allow Facebook to be "greener" than the third-party LEED Platinum and LEED Gold data centers where it has leased space during its growth phase.
“It is simply untrue to say that we chose coal as a source of power,” Facebook said in response to Greenpeace. “The suggestions of ‘choosing coal’ ignores the fact that there is no such thing as a coal-powered data center. Similarly, there is no such thing as a hydroelectric-powered data center. Every data center plugs into the grid offered by their utility or power provider.
"It’s true that the local utility for the region we chose, Pacific Power, has an energy mix that is weighted slightly more toward coal than the national average," Facebook added. "However, the efficiency we are able to achieve because of the climate of the region and the reduced energy usage that results minimizes our overall carbon footprint. Said differently, if we located the data center most other places, we would need mechanical chillers, use more energy, and be responsible for more overall carbon in the air—even if that location was fueled by more renewable energy.
"Facebook’s commitment is, regardless of generation source, to use electricity as wisely and as efficiently as possible.”
In making headlines with its critique of Facebook, Greenpeace has accomplished part of its goal by making renewable energy a front-of-mind issue for large companies building new data centers. The resulting headlines have raised awareness. But what comes next? The "truly green" power Greenpeace yearns for would likely require data center operators to enter the renewable generation business at utility scale, which is outside the business scope of most social networks.
In the meantime, power sourcing likely continues to move up the check list of data center site selection criteria, as industry groups step up their engagement with utility providers, as The Green Grid has recently pledged.
Here's a look back at our previous coverage of Facebook's energy efficiency and its Prineville data center:
- Facebook Goes Green With New Data Center Space: The fast-growing social network’s infrastructure isn’t just getting bigger, it’s getting greener with its leasing of LEED Gold and Platinum data centers.
- Should Servers Come With Batteries? The data center team at Facebook believes it should, and is pledging to share its best practices as it presses its case for change.
- Facebook to Build Its Own Data Centers: The company has grown to the point where the economics favor a shift to a custom-built infrastructure.
- It's Official: Facebook is Oregon's Company X: Facebook says the 147,000 square foot Prineville data center is expected to have a Power Usage Effectiveness (PUE) rating of 1.15..
- Facebook's Green Data Center: Powered By Coal? While Prineville-area utility Pacific Power gets some hydropower from BPA, its primary power-generation fuel is coal.
- Facebook Responds on Coal Power in Data Center: Cites LEED Gold design and a growing mix of renewables in future power sourcing.
- Facebook's Response to Greenpeace: "We’re thrilled at our choice in Oregon and that we’re challenging the industry to think creatively to meet the standards we’ve set in efficiency." | <urn:uuid:4f089281-dce4-41e3-b9e3-d27787b8ecec> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/archives/2010/03/03/greenpeaces-hosting-not-truly-green/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00479.warc.gz | en | 0.949109 | 1,748 | 2.625 | 3 |
Data is arguably the most important resource of this decade; you can gauge its importance from the fact that companies spend heavily to collect data about their users. Data can be treated as an asset: it can be bought and sold, and it plays a role in almost every part of our daily lives. From YouTube to Facebook and Instagram, our data is used everywhere. Have you ever wondered how YouTube is able to show personalized ads? It is a result of data science.

Data science was ranked as the most lucrative profession in 2020, and the demand for data scientists is only expected to grow over the coming decades as the amount of data each internet user generates keeps increasing. So today, we are going to discuss the pros and cons of data science as a profession and some online courses that could help you become successful in this area of computer science.
Advantages and disadvantages of Data Science:
Following are some benefits and drawbacks of data science.
Job security:

With many careers, by the time you are trained for the work, demand has already declined. Data science is different: it is one of the most rewarding and secure careers because demand is not going anywhere in the coming decades. In fact, according to some predictions, the demand for data scientists will only increase over time.
No repetitive work:
Another great benefit of data science is that you would not have to do repetitive work instead you will be working on interesting and innovative projects. The satisfaction you get after completing these mind-blowing projects is another story.
High salary:

Thanks to the huge demand for data scientists in the market, companies are willing to pay them very high salaries. According to Glassdoor.com, the average salary of a data scientist is around $115,000, with the lowest reported at $82,000 and the highest at $165,000.
You will transform the world:
It may sound strange, but most modern technology and future advancement will rely heavily on data science, because data is how we learn about the world. We can learn about almost anything happening in the universe, provided we have enough data available. So it can be said that data scientists will transform the world.
Versatility:

Unlike many other professions, data scientists are not confined to a single field. A data scientist can work in robotics, artificial intelligence, medicine, machine learning, natural language processing, and much more.
Learning throughout your life:
We all know that technologies change very quickly; in 2021, you would hardly find a radio in any house. The same happens in data science: old technologies become outdated and new ones are introduced to the market over time. Because of this, you will have to keep learning and staying up to date throughout your career.
The barrier of entry:
Data science is a broad field that incorporates numerous subfields such as artificial neural networks, machine learning, deep learning, data mining, and artificial intelligence. You need a working knowledge of these technologies to land a job in data science; not all of them have to be mastered, but you should specialize in a few. This raises the barrier to entry, which is why most data scientists have master's or Ph.D. degrees.
Job satisfaction level:
Although data scientists generally earn well, in one survey 22% of data scientists said that they felt underpaid, and only 8% said that they earn more than their colleagues. This is worth considering, because job satisfaction matters a great deal in corporate life.
Six Best Data Science Courses:
We have prepared a list of some of the best courses available online which you should definitely take if you are interested in data science. So, let’s take a look at them.
Data Science Dojo is a leading online data science and deep learning teaching platform with more than 6,500 alumni, a number that suggests it is a reliable and established provider. You can find a range of courses on their website, including a data science bootcamp, a practicum, introduction to R, data science with Python, deep learning in Python, introduction to SQL, introduction to data engineering, data science courses for experienced professionals, and data skills for business.
A great reason to choose Data Science Dojo is that they provide a certificate for your course. Getting a certificate in the course will not only help you stand out among others but will also help your resume be shortlisted. A verified certificate will increase the credibility of the skill you will receive from the course.
Although Data Science Dojo teaches valuable and lucrative skills, not everyone can afford the fees it charges. To address this, Data Science Dojo helps fund course fees for passionate students, and you can also arrange a student loan and start learning without worrying about the cost up front. The courses below are offered by Data Science Dojo.
| Provider | Course | Price | Notes |
| --- | --- | --- | --- |
| Data Science Dojo | Dojo Guru Practicum | $2,799 / $2,999 / $9,999 | If you are interested in Data Science Dojo, visit their website; you can also download their app on the Google Play store. |
If you have ever searched for resources to learn programming or advanced data science, then you have probably come across Datacamp.com. You can not only take data science courses on Datacamp.com but also learn programming languages such as Python, R, and SQL. You can also find some of the best advanced data science and introduction to deep learning courses there.
The free package gives you access to the first chapter of each course, Python, R, and SQL assessments, and projects, and you can change your plan whenever you want. Their paid plans are also not very expensive, so anyone with a true passion for data science can spend a few dollars on learning.
Like Data Science Dojo, Datacamp.com will also provide you with a certificate, and we all know its importance. You will have to complete some assignments and coding challenges in order to earn it. Because Datacamp.com is a reputable and well-established learning platform, its certificates add credibility to your skills and can even help you get a job more quickly. The following courses are available on Datacamp.com.
| Provider | Course | Price | Notes |
| --- | --- | --- | --- |
| Datacamp.com | Various data science and programming tracks | Contact them for pricing | Visit their website for details. Datacamp.com also has an app available on the Google Play store and App Store. |
If you are an absolute beginner with no knowledge of programming or data science, then Edureka.com can be a perfect option for you. All the courses provided by Edureka.com are specifically designed to help beginners. But if you already have a good grounding in data science and some previous experience in programming, then you might find Edureka.com of little use.
One drawback of Edureka.com is that the certificate it provides carries little weight and will not help you get a job; you can treat it as just a piece of paper. A better approach is to learn the basics from Edureka.com and then move on to the more advanced courses on the other websites mentioned in this article. That way, you will not only get familiar with the fundamentals of programming and data science but also learn the advanced material that will help you land a job. Beginners should still consider its courses to build their knowledge.
| Provider | Courses | Notes |
| --- | --- | --- |
| Edureka | Data Science Certification Course using R; Data Science Certification Course using Python | There are so many courses available on Edureka that we advise going to the website and viewing all of them. |
GreyCampus.com is another well-known name in the data science community. GreyCampus.com claims that its alumni work at reputable companies including Tesla, JPMorgan, Amazon, Ford, Apple, Microsoft, and Samsung, and that 87% of its alumni have successful careers.
GreyCampus.com offers a number of courses, such as Python for data science, an online data science course, R for data science, introduction to data engineering, and programming in other languages; you can browse them on their website and choose one for yourself. The fact that most of their alumni are well-settled suggests the courses are genuinely helpful.
One thing to appreciate about GreyCampus.com is that its courses are not short like those on many other websites. Most of the courses you will find online, especially in data science, require only 30-40 hours, and that is clearly not enough time to learn hard skills like data science. The data science course provided by GreyCampus.com is six months long, which means you will cover every concept in detail. The details of their data science course are written below.
| Provider | Course | Price | Notes |
| --- | --- | --- | --- |
| GreyCampus | Data science career program | $2,500 | You can visit their website for more details. |
CodingNinjas.com is a small platform compared to the other websites on this list, but it still offers plenty of courses on different programming languages and on data science, catering to everyone from beginners to professionals. Its data science course is one of the better online options, and you can also take an introduction to deep learning course on CodingNinjas.com, which will give you an insight into neural networks.
Another benefit of CodingNinjas.com is that they provide personalized coding problems; by solving them, you will improve your problem-solving ability in the field you are interested in. The following data science courses are available on CodingNinjas.com.
| Provider | Courses | Notes |
| --- | --- | --- |
| CodingNinjas | Machine learning course; Data science and machine learning | In a nutshell, CodingNinjas.com is helpful for everyone and worth a visit. CodingNinjas.com also has an official app on the Google Play store. |
Flatiron School is another excellent place to start learning programming, deep learning in Python, and advanced data science, and it provides one of the better Python courses for data science. Flatiron School says that it has an 86% global employment rate and that its average on-campus graduate starts at a salary of $75,000. For context, the average starting salary for a Python developer is around $65,000, noticeably less than $75,000, which suggests Flatiron School is worth considering.
Like GreyCampus.com, Flatiron School's courses are extensive, and you will learn a lot from a single course. They do not offer a large number of courses, but the ones they offer are detail-oriented and focus on practical skill development rather than theory. Compared with other courses, Flatiron School's are expensive, but the price is reasonable given the value they provide. Flatiron School is therefore a great choice for aspiring data scientists and machine learning engineers. The following courses are available at Flatiron School.
| Provider | Course | Price | Notes |
| --- | --- | --- | --- |
| Flatiron School | Data Science | $16,900 (with $500 deposit) | You can visit their official website for more details. |
We have discussed the importance of data science and some of the best online courses for mastering it. The future will likely depend heavily on data science, and most modern technologies, such as self-driving cars, robots, quantum computing, and artificial intelligence, will rely on it. By taking the courses mentioned above, you can start your practical career as a junior data scientist, but the journey of learning never stops. Be prepared to learn, and to earn, a lot.
Disclaimer: This article was originally posted on Dataconomy.com in 2014, covering the spread of the Ebola outbreak.
With the Ebola threat still looming large, widespread efforts are being made to identify, quarantine, and treat possible carriers. BioMosaic, a big data analytics program developed by the Centers for Disease Control and Prevention (CDC) in collaboration with HealthMap, will aid the CDC in monitoring new cases and working with the West African expat community.
Marty Cetron, director of CDC’s Division of Global Migration and Quarantine, explained at a public discussion on Tuesday – “We have the near real-time availability of the global air transportation network, and we’re able to identify, and in a sense target, the risk populations, the diaspora populations from Liberia, Sierra Leone and Guinea, where they’re distributed down from the county-and-below levels, so we have a mosaic map of the U.S., and in some cases with other countries’ data.”
CDC, HealthMap, Boston Children’s Hospital, and Toronto-based BlueDot (formerly, BioDiaspora), worked together to see this app to completion. Utilizing census, demographic and migration health data of expat populations in the U.S. from 105 countries of birth, it can track the spread of infectious diseases globally, while also breaking down the population by education level, household income, and English speaking ability, reports MedCity News.
BioDiaspora had previously developed an online tool charting the spread of infectious diseases through international travellers. BioMosaic, building on BioDiaspora, maps census data, migration patterns, and health status, and identifies countries where international travel may give rise to emerging disease.
“CDC layers many data sets atop one another to create this mosaic map of the diaspora population both on the move and statically in terms of the resident population,” Cetron points out. “There are a number of big data sets that we access and aggregate, the common feature is that all of them are geo-coded,” Cetron said.
“So we bring in weather data, climate data, we bring in global distribution of poultry, we bring in distribution of swine populations, vector disease incidents from [the World Health Organization] and other sets, and pull all these things together and then put them in a way that they can be easily visualized or queried.”
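Conceptually, this kind of layering is just a join of geo-coded tables on a shared region key. The snippet below is a toy illustration in Python with pandas; all region codes, column names, and figures are invented for the example and are not BioMosaic data.

```python
import pandas as pd

# Invented geo-coded layers keyed by a shared region code.
cases = pd.DataFrame({"region": ["LR-01", "SL-02"], "new_cases": [12, 7]})
travel = pd.DataFrame({"region": ["LR-01", "SL-02"], "weekly_arrivals": [340, 95]})
diaspora = pd.DataFrame({"region": ["LR-01", "SL-02"], "diaspora_pop": [15000, 4200]})

# Merge the layers into one "mosaic" table that can be queried or mapped.
mosaic = cases.merge(travel, on="region").merge(diaspora, on="region")
mosaic["arrivals_per_case"] = mosaic["weekly_arrivals"] / mosaic["new_cases"]
print(mosaic)
```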
The CDC distributed health kits with thermometers and mobile phones at airports, along with briefings on recording symptoms and instructions for using the kit. An estimated 100 mobile phones would allow the CDC to exchange information with users for a month.
Asked about the calls the CDC has handled for the program, Cetron said people queried about a fever but it turned out to be unrelated to Ebola. It has also helped direct people to local health facilities.
He further added in an Q&A with Peter Beinart of The Atlantic- “Epidemics of disease are frequently followed by epidemics of fear … and stigma. The epidemic of fear is understandable given the nature of this disease. But we need to make sure we get the balance right when we speak to the media…The disease needs to be controlled at the source.”
As deepfake technology becomes more potent, experts warn that “deepfake geography” may be an increasingly pressing problem.
Globally, the alternative data market reached around $1.6 billion in 2020, with a report from Research and Markets estimating that it will reach $11.1 billion by 2026. Alternative data is useful because it falls outside the more traditional data sources used by organizations.
Consultancy firm Deloitte highlights the value of alternative data, especially for investment management (IM) firms.
"In the near future, IM firms will likely use news feeds, social media, online communities, communications metadata, satellite imagery, and geospatial information—to name a few data sets—to augment their traditional processes for securities valuation as the rule, rather than the exception," the authors write.
For this data to be valuable, however, it needs to be accurate and reliable. Research from the University of Washington suggests that geospatial imagery might be the latest victim of deepfake attacks. The researchers highlight the rise in what they refer to as “location spoofing”, which is when geospatial images are faked to mislead people.
What’s more, as deepfake technology becomes more potent, they warn that “deepfake geography” may be an increasingly pressing problem. The researchers set out to try and construct reliable ways of detecting fake satellite images, and call for a reliable means of fact-checking geospatial images. They warn that this is beyond simply photoshopping images to resemble the real thing, as deepfake technology can look incredibly realistic.
Of course, deliberate inaccuracies have been a part of mapmaking from their earliest times, due in large part to the inherent difficulties in translating information from real life into a map form.
While most inaccuracies are simple and honest mistakes, so-called “paper towns” are deliberate insertions of features such as mountains, rivers, and even cities into a map to avoid copyright infringements.
One notorious example was the inclusion of the fictional cities of Goblu and Beatosu in the official highway map from the Michigan Department of Transportation in the 1970s. The head of the department inserted them as a joke to promote his alma mater, as plays on “Go Blue” and “Beat OSU”, while simultaneously protecting the copyright of the map.
Digital information systems
These japes seem almost quaint in an era dominated by digital mapping systems, such as Google Maps and Google Earth, which make the task of location spoofing far harder. The widespread use of these tools also means that such manipulation can have grave consequences, as the National Geospatial Intelligence Agency highlighted in 2019.
The Washington researchers examined how easy and effective deepfakes could be for satellite images. They applied the same framework that had been used to manipulate other forms of imagery to the field of mapping, and found that the algorithm was able to learn the crucial characteristics of each satellite image before generating a fake image, based on those learned characteristics, onto a new base map. The researchers explain that it’s a similar approach to software imposing a human face onto a cat.
They then took maps and satellite images from Seattle, Beijing, and Tacoma and combined them so the features could be compared. The algorithm then created new images of one of the cities based on characteristics from the other two: Tacoma was selected as the base map, and features of Seattle and Beijing were used to incorporate new elements into the deepfake map of Tacoma.
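On the detection side, a common starting point in the literature is a small convolutional classifier trained on labelled real and generated tiles. The sketch below is not the University of Washington team's model; it is a generic, minimal PyTorch example showing the shape of such a detector, with random tensors standing in for an actual labelled dataset.

```python
import torch
import torch.nn as nn

class TileDetector(nn.Module):
    """Minimal CNN that classifies satellite tiles as real (0) or generated (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TileDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a labelled batch of 64x64 RGB tiles.
tiles = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

loss = loss_fn(model(tiles), labels)
loss.backward()
optimizer.step()
print(f"Toy training loss: {loss.item():.3f}")
```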
This week Hamburg, Germany’s second-largest city, announced its decision to ban single-serve coffee pods from government-run buildings.
According to a CNNMoney report, city officials said that they could no longer allow taxpayer money to be spent on products that don’t meet Hamburg’s high sustainability standards. For instance, a city spokesperson said that each pod contains 3 grams of waste for every 6 grams of coffee.
Hamburg’s new guidelines specifically cite Keurig’s K-cups and a similar product from Nestlé.
The ban is yet another footnote in a string of bad news for single-serve coffee pod manufacturers —particularly for Keurig Green Mountain.
According to The Atlantic, for example, “enough K-cups were sold that if placed from end-to-end, they would circle the globe 10.5 times.”
The fact that Keurig’s K-cups cannot be recycled in most places has contributed to much of the “Kill the K-Cup” backlash.
Additionally unhelpful is that Keurig has promised to make all K-Cups recyclable but not until 2020.
Is Hamburg’s ban yet another sign of the slow demise of the single-serve coffee pod?
Comment below or tweet me @MNetAbbey. | <urn:uuid:d054dbe4-b06e-4fa4-a37d-a37aa9aaef5d> | CC-MAIN-2022-40 | https://www.mbtmag.com/global/news/13105457/germanys-secondlargest-city-just-banned-singleuse-coffee-pods | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00679.warc.gz | en | 0.929052 | 289 | 2.515625 | 3 |
Protect Data from Ransomware: What is Ransomware? It’s a malware attack that encrypts specific files on your computer and on mapped drives! How does ransomware spread?
The most common way is via mail attachment. Specific file types in your network drives and the local computer will get encrypted when you open the Ransomware attachment from your mail. What is the impact of Ransomware? You won’t be able to access the files which are encrypted.
Think about this from an enterprise perspective: most of our machines have access to at least a couple of network drives/file shares, and these file shares are mapped to your machine.
All those files (with specific file types) will get encrypted, and to decrypt them, you need to pay ransom money to the hackers! These kinds of attacks are increasing day by day.
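There is no substitute for tested offline backups, but one cheap, defensive trick is to watch a few “canary” files on the file shares and raise an alarm the moment they change, since mass encryption usually touches them early. The Python sketch below is purely illustrative: the paths are hypothetical, and a real deployment would integrate with your monitoring and alerting stack.

```python
import hashlib
import os
import time

# Hypothetical canary files sitting on a mapped network drive.
CANARY_FILES = [r"Z:\shared\canary1.docx", r"Z:\shared\canary2.xlsx"]

def fingerprint(path: str) -> str:
    """Return a SHA-256 hash of the file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

baseline = {p: fingerprint(p) for p in CANARY_FILES if os.path.exists(p)}

while True:
    for path, original in baseline.items():
        if not os.path.exists(path) or fingerprint(path) != original:
            print(f"ALERT: canary file changed or removed: {path}")
            # A real responder might disconnect shares or page the on-call admin here.
    time.sleep(60)
```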
Altaro is organizing a webinar to explain what ransomware is, how to prevent it from happening on your Hyper-V file servers, what methods exist to recover impacted Hyper-V hosts (file servers) from ransomware, and real-world infections and resolutions (and failures!).
Free webinar is scheduled for 23rd Aug 2016 2PM CEST / 1PM BST (RoW) OR 10AM PDT / 1PM EDT (US).
“Beating Ransomware – Real Stories & Best Practices”
Anoop is a Microsoft MVP! He is a Solution Architect in enterprise client management with more than 20 years of experience in IT (as of 2021). He is a blogger, speaker, and Local User Group HTMD Community leader. His main focus is on device management technologies like SCCM 2012, Current Branch, and Intune. He writes about ConfigMgr, Windows 11, Windows 10, Azure AD, Microsoft Intune, Windows 365, AVD, etc.
Mining and Exploration Industry Unearthing Opportunities of Location Intelligence
The mining industry is implementing location data to help save lives and improve performance.
FREMONT, CA: Mining methods have been used to pull important minerals and ore from the ground since ancient times. For years these processes were labor-intensive and hazardous. But as new data-driven technologies are developed, mining companies now have safe and precise methods of finding and removing materials from the earth. Collecting data, including location data, from every mining system (fleet management, plant control, vehicle sensors, maintenance systems) and blending it for actionable insights can quickly answer critical questions. Read on to know more.
Mineral exploration needs and utilizes a diverse range of data types and techniques, including satellite and geophysical images, geologic maps, and a variety of databases. Over its lifecycle, a mining exploration project creates a vast amount of spatially significant data. Through better tracking of equipment components, polling of health parameters, and other measures, mining firms can improve equipment uptime and lower the cost of mine operations. Mining corporations have several assets and infrastructure dispersed over vast locations. Monitoring the location of devices in relation to active sites and maintenance facilities will allow organizations to answer vital questions.
Monitoring the location and use of beneath ground devices ensures efficient, safe, and optimal performance. Mining firms employ a huge number of staff, many of whom are employed on a shift or block roster basis. Keeping up-to-date with their availability and location, relative to specific project sites, is vital. Tracking the location of competitor equipment, resources, and infrastructure can determine their capacity to compete for contracts and new locations. Additionally, being able to track the location of safety incidents can reveal patterns and help unearth unsafe work practices.
Location data have the ability to help mining organizations streamline operational efficiencies by averting risk and saving time. However, much of this location-based information, capable of offering significant advantages, is dispersed across many information systems. A solution with robust potentials can pull this information together to uncover hidden trends, relationships, risks, and opportunities. | <urn:uuid:ece3e2c5-778a-469f-a1ed-65ba9ba0cdbe> | CC-MAIN-2022-40 | https://www.cioreview.com/news/mining-and-exploration-industry-unearthing-opportunities-of-location-intelligence-nid-34057-cid-69.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00679.warc.gz | en | 0.936405 | 432 | 2.703125 | 3 |
Sending Quantum Sensors to the Moon with Q-CTRL
(AzoQuantum) Q-CTRL has announced that it will be providing quantum sensors to aid space exploration. The sensors will be sent into earth orbit, to the Moon, Mars, and possibly beyond, marking the first time quantum sensing and navigation systems have been used for space exploration.
Q-CTRL will work in conjunction with Fleet Space Technologies, a nanosatellite start-up and founder of the SEVEN SISTERS consortium, which will load the sensors aboard their sophisticated satellites.
Sustainability is the watchword for future space missions, so making the leap to long space stays requires finding as many 'in-situ' resources as possible. The quantum sensors, including quantum-based gravity detectors and magnetic field sensors, will search for mineral deposits and liquid water from a nanosatellite located above the lunar surface.
Gravity-based sensors can detect tiny changes in a gravitational field of a planet or a moon that indicate a change in density. This, in turn, could indicate the presence of mineral deposits or even liquid water.
While the most advanced conventional gravimeters currently available work by the 'bobbing' of a tiny mass, usually a silicon chip, quantum versions use atomic interferometry to measure tiny variations in gravitational field strength.
This means that at the heart of Q-CTRL's system is a cloud of atoms trapped in a vacuum chamber. A laser is used to place the atoms in a superposition that can be described by a wavefunction. While in this superposition, an atom can simultaneously occupy two energy levels, one 'low' and one 'high'.
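As a rough, textbook-level illustration (not taken from Q-CTRL's announcement), the sensitivity of such a light-pulse atom interferometer is usually summarized by the phase it accumulates in a gravitational field:

```latex
% Standard Mach-Zehnder light-pulse atom interferometer relation (illustrative only)
\Delta\phi = k_{\mathrm{eff}}\, g\, T^{2}
% \Delta\phi : interference phase read out from the atom cloud
% k_eff      : effective wavevector of the laser pulses that split and recombine the atoms
% g          : local gravitational acceleration
% T          : free-evolution time between pulses
```

Because the phase grows with the square of the free-evolution time, even tiny changes in g, such as those produced by a buried ore body or subsurface water, shift the interference pattern measurably.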
Q-CTRL says that their focus on engineering founded on quantum phenomena results in applications that weren’t possible just a few years ago. Biercuk predicts that the geospatial intelligence services the start-up is developing will ultimately find widespread use in defense and even climate change mitigation. | <urn:uuid:a7e01185-f56e-4737-8dc9-da042274511f> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/sending-quantum-sensors-to-the-moon-with-q-ctrl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00679.warc.gz | en | 0.915557 | 416 | 2.890625 | 3 |
The U.S. National Research Council (NRC) is recommending planetary science missions for the decade 2013-2022 that could provide important new clues about our solar system.
After sorting out budget issues, five expert panels selected research priorities through a rigorous review that included input from planetary sciences experts, town hall meetings, and a contractor who provided independent cost and technical analyses.
“Our recommendations are science-driven, and they offer a balanced mix of missions — large, medium and small — that have the potential to greatly expand our knowledge of the solar system,” said Steven W. Squyres, Cornell University professor of astronomy and chair of the committee that wrote the report. “However, in these tough economic times, some difficult choices may have to be made. With that in mind, our priority missions were carefully selected based on their potential to yield the most scientific benefit per dollar spent.”
All the Rage
Astrobiology is all the rage these days, as evidenced by the controversial fanfare that greeted Monday’s announcement of bacterial fossils purportedly from outer space that may have hitched a ride on an Earthbound meteor.
Likewise, “the NRC report’s top recommendations focus on the popular theme of ‘searching for life beyond Earth,’ by endorsing big missions to Mars and to Jupiter’s moon Europa, which likely harbors an ocean beneath its icy surface,” University of Toronto astronomy and astrophysics professor Ray Jayawardhana told TechNewsWorld.
NASA’s highest-priority large mission, the NRC recommends, should be the Mars Astrobiology Explorer Cacher (MAX-C), a Mars mission to help determine whether the planet ever supported life.
“This mission will be the first step in a multipart effort to eventually return samples from the planet,” NRC spokesperson Molly Galvin told TechNewsWorld. NASA and the European Space Agency would run the mission jointly, but only if it comes in some US$1 billion less than current cost estimates.
A mission to Jupiter’s moon Europa and its ocean below a frozen surface — considered a promising nearby environment for life — should be the second priority for NASA’s large-scale planetary science missions, the report claims. But again, “the committee concluded that unless costs could be brought down,” conducting that mission “would preclude too many other important missions,” Galvin explained.
An orbiter and probe to the planet Uranus that would investigate that planet’s structure and atmosphere is listed as a third priority in the report. Assessed at $2.7 billion, it too is under the budget microscope.
“The Uranus mission will give us our first in-depth look at an ice giant planet,” said Jayawardhana, author of the new book Strange New Worlds: The Search for Alien Planets and Life beyond Our Solar System. “As someone interested in extra-solar planets, I find such a mission especially interesting and timely, because we are finding that ice giant planets are quite common around other stars. It is important for us to understand the nearest example of that class of worlds.”
NRC recommendations for medium and small missions are less specific, and seem to come with fewer strings.
Based on competitive peer review, NASA should select several missions for its New Frontiers program, which explores the solar system with frequent launches of mid-size spacecraft, NRC recommends.
For smaller missions, NASA’s Discovery Program of low-cost planetary science investigations should continue at its current funding level with adjustments for inflation. The committee also endorsed the Mars Trace Gas Orbiter, a small 2016 mission and another NASA-European Space Agency joint venture.
More general recommendations include expansion of National Science Foundation funding for existing laboratories, new facilities, and the Large Synoptic Survey Telescope. | <urn:uuid:710040ba-de8f-4c85-a034-8c852212e287> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/to-mars-europa-and-beyond-budget-permitting-72016.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00679.warc.gz | en | 0.921591 | 806 | 2.875 | 3 |
There’s been a lot of buzz around network synchronization and why it is critical for 5G networks. In fact, the concept of timing and synchronization is not new to the wireless world. Previous network generations (i.e., 2G, 3G, and 4G) all required a certain degree of timing and synchronization for correct handover to occur between macro base stations and user equipment. However, unlike its predecessors, 5G is bringing more stringent performance requirements to wireless networks and mandating nanosecond timing between the various elements in the radio access network (RAN).
Why is timing and synchronization critical in a 5G world?
As more radios and small cells are being deployed to achieve the right level of coverage and performance, it is vital that they are in-synch with each other and sharing the same time reference with all surrounding macro cell towers, user equipment and RAN elements. Furthermore, timing accuracy is needed to support technologies like Time Division Duplex (TDD), where both the uplink and downlink are on the same frequency, and beamforming which allows beams to be directed to multiple users and IoT devices such as sensors, machines, robots and connected cars. And there are other advanced technologies that come with 5G, like dynamic spectrum sharing (DSS), carrier aggregation and massive MIMO—all requiring good timing to operate correctly.
These technologies give rise to complexities in network synchronization not seen in earlier generation networks. For example, TDD uses one dedicated frequency band for both the downlink and the uplink. As each direction must transmit during specific time slots, the synchronized timing in frequency and in phase between the user equipment and the radio is critical to ensure that the downlink and the uplink are not interfering with each other. The deployment of many more small cells can also cause big timing issues. If they are not on the same time reference, they could interfere with each other and impact RF performance. A timing issue on one cell site router risks affecting many radios. In turn, timing issues could lead to handover failure, corruption of transmitted data, poor throughput and reduced voice quality—ultimately impacting the performance of 5G networks. To learn more, watch our webinar: Why timing and synchronization is critical in 5G networks.
Figure 1. Advanced technologies bring complexities to 5G network synchronization.
When do we need to validate timing accuracy?
Let’s look at a couple of scenarios:
- During 5G radio and small cell rollout and turn-up phases, operators need to make sure, from day one, that the timing is accurate and reliable, ensuring that networks are ready for future expansion.
- Throughout the maintenance and troubleshooting phases, operators need to eliminate as many variables as possible and quickly identify the root cause of issues. If a timing issue is suspected or an RF performance problem is being investigated, a first step is to make sure that the timing is accurate before continuing to test other elements in the network.
What are some of the requirements for network synchronization and what to look for?
There are several ways to make sure every network element is synchronized together in frequency and in phase. The first option is to have a GNSS receiver at every cell site, which up until recently was the preferred method in some regions like the U.S. The second option, which has gained popularity since the introduction of 5G, is 1588 PTP (Precision Time Protocol). In its most basic explanation, 1588 PTP uses IP/Ethernet switching/routing to distribute highly accurate timing information to every element across the network that requires synchronization.
Time error (TE) is one of the key metrics used for measuring clock inaccuracy. It is the difference in time between the time of the clock under test T(t) and the time given by a high-quality reference clock Tref(t). If the clock under test is in advance of the reference clock, a positive TE will be measured. If the clock under test is trailing behind the reference clock, the TE will be negative. The goal of timing accuracy is to achieve a TE measurement as close to 0 as possible.
Figure 2. Variation of TE over time.
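To make the definition concrete, here is a minimal sketch (Python, with made-up sample values) of how a series of TE samples might be reduced to commonly quoted summary figures such as maximum absolute TE and constant TE (the average offset); the peak-to-peak spread is used here as a loose stand-in for the dynamic component:

```python
# Illustrative only; assumes TE samples have already been measured, in nanoseconds.
def time_error_metrics(te_samples_ns):
    """Reduce time error samples to simple summary metrics."""
    n = len(te_samples_ns)
    c_te = sum(te_samples_ns) / n                        # constant TE: mean offset from the reference
    max_abs_te = max(abs(te) for te in te_samples_ns)    # worst-case instantaneous error
    d_te_pp = max(te_samples_ns) - min(te_samples_ns)    # peak-to-peak variation
    return {"cTE_ns": c_te, "max_abs_TE_ns": max_abs_te, "dTE_pp_ns": d_te_pp}

# Example: a clock under test trailing its reference by roughly 20 ns
print(time_error_metrics([-18, -22, -20, -19, -21]))
```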
The 1588 PTP protocol was specifically designed to deliver the highest level of timing accuracy. The way 1588 PTP works is by having a grand master exchanging synchronization packets with the PTP client. These synchronization packets include timestamps that are used to calculate and correct the time between the master and the client. First, the master will send a sync message to the client with a timestamp (t1). The client will then receive this message and generate a second timestamp (t2). Next, the client will send a delay request message and create another timestamp (t3). The master will reply with a delay response and send a final timestamp (t4). At the end of these exchanges, the client has all the timestamps and can calculate the delay and adjust the time error with respect to the master. This correction mechanism is continuously running and correcting the time on the client side many times per second.
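The arithmetic behind that correction is straightforward. Here is a minimal sketch (Python) of the standard two-way calculation, assuming the path between master and client is symmetric:

```python
# Standard PTP two-way time transfer arithmetic (illustrative sketch).
# t1: master sends Sync, t2: client receives it,
# t3: client sends Delay_Req, t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (client offset from master, mean one-way path delay), assuming a symmetric path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Example with timestamps in nanoseconds: the client clock runs 100 ns ahead of the master
offset, delay = ptp_offset_and_delay(t1=1_000, t2=1_650, t3=2_000, t4=2_450)
print(offset, delay)  # 100.0 ns offset, 550.0 ns mean path delay
```

The client then steers its clock by the computed offset. Any asymmetry between the forward and reverse paths translates directly into residual time error, which is one reason link asymmetry is such a common culprit in the field.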
Another approach used for network synchronization is Synchronous Ethernet (SyncE), an ITU-T standard that facilitates the transference of frequency information from one node to another over the Ethernet’s physical layer and traced back to a reference clock – in the same way timing is passed in SONET. As 5G networks are deployed, SyncE will support the many applications that require accurate frequency synchronization and can be used in combination with 1588 PTP.
Validating a network’s timing and synchronization can be challenging as there are multiple variables to consider when trying to identify the potential root cause of timing issues. To add to that, a timing issue on a cell site router can affect many radios making it that much more complex to resolve a problem.
A few factors can increase the time error in a network or cause the 1588 protocol to work incorrectly, such as equipment failures, configuration issues, path reroutes by a router, and outages and protection switching.
Figure 3. Examples of variables that can cause timing issues.
How to test timing and synchronization in 5G networks
When deploying or troubleshooting new 5G cell sites and radios, there are test tools that can quickly and easily assess whether SyncE and 1588 PTP services are active, validate clock quality level and SyncE frequency across the network and confirm timing accuracy by measuring time error between the base station and the grandmaster clock.
Traditionally, test solutions for time error have relied on costly and sensitive rubidium oscillators that need to be “warmed-up” and “disciplined” for more than 3 hours to get the highest level of accuracy. Consequently, cell techs spend nearly half a day validating timing accuracy at just one site rendering the process inefficient.
EXFO has introduced a different approach that accelerates the entire test process. By integrating a next generation, multi-constellation, high-accuracy GNSS receiver specifically designed for 5G, EXFO’s solution can achieve nanosecond accuracy in less than 20 minutes. This is 90% faster than any other timing test solution in the industry. In addition, the solution includes a stratum 3E oven controlled crystal (Xtal) oscillator (OCXO) to provide holdover measurement capability for scenarios where sky visibility for the GNSS receiver is not possible. Getting that level of accuracy in such a short prep time is really a game changer for time error measurement in the field.
Another key feature is the capability to measure time error directly through the fiber interface. EXFO's test solution acts as a PTP client and exchanges PTP messages and timestamps with the boundary clock. A key advantage of this solution is that any length of fiber test cable can be used between the device under test and the test instrument. This allows the user to move the test instrument away from the cell site router, use a longer fiber and ensure the best location for sky visibility. In the case where the cell site router is centralized in a C-RAN hub, it is even possible to measure the time error at the radio site directly on the fiber interface connecting this radio. This provides the easiest and most efficient way to validate that the timing is right and that it meets the stringent 5G timing requirements, from day 1.
As we mentioned earlier, 5G will require tighter synchronization in frequency and in phase to support stricter timing accuracy requirements and advanced technologies. That said, we are currently in the early stages of 5G deployments with mostly non-standalone architectures that rely partly on the existing 4G LTE infrastructure. But as 5G transitions to standalone—and more importantly, is required to scale in volume with the deployment of more radios and small cells serving many more connected users—the impact of timing issues will be far greater than what we saw in previous generation networks. And accurate timing and synchronization will be imperative to getting 5G done right.
Network synchronization is an in-depth topic and we have only covered a few key concepts here. To learn more on EXFO's latest timing and synchronization test solution, click on the learn more button. | <urn:uuid:b93a607d-7151-41e2-a38d-28211afbc408> | CC-MAIN-2022-40 | https://www.exfo.com/es/recursos/blog/timing-synchronization-5g-networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00079.warc.gz | en | 0.930357 | 1,874 | 2.625 | 3 |
Amid all the coronavirus chaos, hackers have seized the opportunity to strike: a new wave of ransomware attacks has struck companies of all sizes across the globe.
The Israel-based software company Sapiens reportedly paid hackers a $250,000 ransom in Bitcoin earlier this year. Though the company has yet to confirm or deny the attack, an anonymous inside source reported it to Calcalist. Sapiens also hasn't reported the incident to either the American or Israeli exchange commissions.
While this ransom cost substantially more than the usual average of $40,000, it's nothing new. GPS giant Garmin also fell victim to a ransomware attack this year and remained similarly quiet about it. Reports claim that hackers demanded $10 million from the company, but it's unclear if Garmin paid them.
More recently, Canon reportedly lost 10 terabytes of data to hackers holding it for ransom. Canon hasn't confirmed or denied anything but says they're investigating the situation. While it's still uncertain, the incident has all the signs of a ransomware attack.
Rising ransomware attacks amid COVID-19
These cyberattacks are just a few of many that have taken place since the onset of the pandemic. Cyberattacks are far from rare, but they saw an unprecedented spike in 2020. Large-scale attacks are up 273% in the first quarter of 2020, with ransomware rising by 90%.
This spike in cybercrime is most likely due to cybersecurity challenges amid the ongoing pandemic. The Sapiens cyberattack came as hundreds of its employees moved to remote work, creating new vulnerabilities. Companies have to reconfigure their security to work on employees' personal devices on their home networks, which can create an opportunity for hackers.
Even if a company handles the shift to remote work well, there's still the human element to consider. With all the confusion surrounding COVID-19, people may be more likely to click on any email or link that promises answers or support. Most ransomware comes from phishing emails like this, so panicked people are much more vulnerable.
Defending against ransomware and other cyberattacks
Cybercrime may be on the rise, but companies can still protect against it. Perhaps the most critical step is reemphasizing the importance of proper security practices with employees. As much as 90% of cyberattacks are the result of human error, so making sure employees remember protocol is essential.
Ransomware, like the Sapiens cyberattack, usually starts as a suspicious email or link. Employees should know how to spot these questionable messages and remember not to click anything they can't confirm is from an official source. Companies can hold videoconferences to remind workers of security best practices and policies.
As employees work from home, securing their devices is crucial to cybersecurity. Providing anti-malware software and looking for a solution that can handle these new endpoints can help mitigate new threats. Requiring employees to use a dedicated network for work purposes can also help companies make sure workers' connections are more secure.
It's also essential to keep all software up-to-date. A lot of ransomware, like the recent NetWalker attacks, takes advantage of vulnerabilities in programs that a patch may fix in short order. Updating everything as soon as possible helps prevent these attacks.
Cybersecurity is as critical as ever
In the face of these rising threats, companies must pay close attention to their cybersecurity. Defending against cyberattacks has always been crucial, but it's more critical now than ever. With incidents like the Sapiens, Canon, and Garmin cyberattacks growing costlier by the year, businesses can't afford to ignore these threats.
Keeping employees safe amid the COVID-19 pandemic is essential, but so is securing sensitive data. Recent trends indicate that companies need to consider both when transitioning into new processes. This spike in cybercrime is alarming, but with the proper security measures, businesses can stay safe. | <urn:uuid:9be467da-363b-49ac-8f10-96f1c2f815f8> | CC-MAIN-2022-40 | https://cybernews.com/security/companies-falling-victim-to-rising-ransomware-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00079.warc.gz | en | 0.95499 | 795 | 2.578125 | 3 |
Most people who acknowledge the reality of global warming tend to focus on its environmental and meteorological effects. But rising temperatures also can have a serious impact on modern computing technology, creating a number of physical and economic challenges for organizations and networks.
Excessive heat can wreak havoc on equipment and personnel. Most server rooms and data centers have air conditioning (AC) units* installed to reduce temperatures to acceptable levels based on the requirements of the equipment used. Some data centers are running so much equipment that their cooling needs are substantial and a failure in the AC system can be catastrophic in terms of either outright failure or higher-than-normal component failure rates.
As a result, we’ve learned the importance of dedicated AC systems and monitoring to trend temperature and humidity levels and generate alarms when thresholds are exceeded.** Groups in high-reliability situations have even built in redundancy on their AC units and installed power backup to ensure the data center doesn’t bake during a power outage. Ensuring cooling during a power failure is often overlooked and can result in an unplanned shutdown due to overheating.
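As a simple illustration of that kind of monitoring (a generic sketch, not any particular product), a threshold alarm can be as basic as the loop below; read_probe() is a hypothetical stand-in for whatever SNMP or vendor API the environmental probes expose:

```python
# Illustrative sketch of environmental threshold alarming for a server room.
import time

TEMP_HIGH_C = 27.0        # example upper bound; tune to the equipment's specifications
HUMIDITY_HIGH_PCT = 60.0  # example upper bound for relative humidity

def read_probe():
    """Hypothetical stand-in for an SNMP poll or vendor API call; returns (temp_c, rh_pct)."""
    return 24.5, 45.0     # replace with a real query to the room's network probe

def check_once(alert):
    temp_c, rh_pct = read_probe()
    if temp_c > TEMP_HIGH_C:
        alert(f"Temperature {temp_c:.1f} C exceeds {TEMP_HIGH_C} C threshold")
    if rh_pct > HUMIDITY_HIGH_PCT:
        alert(f"Humidity {rh_pct:.0f}% exceeds {HUMIDITY_HIGH_PCT}% threshold")

if __name__ == "__main__":
    while True:
        check_once(alert=print)  # swap print for email, paging, or an NMS trap
        time.sleep(60)           # poll once a minute
```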
The problem is that we will see more such risks as the environment warms. As temperatures rise, AC systems will have to work harder than ever to lower temperatures to acceptable levels. This means that power requirements are going to increase.
Increased Demand, Decreased Efficiency
Moreover, large AC units have to dump their heat into the atmosphere and, as the already-hot surrounding air is heated further, efficiency will decrease. If you’ve ever stood next to a running cooling system, you will have noticed that it expels a tremendous amount of hot air. Groups running self-contained portable or temporary AC units should ensure that the exhaust is vented to the outside of the building, otherwise they will be heating the very air the system is trying to cool.
As the demands for power increase, higher prices will be inevitable. This will affect budgets as actual expenditures may well exceed budgeted amounts. If relationships don’t already exist with the local power company, contacts should be developed and pricing forecasts discussed, along with what the utility is doing to address power requirements in the area, its continuity plans, and how calls should be escalated.
More air conditioning causes an increased strain on the power network and increases the likelihood of power grid failure. A transformer may blow, a line may burn out, etc. Moreover, the power company may be forced to implement rolling blackouts or even emergency shutdowns to protect equipment. Bear in mind that sizable portions of the power infrastructure were not designed for today’s consumption levels. For a whole variety of reasons — cooling being a major one in the summer — we are using a tremendous amount of power.
Hotter temperatures will directly impact the power utilities. As they try to react to higher demand, there will be accusations of poor planning when scenarios related to global warming have not been sufficiently thought out. While it is easy to pin blame, to be fair it is hard to plan for something you’ve not encountered before. Once it happens, you know what to expect and how to prevent a particular type of incident from occurring again.
Have An Action Plan
IT needs to plan for ways to reduce temperatures in data centers. The following are some of the actions that are possible:
For final consideration, with the higher air temperatures melting glaciers and warming waters, it is predicted that the number and severity of storms will increase. Gulf and coastal states are watching this closely. The resulting weather will affect not just environmental controls in the data center, but the continuity of operations for organizations and entire economies. Having current and tested business continuity plans in place is always a wise idea.
There aren’t any easy answers on this topic. Groups need to plan and mitigate their risks. Not only do we need to react and deal with the threats it introduces, we also must think how, as nations, corporate citizens and individuals, we can reduce global warming.
* Technically, the units are referred to as heating, ventilation and air conditioning systems, or “HVACs.” As the article is focused on global warming and cooling, the emphasis has been placed on the “air conditioning.”
** It is advisable to monitor temperature and humidity. Many of the network enabled probes feature both temperature and humidity sensors that then report the data back via SNMP. In this article, the focus is on temperature but during reviews of climate control, be sure to understand required temperature and humidity levels both of the equipment and the media in the data center. | <urn:uuid:fedc65ef-12c6-4cff-a576-ed775a369f1e> | CC-MAIN-2022-40 | https://www.datamation.com/trends/the-impact-of-global-warming-on-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00079.warc.gz | en | 0.940837 | 951 | 2.828125 | 3 |
ORNL Researchers Use Quantum Light to Squeeze the Noise out of Microscopy Signals
(ScienceDaily) Researchers at the Department of Energy’s Oak Ridge National Laboratory used quantum optics to advance state-of-the-art microscopy and illuminate a path to detecting material properties with greater sensitivity than is possible with traditional tools.
“We showed how to use squeezed light — a workhorse of quantum information science — as a practical resource for microscopy,” said Ben Lawrie of ORNL’s Materials Science and Technology Division, who led the research with Raphael Pooser of ORNL’s Computational Sciences and Engineering Division. “We measured the displacement of an atomic force microscope microcantilever with sensitivity better than the standard quantum limit.”
Unlike today’s classical microscopes, Pooser and Lawrie’s quantum microscope requires quantum theory to describe its sensitivity. The nonlinear amplifiers in ORNL’s microscope generate a special quantum light source known as squeezed light. In a squeezed state, the quantum noise in one quadrature of the light field is pushed below the vacuum (shot-noise) level, at the cost of extra noise in the conjugate quadrature, which is what allows measurements with sensitivity beyond the standard quantum limit.
An application layer gateway (ALG) is a type of security software or device that acts on behalf of the application servers on a network, protecting the servers and applications from traffic that might be malicious.
An application layer gateway—also known as an application proxy gateway—may perform a variety of functions at the application layer of an infrastructure, commonly known as layer 7 in the OSI model. These functions may include address and port translation, resource allocation, application response control, and synchronization of data and control traffic. By acting as a proxy for the application servers and managing application protocols such as SIP and FTP, an application layer gateway can control application session initiation and shield the application servers by preventing or terminating connections when appropriate to deliver application layer security.
Applications are vital to business operations and daily life, but attacks increasingly target those applications and the application layer of IT infrastructures. To ensure business continuity and protect sensitive data and personally identifiable information (PII), security measures must specifically address the application layer. Application layer gateways are one option for defending applications and the data they contain to ensure secure application delivery.
By acting as a proxy for the application servers and managing application protocols such as SIP and FTP, an application layer gateway typically uses deep packet inspection to detect and block attacks before initiating an application session or allowing traffic to pass to the application. The capabilities of an application layer gateway generally exceed those of an application firewall or web application firewall.
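As a toy illustration of what controlling application session initiation can mean in practice (a generic sketch, not F5's implementation), an ALG-style filter might allow-list the methods it is willing to pass for a protocol such as SIP:

```python
# Toy sketch of ALG-style allow-listing at the application layer.
ALLOWED_SIP_METHODS = {"INVITE", "ACK", "BYE", "CANCEL", "REGISTER", "OPTIONS"}

def permit_sip_request(raw_message: str) -> bool:
    """Return True if the request's method is on the allow-list; otherwise the gateway drops it."""
    request_line = raw_message.splitlines()[0] if raw_message else ""
    method = request_line.split(" ", 1)[0].upper()
    return method in ALLOWED_SIP_METHODS

print(permit_sip_request("INVITE sip:alice@example.com SIP/2.0"))   # True: session setup allowed
print(permit_sip_request("UNKNOWN sip:alice@example.com SIP/2.0"))  # False: unrecognized method blocked
```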
The services and functions of an application layer gateway are delivered by F5 Advanced Web Application Firewall (Advanced WAF). Similar application protection can be delivered as a cloud-based service via F5 Silverline Web Application Firewall. | <urn:uuid:2c858c16-c152-4459-a0bd-62b2f0771425> | CC-MAIN-2022-40 | https://www.f5.com/services/resources/glossary/application-layer-gateway | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00079.warc.gz | en | 0.892846 | 339 | 2.65625 | 3 |
A recent White House initiative titled “Public Listening Sessions on Scientific Integrity and Evidence-Based Policymaking” asked for two minute pitches. The following amplifies that pitch.
The topic is too important for me to be deterred by the two minute limitation. I have nine suggestions. Note: All mentions of K-12 include free, open source online self-training resources.
- Develop specific competencies within K-12 education in how the scientific method performs hypothesis generation and testing. This should include its best and worst practices: a realistic picture of how science works within society, showing that science is incremental and empirical. Demonstrate that science is also subject to social factors like any other endeavor (see Thomas Kuhn, Structure of Scientific Revolutions). Include basics of experimental methods, such as double-blind design and statistical significance.
- Promote specific competencies to support critical thinking exercises in K-12 to include: how to vet information online, use of Wikipedia, practice skills in summarization and annotation of scientific information, confirmation bias, value/limitations of data gathering.
- Replace trigonometry training in high schools with statistics, fully integrated with the teaching of scientific methods, show applications across all subjects.
- Foster the integration of automated knowledge-based tools, specifically including digital ontologies in all college level degree programs. Particular attention must be paid to automated reasoning approaches, not only machine learning. Helpful: https://ontologforum.org/ Hands-on experience with reasoning software such as Protege https://protege.stanford.edu/ is critical.
- Fund programs to promote awareness of the challenges of specialization, especially for privacy, health care, use of weakly understood technologies (e.g,. 5G, pharmaceuticals). Embed issue-based training in credentialed and graduate programs to include role of automation, an increasingly software-based institutional fabric.
- Embed lessons learned from standards organizations, especially those with mature ethics-based endeavors. See IEEE standards for ethics in autonomous systems. https://standards.ieee.org/project/7001.html https://standards.ieee.org/project/7007.html, https://standards.ieee.org/standard/7010-2020.html, https://standards.ieee.org/standard/7000-2021.html. Understanding of continuous assessment technology governance tooling such as NIST OSCAL https://pages.nist.gov/OSCAL/.
- Foster increased citation of primary source material, including access to datasets, negative results (peer reviewed, even if not published). Antipattern: long essays with few citations but numerous claims. For journalists in particular: Deeper college / postgraduate experience with science and technology for future journalists, who should be given hands-on experience with experiments, large scale surveys, science project management, federal proposal writing, data collection and curation, analytical tools (e.g., Jupyter) and research resources within a chosen STEM subdiscipline.
- Develop specific competencies in K-12 which foster project-based, active, declarative integration of the climate crisis across all other disciplines.
- Promote training in logical reasoning and evidence-based decision support, drawing heavily from evidence-based psychological research, updated as new findings emerge. | <urn:uuid:4cb73350-ecd8-41c7-902b-1317ca0b2fbc> | CC-MAIN-2022-40 | https://knowlengr.com/subpage-1-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00079.warc.gz | en | 0.900792 | 685 | 3 | 3 |
Fraud has become a pervasive part of the discussion around cybersecurity. In part, this reflects a change in attacker motives, as cyber-attacks were not always as vicious as they are now. From the 1980s into the early 2000s, hacking was not really about profit. It was primarily about achieving fame in the hacker community by demonstrating knowledge and insight about information systems, while also having a bit of fun. While many of the early high-profile hacks were indeed illegal and were prosecuted as such, they were also comparatively whimsical and harmless, except to the IT staff who had to clean up the networks afterward.1
By comparison, the present threat landscape is broken down into two significant approaches. Although other kinds of attackers exist, most significant attacks fall into one of two categories. One is nation-state actors, whose motivations primarily make up the espionage and (cyber)warfare portions of the CHEW model of attacker motives. However, North Korean advanced attackers such as APT38 have notably included cybercrime in their repertoire to generate liquid funds for the heavily sanctioned and internationally isolated North Korean regime.2 The other mode is crime, which is incorporating an increasingly diverse set of fraud strategies into the cybercrime toolbox.3
Fraud is, accordingly, on everyone’s lips, but some misunderstandings about it threaten to blur the concept, which can make fraud look—erroneously—like a vague synonym for cybercrime itself. This does us no good, for two reasons: it overlooks the experience and knowledge in detecting and preventing fraud that other parties—law enforcement, financial institutions, and governments—have, and it makes it unclear exactly how to fight it.
This article is an attempt to clarify what forms digital fraud takes and to differentiate it from other attacker behaviors that are often related or adjacent to fraud. The goal here is to help security practitioners understand where antifraud efforts and security converge and where they diverge. So, no matter how any particular organization is structured, fraud teams and security teams can better understand their respective responsibilities, strengths, and weaknesses.
We start with the FBI’s definition of fraud, because it contains the key element we need to understand when a cyberattack is, or includes, fraud. The FBI defines fraud as:
The intentional perversion of the truth for the purpose of inducing another person or other entity in reliance upon it to part with something of value or to surrender a legal right. Fraudulent conversion and obtaining of money or property by false pretenses4.
Leaving aside the fact that the word fraudulent is part of the definition of fraud, this definition helps us because it emphasizes that fraud is a financial crime that hinges on a lie. Successful lying requires some kind of contact, some social interaction, even if the contact is abstract and digital in form.
This observation is key because it lets us quickly eliminate several things that are often fraud-adjacent and part of mitigating fraud but aren’t really fraud. Theft is chief among them. Stealing something usually means avoiding direct contact with your target; if there is no contact, there can’t really be a lie. This means that most credential theft, whether it takes the form of keylogger malware or exfiltrating hashed passwords, can’t be fraud, even though it is a precursor to fraud and part of the antifraud umbra. The same goes for most account takeover (ATO) attacks, although that is a gray area which we’ll touch on later. Enrichment of stolen data, such as cracking hashed passwords, is also fraud-adjacent but not fraud. Those passwords might be used for fraud in the future, but because they don’t involve any deception, they don’t fit our criteria.
These distinctions also illuminate some critical differences between cybercrime and real-world crime: in the real world, theft immediately results in a loss to the victim, even if an attacker hasn’t had time to monetize the theft. In the case of digital theft, the loss is not immediately apparent (even if the victim immediately knows the theft occurred, which is rare) and only materializes when fraud occurs. This distinction works in converse as well—not all digital theft is in pursuit of fraud. In the case of piracy or intellectual property theft, the path to extracting value from the stolen goods involves no contact at all. This distinction is part of the reason why understanding digital fraud is not intuitive.
Flavors of Untruth
Traditionally, when the world was a little smaller, and checking people’s stories was harder, fraud often hinged on fabricating a background, therefore misrepresenting the implicit financial risk of working with the fraudster. Even though the story is contemporary, fraudster Anna Sorokin’s success in impersonating a wealthy heiress in order to obtain lines of credit, both official and unofficial, is a surprisingly successful example.5 In contrast, most digital fraud is about impersonating another identity completely, not just fabricating a background. This takes the form of asserting that you are indeed the person whose name is on that payment card or who earned those air miles.
Although many subtypes of fraudulent attacks exist, and the following is not an exhaustive list, the lying that underpins fraud really has only three kinds of targets: customers (meaning the public), private organizations, and public organizations.
Lying to the Public
Fraud cases like these don’t refer to an event in which someone’s credit card number is used for a fraudulent transaction because, while the citizen is a victim of a crime, they aren’t the target for the lie. Fraud against regular people is really about things like:
- Dating fraud: This type of fraud tends to take one of two forms. One is the appearance of a young woman looking for a man who can transfer some funds to her, after which romance will, we are told, abound. The other form is fraudsters looking for “romantic partners” who don’t mind handing off a package to someone, that is, looking for mules. In both cases, a fraud ecosystem is built around identifying likely targets, preparing plausible-looking bank accounts to accept funds, and collecting dossiers of believable information, such as photographs (usually of young women), that can be used as bait.
- Wire fraud: This type straddles the line between defrauding the customer and defrauding the bank, but it is incumbent on the banking customer to confirm the wire instructions with the appropriate account and routing numbers; the bank’s ability to intervene is limited. The lie here is really about the validity of the financial account information, which is usually delivered in a spoofed email purporting to be from the receiving bank.
Lying to Private Organizations
These are fraud cases where an organization is defrauded, which means there is either a higher value to a singular fraud attempt or multiple, smaller on-going fraud attempts happening.
- Bank fraud: A lot of the fraud that happens around financial institutions is actually better understood as fraud against banking customers (discussed under “Wire fraud”) or fraud against retail organizations (more on that later). However, application fraud, in which attackers use stolen or spoofed personal information to open an account in a victim’s name, is an interesting example. Fraudulent bank accounts are used as logistical support for other criminal activities, such as money laundering or providing a landing place for funds from a dating fraud, as detailed earlier. Figure 1 shows a cybercriminal advertisement for bank fraud services for stolen banking information.
Another prevalent form of bank fraud has, as a precondition, an ATO attack. Brute force, credential stuffing, malware, and phishing can all play a role in the initial account takeover necessary for these kinds of attacks. After the takeover of a bank account, the attacker can make purchases or transfer funds to another account. In these kinds of attacks, maintaining control over a designated email address belonging to the customer can help attackers control communications between the bank and the customer, thereby maintaining better secrecy. Figure 2 is a cybercriminal advertisement for payment “cashout” services on the PayPal on stolen accounts from an ATO.
- Retail fraud: A huge amount of the fraud discussion is around the use of stolen payment card information to make purchases. This kind of fraud is often listed under bank fraud, but since the retailer is responsible for vetting the buyer’s identity, they are the target of the lie, so we think it is better conceptualized as a form of retail fraud. Because of this responsibility, retail organizations are also the ones who bear the brunt in the event that the actual card owner requests a reversal of charges.
Numbers are difficult to come by on this subject, but it appears that this is one of the most damaging and prevalent forms of digital fraud. It is certainly a battlefield for an ongoing technical war between attackers and organizations seeking to use additional signals to help weed out malicious from benevolent users. As security organizations try to implement increasingly sophisticated fingerprinting techniques, attackers are turning to masquerading toolsets, such as antidetection browsers and digital identity marketplaces, as shown in Figure 3.
- Hospitality fraud: Observers of dark web activities have noted two sorts of fraud that involve hospitality, travel, or customer loyalty programs. One entails using fraudulent travel or accommodations services to harvest credentials and financial information for other use. This form belongs in the earlier section about fraud against the public. However, in another form of hospitality fraud organizations are targets of the lie. This involves using previously harvested loyalty points, air miles, rewards points, and the like for the purpose of booking travel services for others. Sometimes, the customers of the fraudulent service are aware of the source of their low prices—there are travel “agencies” in the attacker community that advertise travel services with the markdown from retail explicitly stated.
- SIM swaps: In this type of fraud, attackers gain control over mobile phone accounts by convincing mobile carriers to switch an account from one associated SIM card to another. While this is often enough for attackers to gain access to a core email account and perform ATO on subsidiary accounts, including banking apps, it is also important in that it can allow attackers to circumvent multifactor authentication that is routed to the phone. Some SIM swaps are done with the knowledge of local staff at a mobile carrier’s store, but many are the result of fraud.
Lying to Public Organizations
- Tax return fraud: The most high-profile digital fraud against public organizations is the filing of fraudulent tax returns. This is a specific form of identity theft. In this case, the attacker assumes the identity of an innocent member of the public, gains access to the necessary financial and demographic information, and files a return before the citizen can file their own.
These attacks require comparatively greater amounts of victim information prior to executing the attack. They are often broken up into disparate steps that are handled by specialists in different tasks, such as exfiltrating tax forms, gathering credentials to online tax preparation software, and laundering money afterward.
Special Mention: Phishing
Phishing and other forms of technically dependent social engineering are interesting edge cases that hinge on your interpretation of the FBI’s phrase “something of value.” For the most part, phishing that does not deliver malware is used to harvest credentials, and while credentials aren’t exactly objects of currency, they are increasingly the only prerequisite for a host of digital financial activity, including ecommerce and online banking. Phishing is, therefore, fraud against a member of the public that is also a precondition for another form of fraud, usually against a private organization such as a bank or online retailer. Figure 4 shows a cybercriminal advertisement for phishing services to assist less cyber-savvy fraudsters.
Fraud-Adjacent Attacker Activities
It’s clear that many attacker activities that are part of the fraud ecosystem don’t have the characteristic element of untruth for profit that defines fraud. Much of this category makes up the bulk of what information security practitioners spend their days fighting. An inexhaustive list includes:
- Malware for stealing sensitive information: Importantly, this kind of attack doesn’t require any fraud (again, unless the malware was delivered by phishing) because it doesn’t involve any contact between attacker and victim.
- Money laundering: Money laundering is a big part of the attacker ecosystem, since it is key to attackers turning money into a form they can actually spend. It’s not fraud but does represent a fruitful potential avenue for law enforcement to disrupt the monetization process of fraud.
- Transport, drops, and mules: Just as with money laundering, these are crucial parts of the whole fraud ecosystem that don’t involve contact with a victim, except in the case of unwitting mules (see Figure 5), many of whom are recruited via dating scams.
- Attacker training: The attacker community has a rich training economy. Here, more experienced actors train newcomers in the various aspects of fraud—both the actual fraudulent contacts previously discussed and the fraud-adjacent logistical items listed in this section. This is particularly interesting because it represents a monetization path for attackers that depends on, but is separate from, actual fraud. This is the attacker community paying the attacker community for services, which makes the value of any actual fraud higher, but without any sort of contact.
- Account takeover: ATO that happens via brute force or credential stuffing is a huge part of the threat landscape and a significant challenge for organizations to mitigate (along with phishing, discussed earlier). One important thing to note when discussing ATO as a fraud-support mechanism is the degree to which this kind of attack is becoming a specialized service that attackers outsource to experts. The growing sophistication and diversification of the attacker economy could potentially tip the scales temporarily in attackers’ favor as ATO best practices become more widely disseminated.
We’ve spent as much time talking about what fraud is not as we have talking about what it is. This is because, while fraud is everywhere, all attacks are not themselves fraud. So why does that matter? Are we just splitting hairs?
It matters for two reasons. One reason is organizational. As awareness about digital fraud has grown, security teams increasingly find themselves working closely with fraud teams, but the architecture of these teams, and the delineation of responsibilities therein, varies widely. We’ve spoken to CISOs who have a fraud team under them. We’ve spoken to technical security experts who are embedded in fraud departments that report to the CFO. In the middle are organizations that have both teams but no formal junction, who have to collaborate to mitigate the threat. And, of course, lots of organizations with a small security team but no fraud team.
Because of this variability, it is important to understand how different kinds of attacks relate to one another to leverage different kinds of expertise. Fraud teams have a greater understanding of the ways that digital and nondigital criminal behaviors intersect as well as a better big-picture understanding of the impacts of financial crime. Security teams have a greater understanding of what they can accomplish with technical controls, whether they are preventive, detective, or corrective. Perhaps even more importantly, they understand the limitations of those controls.
The second reason it is important to understand what fraud is not is that it helps us better understand the characteristics of attacker monetization. Many of the ways that attackers monetize attacks don’t involve fraud but instead involve offloading what they stole to a specialist in another arena. A network intrusion specialist might sell hashed passwords to a cracking specialist. A web exploit specialist might sell a foothold in an enterprise network to a malware specialist, and so on. This kind of internal value-generation is difficult to detect in a tactically meaningful way; it also means that information that might not appear sensitive or valuable on the surface might be enriched into something very useful to attackers. Some attack chains start with the fraud and end several steps later with monetization. Some start with several other kinds of attacks, each of which features some monetization, with the fraud coming only at the end.
In sum, not all fraud is a monetization play in the short run, and not all monetization involves fraud. In a context where a great deal of attacker activity happens behind closed doors, fraud can be a clue to attackers’ proximate and long-term goals. Occasionally, the true victims of attacker activity on your network might be another organization’s customers’ customers. Understanding the ways that different attacker capabilities link up can help us put specific, observed behaviors into better context. | <urn:uuid:1986bd4f-8c30-4613-93ac-5472e0d8ad19> | CC-MAIN-2022-40 | https://www.f5.com/labs/articles/cisotociso/the-ins-and-outs-of-digital-fraud | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00079.warc.gz | en | 0.945994 | 3,452 | 2.90625 | 3 |
Researchers highlight the need to gather and map the carbon benefits of wind power, silviculture and peatlands to analyse their environmental impact.
While renewable energy is essential to the newly adopted clean energy transition, wind turbines cannot be built too close to residential areas because of noise and landscape factors. However, building turbines in uninhabited areas fragments nature further and affects local wildlife.
The repeated use of forests and peatlands has an impact on carbon sinks. The LandUseZero project, coordinated by the Natural Resources Institute Finland (Luke), aims to develop an operating model that makes land use for wind power sensible and acceptable for people as well as the environment.
The project will start by developing a harmonised method to calculate the climate impact of wind power, silviculture and peatlands based on carbon dioxide and other greenhouse gas emissions. In this way, the net climate impact of each type of land use can be compared with the others.
“This is particularly challenging, because the impact of wind power is generally calculated in hours, that of forests in years, and that of peatlands in tens or even hundreds of years,” explained Anne Tolvanen, programme leader and professor at Luke. “What’s more, the emissions reduction impact of wind power changes over time as the amount of fossil energy it replaces decreases.”
The operating model is due to be developed in cooperation between scientists, land use planners, and government decision makers. The municipality of Ii will act as the pilot site.
“Ii was a natural choice, as it is recognised for climate action both nationally and globally, and we have already worked together, especially on the use and restoration of peatlands,” Tolvanen said. Other pilot sites are located in the regions of Southwest Finland, Satakunta, and North Karelia.
“For Ii, the project serves to develop comprehensive and carbon-neutral land use that addresses biodiversity, can be calculated, and supports the municipality’s strategy and forest plan,” added Lauri Rantala, coordinator of Ii River management at Micropolis in Ii. “It is better to sequester carbon and nutrients in forests and soil than to release them into the air and watercourses.”
People’s conflicting attitudes towards wind power
A nationwide survey is due to begin at the start of 2022. Its hypothesis is that, while there is a general consensus in Finland about the necessity of climate change mitigation, the local impact of different measures may cause disagreement among people. Consequently, the survey aims to identify people’s attitudes towards wind power and towards climate-smart, biodiversity-fostering use of forests and peatlands.
“Our aim is to bring all the aforementioned elements together to place wind power, the use of forests and the restoration of peatlands optimally on a map, while producing benefits for the climate and biodiversity, and making the outcome acceptable to people,” Tolvanen said.
Moreover, the monetary assessment will study the societal prices of those local weather measures, the willingness of individuals to take part, and forest house owners’ angle to just accept local weather measures to be carried out on their land.
The LandUseZero mission is scheduled to be performed in a number of organisations. VTT Technical Analysis Centre of Finland, in collaboration with Recognis, will consider the emission discount potential of wind energy, whereas the Geological Survey of Finland (GTK) will study the impression of peatland restoration on the local weather and biodiversity. The College of Jap Finland will probably be chargeable for spatial optimisation. Luke will conduct the survey, in addition to put together calculations of the impression of forest use and the general monetary impact of various measures.
The project is part of the Catch the Carbon climate programme, launched by the Ministry of Agriculture and Forestry, which aims to reduce the greenhouse gas emissions of agriculture, forestry and other land use while strengthening carbon sinks and stocks. The project is scheduled to run between 2021 and 2023. | <urn:uuid:cc8bdcd6-fd69-44c5-a04f-cc4e6ed8721b> | CC-MAIN-2022-40 | https://dimkts.com/the-carbon-benefits-of-wind-power-silviculture-and-peatlands/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00479.warc.gz | en | 0.907652 | 871 | 3.015625 | 3 |
Why would a cyberthief bother with hacking your network if he can get you to wire money directly to his account? That’s the premise behind business email compromise, a form of cybercrime that’s becoming increasingly common.
A public service announcement from the FBI on business email compromise trends says that:
The BEC (Business Email Compromise)/EAC (E-mail Account Compromise) scam continues to grow and evolve, targeting small, medium, and large business and personal transactions. Between December 2016 and May 2018, there was a 136% increase in identified global exposed losses. The scam has been reported in all 50 states and in 150 countries.
The FBI also reported that the worldwide loss to this type of scam from 2013 to 2018 was over $12 billion.
What is Business Email Compromise?
Business email compromise is a scam in which a cyberthief compromises legitimate business email accounts in order to trick recipients into doing something they shouldn’t. Often the compromised accounts are used to request that an unsuspecting target wire funds to a seemingly legitimate account. In some cases, business email compromise attacks target personal information or forms containing personal information, such as W-2s, that can be used in identity theft.
There are several different ways that scammers perpetrate business email compromise attacks. Sometimes they use an email address that’s deceptively similar to a legitimate email address – for example “Joe@acme_company.com” instead of “firstname.lastname@example.org,” posing as a company executive or supplier. If the recipient isn’t sufficiently careful, they might not notice that there’s something just a bit off about the email address.
In other cases, the cyberthief may compromise a corporate email account via malware, or steal email credentials via a spear-phishing attack on a specific individual, such as the CEO or someone in the finance department. The attacker will then send an email from the compromised account instructing the finance department to transfer funds to a particular account, perhaps waiting until the employee in question is away on business or some other opportune moment. The scammer may even use an account at a bank to which the company regularly makes such transfers, with a similar, but not identical, account number.
How to Prevent Business Email Compromise
The key to most business email compromise attacks is trust. Employees are accustomed to receiving and following directives from management and have little incentive to question these directives. Yet, the key to protecting your organization from constantly evolving, highly-effective social engineering attacks, including business email compromise, lies in the Zero Trust precept to “trust no one, verify everything.” With that in mind, here are a few different ways you can apply a Zero Trust approach to protection from BECs:
- Require in-person or telephone verification of any requests to transfer funds – don’t transfer funds based on an email request alone.
N.B. When verifying transfer requests by phone, use known phone numbers, not phone numbers that appear in the email.
- Use dedicated tools to identify potentially fraudulent requests, such as flagging emails sent from an account that’s similar but not identical to the company email format, or where the email “reply to” address is different from the “from” address (a simple sketch of such a check follows this list).
- Use technology such as Zero Trust Browsing to automatically detect and block phishing sites or have them open in read-only mode, so they cannot be used to harvest email and other credentials.
- Require multifactor authorization to change the account number of an existing client.
- Provide awareness training to individuals with wire transfer authority, so that they know how to look out for anything unusual or suspicious.
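To make the automated-check bullet concrete, here is a minimal Python sketch of the kind of rule such a tool might apply. The corporate domain, the similarity threshold, and the sample message are illustrative assumptions, not part of any particular product.

```python
import difflib
from email.message import EmailMessage

COMPANY_DOMAIN = "acme-company.com"  # hypothetical corporate domain

def domain_of(address: str) -> str:
    """Return the lower-cased domain part of an email address."""
    return address.rsplit("@", 1)[-1].strip().lower()

def bec_red_flags(msg: EmailMessage) -> list:
    """Return human-readable warnings for a single inbound message."""
    warnings = []
    from_domain = domain_of(msg.get("From", ""))
    reply_to = msg.get("Reply-To")

    # Flag look-alike sender domains: close to, but not exactly, the real one.
    similarity = difflib.SequenceMatcher(None, from_domain, COMPANY_DOMAIN).ratio()
    if from_domain != COMPANY_DOMAIN and similarity > 0.8:
        warnings.append(f"From domain '{from_domain}' imitates '{COMPANY_DOMAIN}'")

    # Flag a Reply-To header that quietly redirects replies elsewhere.
    if reply_to and domain_of(reply_to) != from_domain:
        warnings.append("Reply-To domain differs from From domain")

    return warnings

# Example: a message spoofing the corporate domain with an underscore.
suspect = EmailMessage()
suspect["From"] = "ceo@acme_company.com"
suspect["Reply-To"] = "payments@freemail.example"
print(bec_red_flags(suspect))
```

A real mail gateway checks far more signals, but even a rule this small catches the two patterns described above.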
What if You’re a BEC Victim?
If you’ve already wired money to a scammer and just realized your mistake, don’t despair – there may be things you can do to get your money back. Verizon’s 2019 data breach report indicated that half of all US-based business email compromise cases ended up with 99% of the misdirected funds being returned; only 9% of compromised companies weren’t able to get any money back.
If you’ve been victimized by a BEC scam, quick action can help you get some or all of your money back. The first thing to do is notify your financial institution about what happened and ask them to contact the financial institution to which the fraudulent transaction was paid. You should also call the nearest FBI office, and report the crime or attempted crime to the FBI’s Internet Crime Complaint Center (IC3).
Business email compromise is an increasingly common type of scam. Fortunately, there are steps you can take to help your employees identify scamming attempts, and technology you can leverage to reduce the chances that a scam will succeed.
Read more: Download our free white paper to learn how to prevent social engineering attacks such as phishing, BEC and credential theft from hurting your business. | <urn:uuid:155dcd61-b7fd-45b1-bf1e-f0686fd38ae9> | CC-MAIN-2022-40 | https://blog.ericom.com/preventing-business-email-compromise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00479.warc.gz | en | 0.938362 | 1,045 | 2.515625 | 3 |
Any security planning process must include establishing guidelines for what is expected of the users of the system. It is absolutely imperative that, at an early stage in the process of developing a security plan, effort be put into developing a set of written guidelines that will be presented to all users of the network, as well as a separate set of guidelines for the administrators of that network. Such a set of guidelines is generally called an acceptable use policy.
Acceptable Use Policies
Acceptable use policies, or AUPs, are the written guidelines given to a user before they are allowed to access a network. Usually, the user is expected to read and sign the policy, agreeing to abide by the policies as written. In addition to making sure that each user is aware of the expected behavior for using the network, the AUP also allows the security administrator to enforce policies much more readily, since he will have documented evidence that the user knows what correct behavior is, and so cannot plead ignorance as a defense of improper behavior.
The AUP should also spell out what disciplinary actions will be implemented should a violation occur; it is absolutely imperative that the system administrator enforce the policy as it is expressed in the AUP. To do otherwise undermines the authority of the system, which in turn undermines the policies themselves. Enforce the policy evenly and uniformly, and users will soon learn to expect fair treatment, as well as the boundaries of their behavior. Check with a competent attorney for assistance with AUP preparation.
Preparing the User Community
Computer security can't be a lone venture. Although every organization that has a computer system should have at least one person who stays abreast of computer security issues and initiates security efforts, computer security is every user's responsibility. In fact, most security specialists view themselves as security advisors to the user community, and possibly view each user as his or her own system administrator. This is especially true when an organization has a wide user-base of personal computers running systems such as Windows. Through effective training, campaigning, and notification of new developments, many users can take measures to add to (or detract from) the security of their own system.
Security goes beyond technical solutions. One example of an effective technical solution is the use of passwords. However, this highly effective solution can fail miserably if users are not active participants in making this strategy effective.
Imagine for a moment that you define a controlled set of people that you entrust with access to your various possessions. Each possession-your house, car, boat-has a lock with a key. When you pass out the keys to your trusted individuals, you do not expect them to pass along the original or copies of the keys to other individuals. If they did, that would compromise your entire security plan. When passing out keys, you might be wise to define acceptable use of your possessions and clearly explain your security plan and how important it is that key holders not share the keys with anybody else. You effectively want to enter into an agreement with these people and train them on how to use your possessions safely and properly. It should be no different when you hand over the "keys" of access to a school computer system.
Through training, documentation, and perhaps general campaigns in the form of posters and flyers throughout the school, you must instill in users the importance of security and provide them with the knowledge to help facilitate your security plan.
As you are in the planning stages of a security model for your organization, always consider how your technical security solutions will affect general users and how general users will affect your technical solutions. If neither will affect the other, you can probably quietly move into the implementation stage. However, in many cases, effective implementation will include some amount of consciousness-raising about security issues. Such training might include describing some common scenarios:
- Users should be suspicious of and closely question any person who calls and asks a user for his or her password. Some hackers will try a "social engineering" attack, whereby they call a user saying that they are testing a new security routine, and need the user's password to proceed. Obviously passwords should not be given to any stranger.
- Users should always be careful to enter passwords in such a way that their keyboard can't be seen by others-particularly if strangers are in the immediate area.
- Users should log off of systems when they are going to be away from their desks for any considerable period of time. If the workstation is logged in, the system can be used to perform any task that user is able to do-and that user's name will be on the transactions.
These sorts of examples might sound like common sense, but may not be obvious to people who have never worked before in an online environment. Taking the time to provide security training up front can save a great deal of grief later on.
Providing the sort of use thus described usually takes two forms: start-up training and continuing education. When installation of a new network or equipment upgrade is undertaken, most or all of the users are trained as a group. Such training should cover not only security issues such as those already mentioned, but should also cover such issues as proper network etiquette, also known as netiquette, treatment and proper use of both the physical hardware and the software, the advantages to be gained from the new access, basic issues such as what constitutes a good or a bad password, as well as a variety of other potentially useful issues.
In addition, ongoing training is usually called for. Additional security measures are often added, which require new instruction in their use. New software is installed, requiring updates for the users. Remember, part of a secure installation is having users who understand what they are doing-the quickest route to damage of a system is not the malicious user, but the uninformed user.
Much of the activity involved in making a network as secure as possible involves security on individual desktop computers, rather than anything the network administrator can do centrally.
Part of the training given to new users should include basic desktop security behaviors. Issues to be discussed include physical security measures such as:
- Shutting down systems when not in use
- Locking doors when no one will be nearby
- Locking the computer if it contains, or has access to, sensitive data
The security specialist can influence certain software security issues that are relevant to the desktop, including:
- Installing filtering software
- Installing virus-checking software
- Being aware of what software is installed on each computer.
Physical security is only as good as the behavior of the user-it does no good for a user to have a password on her account if she logs in and proceeds to leave her desk for long periods of time. With that opportunity, anyone could easily walk up to the workstation and in only a few keystrokes steal files or introduce a virus to a system. The behaviors necessary to maintain security are quite simple, but they require the active participation of the user.
Restricting Access to the PC
If you leave, and you don't want anyone to have access to your system while you are gone, lock the door. If you are going to be gone from your system and don't have a lock on the door, log off the computer and log back in when you return. Such security measures seem obvious, but often the more obvious a notion, the more often it is overlooked.
Viruses are small programs that are designed to wreak some sort of havoc on a computer system. Viruses can have a variety of effects-anything from flashing messages on the computer screen, to making the system run slowly, to deleting all of the data from the hard drive. Viruses are transmitted either by infected disks, or by downloading infected software from the Internet. Email, in and of itself, cannot infect a computer with a virus; for your computer to catch a virus from email, there has to be a file attached to a message you receive, and you then must open and execute that file.
The key to effective use of virus-checking software lies in how current the software is. Viruses are written and rewritten on a daily basis, and your software must be able to keep up. All of the major commercial vendors offer frequent updates of their virus databases, also downloadable from the Web. As the security administrator, it is your job to maintain a strict schedule of updating, so that your desktops are always as well protected as possible, with the most recent information available.
There are a number of desktop software packages that are designed to filter and limit the Web sites that a user can access. The intent of these products is two-fold: to protect users from material they, or their parents, might find objectionable; and to keep users occupied with their assignments rather than being distracted by other attractions on the Web. "Content filtering" software such as CyberPatrol, NetNanny, and SurfWatch provide filtering capabilities based at the desktop. Most of them offer trial versions of their software available for download on the Web, so that you can sample them before you pay for them.
Monitoring Network Traffic
Most proxy servers provide extensive logging and reporting either through a native feature of the software or through a third-party plug-in program. In either case, another significant feature of the proxy server is its ability to report on network traffic and on who is doing what. If all Internet applications in a school are configured to pass requests through a proxy server, the proxy server will have the best and most natural vantage point for reporting on network traffic and where people are going on the Internet.
Routers and firewalls can provide similar logging and reporting.
A firewall can be either a hardware device or a software application. Often it is both hardware and software, working together to stand between the outside world and your local area network. A firewall acts as a gatekeeper, deciding who has legitimate access to your network and what sorts of materials should be allowed in and out. Remember, the purpose of a firewall is not only to prevent unauthorized entry into your system, but also to prevent unauthorized exit from your system-in other words, to stop users from sending out things you would prefer they not send out.
Packet filtering is a process whereby a firewall examines the nature of each packet-each piece of information traveling into or out of your network. Some firewalls look only for packets with a forbidden address, and refuse to allow any traffic either coming in from or going out to such addresses. More sophisticated firewalls can actually examine the nature of the packet, and so can filter specific types of traffic.
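As a purely illustrative sketch of the packet-filtering idea, the Python fragment below evaluates a tiny, made-up rule list against individual packets. The addresses, ports, and rules are invented examples; a real firewall operates on live network traffic and keeps far more state.

```python
from ipaddress import ip_address, ip_network

# Ordered rule list: the first matching rule wins. All values are made up.
RULES = [
    {"action": "deny",  "src": ip_network("203.0.113.0/24"),  "port": None},  # forbidden network
    {"action": "allow", "src": ip_network("0.0.0.0/0"),       "port": 443},   # web traffic from anywhere
    {"action": "allow", "src": ip_network("192.168.10.0/24"), "port": 25},    # mail only from inside
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a single inbound packet."""
    src = ip_address(src_ip)
    for rule in RULES:
        if src in rule["src"] and rule["port"] in (None, dst_port):
            return rule["action"]
    return "deny"  # default-deny when no rule matches

print(filter_packet("203.0.113.7", 443))   # deny: source is on the forbidden network
print(filter_packet("198.51.100.9", 443))  # allow: web traffic is permitted
print(filter_packet("198.51.100.9", 25))   # deny: mail is only accepted from the local network
```

The same logic applies in both directions; a rule set for outbound traffic is what stops users from sending out things you would prefer they not send out.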
The most important thing to note here is that administering a firewall can be a highly technical enterprise, and not one to be taken without some serious information-gathering ahead of time. A misconfigured firewall can do more harm than good, even to the extent of opening up more holes in your network access than you might have had with no firewall at all. This is not to say that you should avoid their use; it is merely to emphasize that if you intend to install a firewall, make sure you have the know-how first. | <urn:uuid:8ae7bd04-5b84-4edc-89b6-b75028a0a4ed> | CC-MAIN-2022-40 | https://www.metro-data.com/page/planning_your_security_strategy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00479.warc.gz | en | 0.960635 | 2,287 | 2.953125 | 3 |
Teens like myself always expect to know everything about what happens on the Internet, ignoring the possible risks because, of course, there can’t be any risks if we’ve got everything under control, right? Well, wrong. Even though we think that we know who we can trust and what is safe (or not), phishers know exactly how to imitate that, becoming a very real hazard to us.
A while back, as part of a Hacker Highschool project, I presented a PowerPoint to my class about phishing, so I have some knowledge about the subject and am aware of the dangers involved. Before that, I didn’t really know all that much about it, but neither did my classmates.
I used to think phishing only appeared in fishy emails or websites that told me that I had won a trip to the Maldives, but after my research I found out that nowadays phishing techniques can be hidden anywhere and it surprised me how innocent and uninformed I was in the past.
Phishing Tactic #1 Copying a Reliable App
While I was presenting to my classmates, I showed them two pictures side by side. The first picture was a screenshot of one of those fake scammy websites and the other one was a link for the login information to retrieve their Instagram password. I told them to observe them both and tell me which one would seem more dangerous if they encountered them online. The first picture was the more obviously suspicious option. When I told them that both options were equally risky a few jaws dropped.
The fact that a phisher could imitate exactly what the login information page looked like was a shock to my schoolmates and, to be fair, to me too.
After informing them of the dangers of both websites, I asked them why they thought that the first one was risky but the second one was safe. One person told me that it was because they were used to seeing those typical fishy websites send fake or risky news and on the other hand, they had never seen something so legitimate-looking turn out to be a trap. I couldn’t have agreed more, primarily because we all consider Instagram to be a really trustworthy app, so if we get an email that looks like it came from them, most teens wouldn’t bother making sure if it’s real or not. On top of that, from time to time Instagram does send us emails, so receiving one from them wouldn’t even be considered strange.
Another case of using a reliable app for phishing teens happened a couple of years ago, also with Instagram. Many apps and websites were promising to fill your account with followers, likes and comments in a matter of minutes. Although I personally wasn’t interested, many of my friends and other teens were, and they gave away passwords and accounts for it.
Of course, there were a few apps that actually did work, but a few others just kept their account information and never fulfilled their promise. None of my friends that did it seemed to have any issues until someone started posting all sorts of spam and links on their accounts.
Phishing Tactic #2 Through Fake “Rewards” for Videogames
Like I mentioned before, the promise of rewards like winning a trip to the Maldives or a new phone don’t really work on most teens because we are sophisticated enough to know these are scams, but phishers do occasionally pull one over even on the most jaded teen.
A while back, many people played the game Episode and would spend lots of money on gems and tickets, which made the game more fun. Phishers knew this, and around 2016 many videos were uploaded to YouTube claiming that there was a website that could hack the game for you and get you unlimited free gems and tickets. Supposedly this was safe and perfectly legal.
Even though now I can see that it’s clearly illegal to hack an app, and quite impossible with our knowledge, thousands of teens - some of them were my friends and I - clicked on the link with hopes of gaining unlimited supplies of goodies.
Once I clicked on the link, I remember seeing on the side of the screen a very extensive list of people that apparently already got thousands of gems for the day. This was exciting until I learned the hard way that they were just bots. Long story short, the web page wasn’t the miracle we were all waiting for, but a big phishing trap instead. It was one of those cases of “too good to be true.”
To get all these “free” gems and tickets you were asked to give them lots of personal information - name, where you live, etc. - and then you had to go through a “human verification” process in which you had to answer a ton of personal questions to just end up in the home page all over again with no access to freebies. Luckily, I never put any personal information on there due to the fact that I wanted to go through it fast, so I just put whatever I came up with at the moment.
Long story short, phishers can easily take advantage of teens by exploiting their desire for free items for their favorite games. Certainly this could catch out adults too, but several studies have demonstrated that teens and young adults are far more likely not to exercise caution and fall for tricks like this, especially because we have this unrealistic sense of what is trustworthy and what isn’t.
Phishing Tactic #3 The Fake Email
Here we’re talking about something different from the Instagram scam I mentioned above. When I was presenting to my classmates, I asked them to explain to me how they would differentiate an email or a message from a friend from an email sent by a phisher pretending to be a friend. Everyone’s response was pretty similar: they could tell easily just by how they talk, what expressions they use and even how they type. But a phisher determined to access your online info would study all of these things beforehand, so just by letting our gut tell us if it’s our friend or not is what gets us in the trap in the first place.
I also asked my classmates how they would identify if a person is real and has genuine intentions about what they’re asking for or if it’s a phisher, because it’s one thing to try to recognize a friend, but recognizing a stranger who is genuine is something else. When asking this question I didn’t really get clear responses; some said to see if the email address looked safe or if there was a web page linked to it that could feel fishy, but again, no real response there. I realized my classmates’ approach to a phisher would purely be by feelings and trust, two factors that could be easily manipulated by the phisher themselves.
I got an email once that said that I had activity on my Google account that wasn’t mine and that I had about thirty minutes to regain control of my account. To regain it, I had to click on a link and enter my username and password. My initial reaction was to freak out and to do it before the timer ended, but luckily enough I remembered that phishing techniques love to use pressure, and that Google wouldn’t make me rush to type in a new password.
Just because I was lucky enough to not fall into that trap doesn’t mean other teens wouldn’t have.
So basically, using a fake email most definitely is a good way to get teens to give all sorts of information to the phisher, just because we prefer to trust our gut rather than using actual research on the cause.
In conclusion, several studies have demonstrated how crucial it is to protect teens from phishers, just because we’re the most vulnerable age group to fall in their traps.
Although I consider myself lucky, because thanks to the Hacker Highschool project I had to do, I learned a lot about their tactics and have been able to be extra careful when being online, and on top of that my parents have always warned me to be cautious.
I think it’s important for parents to let their teens know that phishers can pretend to be anything or anyone they want, including family members or close friends. Even if this might sound obvious to the more informed adults, it’s really shocking for most of us teens because we think it’ll only happen in movies, when in reality, it can happen to us. | <urn:uuid:0150ab75-b85b-418f-b41e-f98e682b0823> | CC-MAIN-2022-40 | https://blogs.blackberry.com/en/2020/02/three-different-ways-teens-can-get-phished | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00479.warc.gz | en | 0.979799 | 1,751 | 2.578125 | 3 |
The Defense Advanced Research Projects Agency (DARPA) has selected eight teams to conduct research projects that address the limits of atomic vapors in quantum science.
DARPA said Friday that its Science of Atomic Vapors for New Technologies (SaVANT) effort will study how to improve the coherence of warm atomic vapors and determine how they can support DOD's technological pursuits.
Unlike cold atoms, warm atomic vapors do not require laser-cooling but present the challenge of maintaining quantum coherence.
SaVANT will address this limitation and apply room-temperature atomic vapors to support quantum information science applications, as well as help DOD measure high-sensitivity electric and magnetic fields.
The effort will mainly focus on three approaches: Rydberg electrometry, vector magnetometry and vapor quantum electrodynamics.
The eight SaVANT participants are:
- Georgia Institute of Technology
- Quantum Valley Ideas Laboratories
- Rydberg Technologies
- University of Colorado
- University of Maryland
- William & Mary
DARPA will announce an additional participant in the coming months. | <urn:uuid:f3e25420-ed92-4f7d-af1f-39fb7a378c68> | CC-MAIN-2022-40 | https://executivegov.com/2021/09/darpa-names-participants-of-effort-to-study-apply-warm-atomic-vapors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00679.warc.gz | en | 0.842993 | 235 | 2.75 | 3 |
The companies succeeding in the age of big data are often ones that have improved their data integration and are going beyond simply collecting and mining data. These enterprises are integrating data from isolated silos to implement a useful data model into business intelligence that can:
- Drive vital decision making
- Improve internal processes
- Indicate service improvement areas and opportunities
Data integration isn’t easy though, especially the larger your enterprise and the more software systems on which you rely. The hotch-potch of legacy systems and new tools make enterprise architectures difficult to manage, especially due to the different data formats that all these tools receive.
More and more, companies need to share data across all these systems. The problem is how difficult sharing data is when each system has different languages, requirements, and protocols. One solution is the canonical data model (CDM), effectively implementing middleware to translate and manage the data.
Defining a Canonical Data Model (CDM)
CDMs are a type of data model that aims to present data entities and relationships in the simplest possible form to integrate processes across various systems and databases. A CDM is also known as a common data model because that’s what we’re aiming for—a common language to manage data!
More often than not, the data exchanged across various systems rely on different languages, syntax, and protocols. The purpose of a CDM is to enable an enterprise to create and distribute a common definition of its entire data unit. This allows for smoother integration between systems, which can help:
- Improve processes and practices
- Make data analytics easier
How canonical data models work
Importantly, a canonical data model is not a merge of all data models. Instead, it is a new way to model data that is different from the connected systems. This model must be able to contain and translate the other types of data.
- When one system needs to send data to another system, it first translates its data into the standard syntax (a canonical format or a common format) that are not the same syntax or protocol of the other system.
- When the second system receives data from the first system, it translates that canonical format into its own data format.
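A minimal sketch of that two-step translation, in Python. The field names and the shape of the "canonical" customer record are invented for illustration; in practice the canonical model would be designed and governed at the enterprise level.

```python
# Hypothetical record from a CRM system, in that system's native shape.
crm_record = {"cust_name": "Ada Lovelace", "cust_mail": "ada@example.com", "tier": "gold"}

def crm_to_canonical(rec: dict) -> dict:
    """Step 1: the sending system translates its data into the canonical model."""
    return {
        "customer": {
            "full_name": rec["cust_name"],
            "email": rec["cust_mail"],
            "loyalty_level": rec["tier"],
        }
    }

def canonical_to_billing(canonical: dict) -> dict:
    """Step 2: the receiving system translates the canonical model into its own format."""
    c = canonical["customer"]
    return {"name": c["full_name"], "contact_email": c["email"], "segment": c["loyalty_level"]}

billing_record = canonical_to_billing(crm_to_canonical(crm_record))
print(billing_record)
```

Each system only has to know how to map to and from the canonical shape; no system needs a direct mapping to every other system.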
By implementing this kind of data model, data is translated and “untranslated” by every system that an organization includes in its CDM. A CDM approach can and should include any technology the enterprise uses, including:
- Enterprise service management (ESM) and business performance/process management (BPM) platforms
- Other service-oriented architecture (SOAs), and any range of more specific tools and applications
Benefits of employing a CDM
Enterprises that are able to successfully employ a CDM benefit from the following situations:
- Perform fewer translations. Without a CDM, the more systems you have, the more data translations you must do (the quick count after this list makes the difference concrete). With a CDM in place, you cut down on the manual work that data integration requires, and you limit the chances of user error.
- Improve translation maintenance. On an enterprise level, systems will inevitably be replaced by other systems, whether new versions or vendor SOAs that replace legacy systems. When just a single system changes, you only need to verify the translations to and from the CDM. If you’re not employing a CDM, you may spend significantly more time verifying translations to every other system.
- Enhance logic maintenance. In a CDM, the logic is written within the canonical model, so there is no dependence on any other systems. Like translation maintenance, when you change out one system, you need only to verify the new system’s logic within the logic of the CDM, not with every other system that your new system may need to communicate with.
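A quick count makes the first of these benefits concrete. If every system translates directly to every other system, the number of one-way mappings grows roughly with the square of the number of systems; with a canonical model, each system only needs a mapping to and from the CDM. The figures below are simple arithmetic, not measurements from any particular environment.

```python
def translations_needed(n_systems: int):
    """Compare one-way mappings: point-to-point integration vs. a canonical hub."""
    point_to_point = n_systems * (n_systems - 1)  # every system to every other system
    with_cdm = 2 * n_systems                      # to and from the canonical model
    return point_to_point, with_cdm

for n in (3, 5, 10):
    p2p, cdm = translations_needed(n)
    print(f"{n} systems: {p2p} point-to-point mappings vs {cdm} with a CDM")
```

With three systems the two approaches come out even; the gap only widens as the number of systems grows.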
How to implement a canonical data model
In its most extreme form, a canonical approach would mean having a single definition of person, customer, order, product, and so on, each with a set of IDs, attributes, and associations that the entire enterprise can agree upon.
By employing a CDM, you are taking a canonical approach in which every application translates its data into a single, common model that all other applications also understand. This standardization is good.
Everyone in the company, including non-technical staff, can see that the time it takes to translate data between systems in time better spent on other projects.
Building a CDM
You may be tempted to use an existing data model from a connecting system as the basis of your CDM. A single, central system such as your ERP may house all sorts of data—perhaps all of your data—so it seems like a decent starting point to the untrained eye.
Experts caution against this seeming shortcut. If the system that is the basis of your model ever changes, even to a newer version, you may be stuck using old data models and an outdated system, which negates the benefit of the flexibility that CDMs are designed for.
You will also face problems with licenses. Developers who try to handle various similar data models may also spend more time trying to decipher the differences, which can lead to more user errors.
If you’re opting for a canonical data model, create your model from scratch. Focus on flexibility so that you reap the purpose of the CDM: easy changes as your enterprise architecture necessarily changes. Otherwise, the convenience of a common data format will quickly become extremely inconvenient.
CDMs in reality
Getting a company to buy into the idea of a CDM can be difficult. Building a single data model that can accommodate multiple data protocols and languages requires an enterprise-wide approach that can take a lot of time and resources.
When to avoid a canonical data model
From an executive perspective, the time and money investment may be too significant to take on unless there is a real, tangible change for the end user, which may not be the case when building a CDM. Other critics of employing a CDM argue that it’s a theoretical approach that doesn’t work when applied practically. A project as large as this is so time- and resource-consuming precisely because it is unwieldy.
The inflexibility of making every service fit within a specific data model means you may lose the best case uses for some systems. These systems may benefit from less strict specifications, not the one-size-fits-all goal of a canonical approach.
Why experts recommend CDMs
These experts recommend that an enterprise architect should instead approach the idea of a CDM differently: if you like the goal of data consistency, consider standardizing on formats and fragments of these data models, such as small XML or JSON pieces that help standardize small groupings of attributes.
Less centralization will allow for independent parts to determine what’s best: teams should decide to opt into a CDM approach, instead of a top-down decision where everyone is forced to create a canon data model.
Should my organization adopt a CDM?
CDMs may benefit your company depending on the size and needs of your data. If you can spend the time on such a project, the more systems and applications that need to share data, the more elusive a one-size canonical model can be.
Effectively implementing all your entities into one centralized model and creating a common data format that communicates across all systems will speed up your enterprise’s data handling capabilities. Then, taking data from disparate systems and managing them in a central location makes implementing data into business decisions more efficient and more effective. | <urn:uuid:52d535cd-f475-4381-87d2-546fc8cfe798> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/canonical-data-model/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00679.warc.gz | en | 0.924899 | 1,537 | 2.5625 | 3 |
Cybersecurity managers spend a lot of time thinking about when, and how, to deny or allow entry to certain systems or resources, from digital access points or physical entryways like IT systems, cloud services, elevators, and even doors. Access control is an essential element of security.
Solid security requires control over access to information that matters; the cybersecurity manager who doesn’t take control is not considering what matters and to whom. Anyone who does not have the need to know should not be allowed to have access. Anyone who does need to know should. That’s only logical, but it’s common to find companies with such strict access control that it prohibits the right users from gaining access, or such lax controls that the risk of data breaches and leaks is increased.
Access control is a way to divide the risk of unauthorised access and data breaches. What people don’t know will limit what they are capable of doing. Keep your passwords secret from hackers, and you won’t be hacked. Keep your business plans secret from the competition, and you have a better chance of winning.
Controlling access to information is a complex topic with multiple technologies available. It’s necessary to include some good practices and principles in the high-level policy that sets the management tone for controlling access to information. The highest level of requirements could be phrased like this:
“Access to company information is given to people who need that information in their work, but not to others. This is the need-to-know principle. When the need-to-know ceases to exist, access to information will be removed. Managers in charge will inform the IT department to grant access to information based on their assessment of need-to-know.”
Obviously, there’s a ton of details that could go into more detailed policies for each important system that the company uses.
Here’s a horror story from real life. Most organisations require some sort of ID, access card, or badge to be used in their facilities. Many companies require that employees identify themselves with a lanyard and ID badge that must be worn at all times.
We’ve worked with some quite unbelievable access control scenarios. In one instance, a school was requiring that all of the parents, teachers, maids, and custodians who drop off or pick up their kids wear a lanyard and a photo ID that identifies their face, name, and which child they can bring or take out. Nobody without that lanyard should have been able to access the premises and take the kids. The IDs were supposed to be used to control access to the school area and provide a means to check who has authority to take a certain kid along with them. Nice idea, in theory, but practice turned out to be different.
One of their problems was that the school had more than one entry point, usually manned by staff who were supposed to—but in practice didn’t always—check the parent’s IDs and lanyards.
Every morning and afternoon, the staff came to the gates and greeted parents. They tried to remind the parents to wear a lanyard, but the parents often forgot them at home. The exception was handled by showing the parents to the school office to apply for a day pass. This of course meant that the person could just say, “Sorry I forgot,” at the gate and be guided to enter the premises. Automatic bypass of access control! At the start of a semester, the staff at the gate was strict about it. Staff asked to see the access lanyards and reminded parents to wear them. But the lanyards were small, and it was hard to see a tiny picture of a face to verify that the person actually matched the ID card, let alone check which parent was matched with a certain kid when they took the kids out. In fact, while exiting the premises with a child, there was no outbound check at all!
As the semester progressed, and a few weeks passed, the staff barely glanced at the IDs; they were only registering a colourful lanyard. At the same time, they could not cover every entrance and exit on the premises. Parents soon learned that they didn’t need to wear the lanyard or access cards anymore because they could either take a route where there was no staff or just rely on them recognising them by looks. In truth, a parent can just smile and walk past them and say, “Sorry, I forgot my badge. A bit of hurry,” and nobody cared. That’s called social engineering access. Show a friendly face, get people used to it, and then enjoy the freedom of access.
Access security lapsed completely once inside the school’s perimeter, where the teachers and other staff seldom wore identification. The badges were just for people trying to access. Once they got inside, there was no way to distinguish between internal staff and parents. Nobody could check to see if they were allowed to remain inside because there are too many people wearing lanyards and too many not wearing them. At best, the ID plan gave a false sense of security.
The school example illustrates a host of different security problems: overly complex and failing access control scheme, multiple points of entry, lax attitude toward enforcement, lack of formality and training, and no real enforceability. Too often, it’s the same with IT security policies.
Access control needs to involve user management, making sure that only authorised users are created and that specific users get access to certain resources and not others. To do that, the cybersecurity manager has to know who the users and user groups are and what resources are appropriate for them to access. The cybersecurity manager also needs to know what systems have to be controlled. Only then can they decide what kind of controls to implement. Those are the basic building blocks of access control. Without them, it’s impossible to do access control well. Systems that allow centralised control like Microsoft Active Directory are essential in building the access control scheme.
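As a toy illustration of those building blocks, the sketch below maps groups to the resources they may reach and answers one question: may this user access this resource? The group, user, and resource names are invented; in a real environment this mapping would live in a directory service such as Active Directory rather than in code.

```python
# Invented example data: group membership and what each group may reach.
GROUP_MEMBERS = {
    "finance":   {"maria", "tom"},
    "it-admins": {"priya"},
    "all-staff": {"maria", "tom", "priya", "sam"},
}
GROUP_ACCESS = {
    "finance":   {"payroll-db", "erp"},
    "it-admins": {"erp", "payroll-db", "hr-system", "backup-server"},
    "all-staff": {"intranet", "email"},
}

def may_access(user: str, resource: str) -> bool:
    """Need-to-know check: access is allowed only if one of the user's groups permits it."""
    return any(
        user in members and resource in GROUP_ACCESS[group]
        for group, members in GROUP_MEMBERS.items()
    )

print(may_access("maria", "payroll-db"))  # True: finance staff need it for their work
print(may_access("sam", "payroll-db"))    # False: no need to know, so no access
```

When someone changes roles, removing them from a group is what actually revokes the access, which is why the "need-to-know ceases to exist" clause in a policy needs a working process behind it.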
People usually think about access control in terms of horizontal access—how many systems an employee gets access to. The cybersecurity manager should also consider vertical access and depth of access. Is the user a normal user or an administrator? The more privileged access a person has, the greater their power, and the more likely they are to be a target for cyber criminals, whose goal is to gain broad access rights to internal systems. Do they mention in their LinkedIn profile that they are working for your company as IT administrator? Guess who hackers would prefer as a target?
Most user management today is done with usernames and passwords, and it’s clearly inadequate. If we look at the breaches and cyber exposure happening in the world right now, we can see that many of them involve insecure access control processes.
In fact, passwords still pose the biggest threat related to unauthorised access to information, even with all of the security technology available. People use passwords that are too short and too simple. They use the same password for everything inside and outside of company, and keep the same one for a long time. Recent studies say that half of people use the same password everywhere.
It’s not hard to see why people don’t use a different password for everything and change it regularly. One individual we met had 300 plus accounts across various internet services. That means 300-plus usernames and passwords. Managing that is, of course, a huge problem. This isn’t just a company problem or an individual problem; it’s a planetwide problem.
Some companies have offered an apparent solution to password management woes, like using a Google, Facebook, or LinkedIn authentication to log in with a click of a button. Users with many accounts appreciate this because they can just use the same passwords for their company and private services whenever they use the internet. This is very convenient for the users but also places a lot of trust on these authentication services and centralises the risk of compromise. If one of these big ones gets hacked, a lot of other systems and information will be at risk. LinkedIn was hacked, and all usernames and passwords were stolen a few years back. Everyone should know, right? But few people know that many of these stolen passwords still work today.
Unfortunately, when people use their work-related email and password—their user account at work—they inadvertently identify where they work, information about the organisation, and what services that might apply to. Even if those passwords are encrypted or hashed (in tech language, it means they are protected) in third-party services, once a hacker cracks the passwords (defeats the password protection), he potentially gets access to everything, including the user’s work servers that share the same password.
Login credentials are valuable—they’re a sort of currency that can be traded in the underworld economy. Some hackers actually trade login credentials for money or sell access rights to certain companies or types of business systems. Servers and workstations might trade access for some other service. On the dark web, access to a corporate system or to certain servers in a big company can go for fifty-five US dollars.
Companies have tried to create more effective ways to authenticate people—to identify them and make sure they are who they’ve claimed to be when they log in to a system. A company might link the password to another factor, like an SMS message sent to the user’s phone with a PIN to enter at login. Banks use physical number tokens that generate PINS for you based on time and secret keys. A web bank might issue hardware tokens, or PIN tokens, for its users. A commercial business might be using a Virtual Private Network (VPN) for its users, and VPN software for every user who is working for them. Then they could use a password, username, and digital certificate to authenticate the connection and its users.
Layering on additional factors of authentication is usually quite effective; it increases the difficulty of breaching that system. Having said that, there are some instances that the additional complexity of authentication didn’t actually improve security much. Most people are aware of SMS authentication, or the One Time Password (OTP) solution, for instance. With OTP, whenever a user logs into their web bank, they’re required to answer with a PIN number that’s sent to their cell phone in addition to their username and password. A hacker using another phone to try to access the account, even if he knows the username and password, can’t log in without first getting an SMS, reading the number, and using that to log in.
Additional authentication steps like SMS tokens add complexity to the authentication process, and remember, complexity is the worst enemy of security. Even multifactor authentication like SMS tokens aren’t foolproof. Anybody working in the company that provides the cell phone connection could intercept the user’s SMS. Or, if manual processes aren’t strict, a hacker could portray himself as someone else, then manage to open up a clone SIM that receives the same SMS messages. Ironically, that supposedly super secure multifactor authentication scheme that combines SMS token with a good password could be compromised by the same feature that’s meant to protect the user. How many little shops are working for your mobile service provider and are able to issue cloned SIM cards or change the ownership of the mobile connection? Try enforcing an access control policy on them!
Complexity rarely makes security better. That’s not to say that adding a second factor of authentication is a bad thing. It’s good, but there are limitations. There are a lot of different ways to authenticate people. The lesson here is that the more secure the system needs to be—like a bank, for example—the more security is needed in authentication.
Anyone who has a lot of passwords and usernames—say, more than ten—should get a password management tool. They are usually referred to as “password wallets.” There are several available that can run on a laptop and sync with a mobile phone. For a few dollars a year, all of a person’s passwords can be securely stored in one wallet, so they just have to remember one master password for the wallet. Then it helps them log in and authenticate to different solutions, and stores them securely. Having a password wallet makes life easier for users.
At the same time, we have to remind users that wallets also pose a risk, especially online wallets. Passwords are stored in an encrypted file, and in some solutions, that file is sent to a central repository on the internet. If that application or an individual’s computer is hacked, all of those passwords may be compromised in one place. Still, online wallets are an effective solution for people who have a lot of passwords and companies that have a lot of users, even though it has a single point of failure. Anyone choosing an online wallet should choose one that’s been thoroughly tested—by more than one person or one company. It needs to be more than just a convenient solution. It needs to be a secure one.
We couldn’t begin to cover all of the options for access control today. There’s a multitude of authentication and access control technologies, services, and solutions that go under this topic, and all of them would solve bits and pieces of the whole problem. No single solution will cover all of the access control needs of the organisation. This is because no single service can be compatible and integrate with all the various services out there. The cybersecurity manager’s job is to gain understanding of which access control technologies are a good fit for his business and to help IT to design a scheme that is flexible, has good coverage, and is able to secure the business well enough.
Avoiding Security Theatre
For staff to use passwords effectively, they will need to understand what matters in password management. There’s a lot of conflicting information out there. Many government and corporate guidelines, for instance, say that a password has to be eight characters long. It has to contain a mix of letters, numbers, uppercase and lowercase, and a special character, and it has to be changed every thirty to ninety days. However, research and practical experience have shown that there’s one property above others that makes passwords strong: the length. Some argue that the complexity of the character set is also significant, but it’s not as effective as sheer length. A long password is a strong password. Researching this subject will reveal a lot of academic papers and calculations pointing in different directions. But hackers think differently: they are only interested in defeating the password protection by any means necessary. From attackers’ perspective, only that outcome matters. Not the computational difficulty!
What’s So Special about Those Special Characters?
Why did we start using special characters in passwords in the first place?
The argument for them comes from mathematics. In theory, adding special characters increases the workload needed to defeat password protections (encryption, hashing, and so on).
The thing is, we don’t care about the math debates about password complexity. We care about outcomes: whether it’s possible to be a happy password user while making the password impossible to crack.
Instead of making passwords hard to remember and enter, just make them long but easy to type and remember. “Oh dear a black swan crossed the road” is actually a very good password. Password length is the single most effective way to make the passwords secure. Remember, hackers have no way of knowing if you used special characters, uppercase letters, numbers and so on in your password. They have to assume that all types of characters were used when they try to crack yours. Besides, a password over twenty characters long is virtually undefeatable by any practical means.
Even poor password policies can seem fine on the surface. That’s a problem because people will think their login is secure when it’s not. If people think they are safe, they will drop their defences. They figure, “We already did this two-factor thing. Nothing else can hit us.” Part of access control is spreading the best practices of security and managing the sense of security. Or maybe they think that their Windows AD policy requiring ten-character passwords with all the complexity is good enough. Guess what, it isn’t! If a hacker could crack it, it’s no good, and that’s the only metric that matters.
Is It Cracked Already?
Try this easy trick yourself. Go to www.sha1-online.com and type in any password. You’ll see a long string of characters as a response. Copy and paste that string to Google. Did you get any search results? If you did, that password has been already cracked somewhere out there! Here’s an example. Try this password: [email protected]—you’ll get this response: 21bd12dc183f740ee76f27b78eb39c8ad972a757. After googling it, you’ll see that there are many results. This password would meet most of the password complexity requirements but is very unsecure.
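The same check can be run locally instead of on a website. Here is a small sketch using Python's standard hashlib module; the candidate password is a made-up example.

```python
import hashlib

candidate = "Summer2022!"  # made-up example; substitute any password you want to test

# Compute the SHA-1 digest, the same value sha1-online.com would show you.
digest = hashlib.sha1(candidate.encode("utf-8")).hexdigest()
print(digest)

# Paste the printed hex string into a search engine. If it turns up anywhere,
# that password (or its hash) is already circulating in cracked-password lists.
```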
SHA-1 is an outdated algorithm, and using it to secure passwords is a really bad idea. Yet, a few years back, LinkedIn.com used this method for their password security. And when the service got breached, all the passwords were easy prey for hackers.
The bottom line is that you can’t know if your internet services are using good password protection mechanisms or not. But you can use a good password that can’t be cracked even if someone is dumb enough to store it with something silly like SHA-1.
What’s the security theatre then? An ineffective password policy is like airport security, where air travellers have to take certain items, like liquids, out of their bags. It’s not because they need to be scanned separately but because security wants people to participate in the security process. When you participate, you feel like it’s effective. This is called “security theatre.” It’s the same with two-factor authentication, bad password policies, and so on. Users who have to type something extra feel like they’re actually part of the security process. Unfortunately, two-factor authentication won’t prevent someone from listening in on mobile phone calls or tracking where people surf on the internet. People just think it’s safe because they’re participating in the security protocol, while in reality, the threat still exists, and the security can be defeated.
Companies should make efforts to train their people and to enforce proper security policies and procedures as much as possible. Enforcement usually means setting technical limitations and requirements for passwords, but not the ineffective eight-character codes we talked about earlier, with or without special characters. As we saw earlier, it’s the length of the password that makes the biggest difference. Now go back to sha1-online.com and try something like “my password is very secure,” and Google it. No findings, right?
If users are, for example, advised to make at least twenty-character-long passwords, using a poem and some sort of string to add to that poem, like a system name or something only they can guess, passwords will become so impenetrable there is no way to crack them, even if they leak.
When the company sets a password policy, it affects users’ behaviour not only in the office, but in their personal lives as well. If the company says, “Eight is enough,” people will use eight in their personal lives. If the company mandates twenty characters, maybe their personal passwords will also become longer. That’s important because security exposure also comes from the employees’ and management’s personal lives. Make “twenty-plus” their mantra.
Long passwords sound like a pain, but they are actually easier to remember; the user can type something that makes sense to them. It doesn’t have to be random. It could be as simple as what you usually buy from grocery stores: “my favourite milk is from Australian cows”
One tip: if a user keeps the same recipe for their shopping, they can just add the system name. If it gets breached, no one can use it because it won’t match with any other system, but it will still be easy to remember. Example: “google.com is my favourite milk.”
Cracking the Passwords
There are several approaches to cracking a password, including dictionary attacks, rainbow tables, and brute force methods.
A dictionary attack uses all the words in all the languages in the world, as well as millions of leaked passwords from data breaches. When a hacker has obtained an encrypted form of a user’s password, all he has to do is to take that long dictionary and hash the words in the dictionary. (Remember that sha1-online example earlier? Same idea but just faster!) If he finds a match, he knows that this was the clear text, human readable, password of the user. A variation of this technique is when the computer sifts through all of the dictionary words and tries the words with tiny changes, like an exclamation point or a hashtag or something linked to the words. A normal computer can make millions of guesses per second, and cloud services can do it in parallel many times faster.
The next method is brute force. Here, the hacker uses as much computing power as they have, then they start blindly searching all possible existing passwords, perhaps like A, AA, AAA, AAAA, and so on, and with different lengths of the search, doing millions of guesses per second. The idea is to try until the produced hash value is the same that hackers stole from the victim. Then they would know that it’s the same password!
The brute force method takes a lot more computing power and time, of course, because it requires going through all the different possible versions of passwords. That’s where the name comes from—it requires a lot of brute force! Hackers do this when the easy way doesn’t work. They use stolen credit cards to buy Amazon accounts, then use cloud computing servers to crunch the numbers and try to crunch as many passwords as possible.
The longer the password, the harder it becomes to crack it by brute force. A password of “ilikegoingtothebeachonsaturdays”, works because going through all the passwords that long will take literally forever using a brute force method. But if you use short ones, with eight characters or alike, no matter how complex they are, given enough time, they will be cracked. And sometimes brute-forcing is just work that can be skipped entirely. Enter rainbow tables!
Another type of password-cracking method puts the dictionary attack on steroids—it’s called the rainbow table. A rainbow table is a pre-computed version of all possible passwords that can exist up to a certain length. A rainbow table would start with short and simple passwords like A, B, AA, AB, and so on, and contain the corresponding hash values of these passwords. These tables are huge, usually terabytes in size. They are powerful because a hacker can do one lookup to his table, find the corresponding password hash value, and see directly the corresponding human-readable password. This method compromises passwords quickly—in a fraction of a second. And the table only has to be created once, though it takes terabytes to store. Typically, a rainbow table would contain all passwords up to a certain length, something in the order of eight to ten characters long. And because everything is nowadays cheap in the cloud, a hacker could just go and search all existing rainbow tables online, as a service, without bothering to store or create the tables himself! This is the final nail in the coffin of short passwords, no matter how complex they may be.
We read about hundreds of major breaches in the news every year. In 2012, for example, around 170 million LinkedIn user accounts were breached. The accounts of 170-plus million people were available to hackers. The majority of these passwords were easily cracked in no time by using rainbow tables and brute force techniques. Soon, lists of cracked LinkedIn passwords started circulating around the dark web. Many of these users were not aware they were compromised and did not change their password, or chose to keep the old one, perhaps not on LinkedIn but in other internet services they used. Now hackers had access to a multitude of these accounts and passwords at LinkedIn and many other internet services where users were using the same credentials.
On the surface, it seems that this is solely a personal problem for LinkedIn users, but in reality, it came back to bite a lot of businesses too. Those compromised accounts were used to collect personal user information, create fake messages to lure people to click phishing links, and other kinds of fraud. Success rates of these kinds of attacks was fairly high, as hackers were basically exploiting trust that people place on each other’s social media profiles. When messages are coming in from user’s real LinkedIn or other social media profiles, and he sends you a link, will you be included to click it? Of course!
Even worse, although LinkedIn is a huge company, it used a lousy password-encryption technology back then, just a plain and simple SHA-1; the passwords were not protected against rainbow tables, brute force, or dictionary attacks, although this should have been a very basic thing to do for any security-aware software developer.
Then, since people were using their business credentials to log in to other systems, hackers were able to use them to log in to many of those business systems as well. That massive external LinkedIn breach led to a multitude of other breaches. It was, and still is, like one big avalanche that never ends, moving from one service and victim to another.
At the time, this breach was titled the worst breach of the decade, or even throughout history, because the exposure was so huge, and the quality of stolen data was high. LinkedIn is not an isolated case either. A normal week in cyber intelligence services starts when we see another few hundred million accounts exposed in one internet service or another. This unfortunate trend isn’t going to get any better anytime soon.
Consequences of Oversights
Let’s look closely at access control security issues in a company we worked with. This company had thousands of people set up on a Windows network. We worked with the twenty-member IT staff, each of whom had access to some of the servers in the network. That level of access was appropriate; many of these people needed administrator or root-level access almost daily in their work. Few of them had the highest-level privileges and access rights to every system in the company, and that was justifiable because it was their job. Naturally there were times when more than one person needed to access the systems, so they had to share some administrative passwords between the team members.
With twenty people and 200 servers, there were a lot of usernames and passwords to remember. The complicating factor was that all the accounts in question were prime administrative accounts for all of the systems in the company. They had a username for each system, then different passwords for different users, and so on. Suddenly, they had the same problem we talked about with individual user account; they needed a solution that would allow them to share the passwords and store them somewhere. We’ve already mentioned a password wallet solution earlier, but this wasn’t their answer, unfortunately.
The company’s solution? They set up a Windows shared folder in the network, which is a folder that users in the same network can open on their computers. The shared folder was accessible to the IT team members, and they could all edit the same files in it. So IT put all of their passwords for those 200 systems and twenty IT professionals in one Excel file in that shared folder—one file containing all of the passwords in the network.
The new system was convenient from a usability perspective, but the company overlooked the risk and impact this set-up caused. One beautiful day, someone in IT forgot that he made that folder, and all files inside shared to all users inside the same Windows domain. They could have limited access to that shared folder, where only those twenty people could open the file and use it. They could have also encrypted the file so that only people with special decryption software—like a password wallet on their computer—could have opened it. Even if a hacker gained access to it, they couldn’t have decrypted the file.
Instead, anybody in the company could log in to that folder, open it, open the file, and look up the main user account, password, and username for any system in the company.
So what happened? Hackers penetrated the network—they used the LinkedIn breach to log in to one account belonging to a C-level executive. From that account, they fabricated a phishing message and sent it over to few select individuals in the company. These people were naturally inclined to click the link and got their computers infected by remote-access software. Now the hackers had access to the network. The first thing they did was crack the local passwords of these users. Next, they proceeded to look around inside the network, looking for anything interesting or of value. After these initial steps, they used something called Windows PowerShell to automatically look for network folders and systems within the network. Kind of like mapping the terrain where they found themselves. Of course, they found the shared folder, named conveniently “IT passwords,” and the Excel file where all of the passwords were located. This led to the compromise of all 200 servers. Now all they needed to do was to log in to all those servers and install covert remote access programmes called rootkits on each of them. Now that they had even better access to the servers than the administrators, they simply exfiltrated all interesting data from the company systems.
The company learned about this incident the hard way—someone contacted them for ransom, asking for money, or otherwise, the hackers would leak all of the information they had stolen.
Otherwise, professional people failed to control this problem. It was a huge disaster at the time, and it took a lot of work to clean it up. A rootkit is so stealthy that you cannot know if it’s still there. They had to spend a lot of time and money to fix the issue, blocking communications in and out, reinstalling a lot of their servers, changing passwords, and so on. | <urn:uuid:0873b66a-8766-4904-a093-e95f719f3598> | CC-MAIN-2022-40 | https://cyberintelligencehouse.com/2022/02/17/assets-and-access-part-ii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00679.warc.gz | en | 0.960893 | 6,450 | 2.671875 | 3 |
Ready to learn Business Intelligence? Browse courses developed by industry thought leaders and Experfy in Harvard Innovation Lab.
In this post, I will give you a walkthrough of the capabilities and the development process of Infobarris, a product we’ve developed for the Public Health Agency of Barcelona (only available in Catalan).
One of the main purposes of the ASPB (Catalan acronym for the Public Health Agency of Barcelona) is to ensure the health of Barcelona residents and visitors through knowledge of the population’s health status and its determinants. We therefore have the authority to develop tools to determine health status and to support public health policies such as preventive health programs, among others.
Infobarris is a tool to support the analysis of health and its determinants in the neighborhoods of the city of Barcelona. It contains a set of indicators available at the neighborhood level, with reference to the values of the district to which that neighborhood section belongs and to the entire city of Barcelona. The information is organized in a set of tabs according to several health topics, based on our conceptual framework of health determinants and health inequalities in urban areas developed at the ASPB (Borrell et al. Journal of Epidemiology and Community Health, 2013, available here)
Available topics (brief description):
- Barcelona Urban Heart (Urban Health Equity Assessment and Response Tool), which represents the array of Urban HEART indicators presented as a tool for visualization of health equity made by the World Health Organization (WHO) in collaboration with some of the investigators at the ASPB.
- Population (depicted by sex, age group or place of birth).
- Physical context (age of housing stock).
- Socioeconomic context (percentages of low education and unemployment, truancy index, among others, for each sex).
- Sexual and reproductive health (fertility, abortion and pregnancy rates, among others, in distinct maternal age groups and birth regions).
- Health-related behaviours (percentages of regular smokers, self-assessed poor health status, obesity, and risk of poor mental health-related issues, for each sex).
- Drug abuse (index of problematic drug use, drug-related death rates, and drug therapy admission rates devised by sex or substance among other indicators).
- Notifiable diseases (overall, tuberculosis and HIV incidence rates).
- Mortality (life expectancy, mortality rates by cause of death and premature death rate, for each sex).
- Use of health services (percentages of healthcare coverage and dental visits).
Directions for use
1. Select a tab according to the available health topics.
2. Select the reference neighborhood on the map of Barcelona (top left map).
3. Use the drop-down menu to choose the category you want to view (by sex or other available options depending on the selected tab).
How did we do it?
0) A working group composed of public health experts from distinct backgrounds (statisticians, physicians, computer engineers, psychologists) proposed a series of indicators according to the conceptual framework of health determinants and health inequalities in urban areas (previously described) to devise a health scorecard for the city of Barcelona, taking the neighborhood section as the reference area to be evaluated.
1) To identify the required indicators and final visualization model for each health topic, we built a preliminary prototype using a mockup tool. Then, we scheduled a series of periodic meetings with the different key users (working group members) in order to discuss and envision the graphical requirements of the project using the preliminary prototype (Agile methodology).
3) From the outset, our BI platform (SISalut) was key to organizing all the business logic and became the central repository to be used to systematise the generation of the indicators that would feed the Tableau dashboards. We reused many of the indicators that were already defined in our BI platform. We ingested into it all new data sources (ETL, validation and normalisation process). Quite often, we had to reformat the indicators (perform some extra aggregations, recode a few variables, column pivot or unpivot operations, etcetera), in order to smoothly integrate the resulting indicators as Data Sources of Tableau, avoiding further data transformation drawbacks inside Tableau. As an example of data uniqueness we computed all population-related rates using the very same slices of Population OLAP cubes available from our BI platform.
4) We decided to use Tableau for developing the interactive dashboard because we were used to working with it in previous projects to obtain appealing data visualizations easily and quickly. In addition, Tableau has achieved a consolidated and valued market position at an affordable license cost. Hopefully, the development was boosted by the help of an external consultant (thanks to Synergic Partners) who focussed on the rough part of turning data into beautiful charts and maps, while facing some rather tricky features of Tableau. By the end of the third month after the beginning of the project, a working version was ready to be tested.
We have designed an online interactive dashboard to display the status of health and its determinants in the neighborhoods of Barcelona applying Agile methodology to support translating the requirements to the final product in a short period of time without major issues. This experience has helped us to acquire a thorough knowledge of Tableau that will be useful for the development of other dashboards.
We are now ready to concentrate on improving this data product while measuring its impact and value in the context of public health in the city of Barcelona. In addition, we are prepared to feed the final product every year with the incoming health data following the same data processing strategy: ingesting data into our BI platform, extracting the resulting indicators and using them as Tableau data sources. Thus, Tableau will sketch old and newly added data according to the selected year.
The amazing team behind the project:
Pere C. Llimona
Pere C. Llimona
+ Multidisciplinary team:
Patricia G. Olalla
Pere C. Llimona | <urn:uuid:214ef002-8544-492f-af27-8453f16131fb> | CC-MAIN-2022-40 | https://resources.experfy.com/health-tech/bi-on-steroids-providing-value-to-barcelona-residents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00679.warc.gz | en | 0.903387 | 1,416 | 2.578125 | 3 |
In 2020, UK councils notified the Information Commissioner’s Office (ICO) of over 700 data breaches, with one particular council reporting 29 different breaches in a single year.
Among the most serious forms of cybercrime related to data breaches are ransomware attacks. These insidious attacks involve locking victims out of their core IT systems and databases. In recent years, this type of cyberattack has often included data being stolen as leverage to force targets to give into their demands.
Today, we’re going to take a closer look at ransomware, how attacks work and what damage they can cause for councils. We’ll also examine why local authorities make ideal targets for this brand of cybercrime.
What is ransomware?
In brief, ransomware is a malicious type of software. Sometimes referred to by the term “crypto malware”, ransomware is designed to effectively encrypt an organisation or government offices systems and data, locking out authorised users until they obtain a decryption key. Ransomware can lock people out of their computer operating systems, servers, email accounts and dedicated devices.
How does ransomware get onto council systems in the first place?
Ransomware can be deployed through exploit kits in a drive-by download, or semi-manually via automated active adversaries. The most common way a ransomware payload is delivered is through a malicious spam email.
How does a typical ransomware attack work?
Once the ransomware has infected a device or network it encrypts the systems and files believed by the attacker to be of most value. It has become common practice for many ransomware gangs to also steal data when they exfiltrate systems. An electronic ransom note is then left on the victim’s system where it can easily be viewed, commonly in the form of a landing page, but sometimes a simple text file.
The ransom note details a payment the attacker demands in return for a decryption device and the deletion of stolen data files. Payments are typically requested in crypto currency as this is difficult for the authorities to track, allowing ransomware gangs to escape without capture.
What happens when victims refuse to pay?
In some cases, councils and companies are able to restore their systems and databases from fresh backups. When this is the case, they may refuse to give into ransom demands and pay up. This is when the stolen data becomes useful to ransomware groups. To force a payment, they usually threaten to release the sensitive data files online or sell them at auction on the dark web.
Providers of essential services
UK councils often support tens of thousands of people who depend on the key services they provide on a daily basis. This is a prime reason why ransomware gangs target local authorities. Ransomware operators are like terrorists in the way that they thrive in environments where they cause the most disruption for their victims.
When a council is forced offline because of a ransomware attack, this can mean citizens are left without access to what are sometimes critical services. Members of the community may be sheltered or infirm and will require these vital services to be up and running to ensure their health and safety. This kind of chaos is exactly what ransomware operators need to succeed. When councils and companies provide key services and cannot deliver them, they are more likely to give in to demands and pay up for the return of their systems.
Keepers of sensitive data
Ransomware operators not only pick their targets according to the services they offer, but also the sensitivity of the data they store and use. As mentioned, confidential information is often stolen when ransomware gangs infiltrate a council or company. An attack on Hackney Council’s services and IT systems last October saw data stolen during the attack published online by the threat operators responsible.
When private information belonging to a data subject is disclosed, it constitutes a data breach. If an organisation is found to have taken insufficient security measures, the consequences can be severe for all involved.
Local councils must retain numerous types of personally identifiable information (PII) on those living within the borough they are charged with looking after, such as names, dates of birth, addresses, and even financial details and private medical records. Information may be emailed, shared or stored on servers, but if unsecured it can be exploited and exposed by cybercriminals like ransomware gangs.
Cybercriminals like ransomware groups must hit the headlines if they want to be recognised as a threat. Attacking local councils and causing panic and disruption can encourage other victims to pay up, as a government target is exceptionally high-profile.
Lacking in technical abilities
Council budgets do not always allow for the resources necessary to employ a qualified cybersecurity team. Research conducted this year found that only half of all council staff members undertook cybersecurity training last year. It also found the around 45% of UK councils were hiring no employees with approved security certification.
This is a major vulnerability for councils and makes them a potential target for ransomware gangs. If councils do not have the skillset required to protect their systems and data, they may need to outsource this area of expertise or find a more cost-effective solution to increase their cybersecurity.
Specialist systems for advanced data protection
With knowledge obtained assisting local authorities, educational institutions and businesses, we at Galaxkey have developed our most secure system where enterprise professionals and council teams alike can benefit from improved security. Combining our technical expertise with direct experience of the cyber security challenges businesses and governments are now facing, we have designed the Galaxkey workspace.
Packed with powerful tools, our system offers council teams access to cutting-edge encryption to lock data away from cybercriminals. Exceptionally user-friendly, staff can quickly add three-layer encryption to data whether it is being stored, shared or sent via an email in an attachment.
If your council offices are ready to protect the information you retain from damaging and disruptive ransomware attacks, contact us today to book your free two-week trial. | <urn:uuid:f516dbec-624c-4bbb-9a28-845bddc47a41> | CC-MAIN-2022-40 | https://www.galaxkey.com/blog/why-are-local-councils-targeted-by-ransomware-gangs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00679.warc.gz | en | 0.949082 | 1,204 | 3.15625 | 3 |
Rolls-Royce has signed a deal with internet giant Google in a move intended to help the British engineering company to develop autonomous ships.
Under the terms of the deal, Rolls-Royce will use Google’s Cloud Machine Learning Engine to further train an AI-based object classification system that it has developed, for detecting, identifying and tracking the objects that a vessel might encounter at sea.
The agreement, which the companies claim is the first of its kind in the marine sector, was signed today at the Google Cloud Summit event in Stockholm. Rolls-Royce has some 4,000 marine customers worldwide, including 70 navies.
Maritime machine learning
The Google Cloud Machine Learning Engine uses the same neural net-based machine intelligence software that powers many of Google’s own products, such as image and voice search. It competes against similar cloud-based machine learning platforms from Amazon Web Services (AWS), Microsoft and IBM.
In effect, machine learning methods analyze existing data sets with the objective of learning to recognise patterns, making predictions from previously unseen data. Airbus Defense and Space, for example, uses Google Cloud ML Engine to correct satellite imagery to distinguish between snow and clouds.
According to Karno Tenovuo, senior vice president of ship intelligence at Rolls-Royce, the technology has a key role to play in developing smart ships that pilot themselves, but in the short term, it’s more about recognizing (and hopefully, avoiding) hazards.
“While intelligent awareness systems will help to facilitate an autonomous future, they can benefit maritime businesses right now, making vessels and crews safer and more efficient. By working with Google Cloud, we can make these systems better faster, saving lives,” he said.
The intention is for Rolls-Royce to use Google Cloud’s software to create bespoke machine learning models that can interpret the large and diverse marine data sets that the engineering company has created. This data must be relevant and present in sufficient quantities to get statistically significant results from machine learning and the resulting models will be evaluated by using them in practical marine applications, so that they can be refined over time.
In the longer term, Rolls-Royce and Google say they intend to undertake joint research on a range of areas: unsupervised and multimodal learning; the use of speech recognition and synthesis in marine applications; and using machine learning on board ships.
The goal of this research, of course, is smarter ships: safer, easier and more efficient to operate for crew that have a better understanding of the environment in which a vessel finds itself. This will be achieved by combining data from a on-board sensors, existing ship systems such as radar and other IT systems, such as marine databases and mapping applications, for example. | <urn:uuid:4456a884-24b5-4e8b-8938-76d4b8b9e03e> | CC-MAIN-2022-40 | https://internetofbusiness.com/rolls-royce-navigates-google-machine-learning-shipping/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00679.warc.gz | en | 0.935417 | 570 | 2.53125 | 3 |
Wines require proper storage to preserve and age correctly. changes to temperatures and exposure to light can increase the rate of spoilage. Humidity is also a factor that needs to be monitored and controlled. This is why traditionally wine has been stored in cellars. Underground storage eliminates sunlight, and typically provides more stable temperatures year-round.
AKCP provided a wine cellar monitoring solution that includes temperature, humidity, airflow and water leak detection. | <urn:uuid:2d91323c-fc20-486e-9718-9feadf8d43de> | CC-MAIN-2022-40 | https://www.akcp.com/support/5-dry-contact-sensor-pin-out/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00679.warc.gz | en | 0.943844 | 88 | 2.625 | 3 |
So much of our personal and professional lives are online — from online banking to connecting with friends and family to unwinding after a long day with our favorite movies and shows. The internet is a pretty convenient place to be! Unfortunately, it can also be a convenient place for cybercriminals and identity theft.
One way these scammers may try to take advantage of someone is by trying to convince them to give up their personal information or click on links that download things like malware. They might try to appear as a trustworthy source or someone you personally know. This fake online communication is called “phishing.”
As we’ve all heard before, knowledge is power. By understanding what phishing is, how it works, and the signs to look for, you can help minimize your risk and get back to enjoying the internet the way it was intended. Here’s what you should know.
How does phishing work?
You’ve probably heard of the term “phishing,” but maybe you don’t know what it means. Here’s a quick overview of how it works.
Phishing is a type of cybercrime where scammers send communications that appear to be from trusted sources like a major corporation — basically, they’re trying to play off people’s trust through what is known as social engineering. They might request sensitive information like passwords, banking information, and credit card numbers. Hackers may then use this information to access your credit cards or bank accounts.
The thing with phishing attacks, though, is that they can come through several platforms, including:
- Email: This is the most common type of phishing, with 96% of phishing attacks occurring by email.
- Phone calls: Scammers might leave messages encouraging targets to call a number where someone will ask for their personal information.
- Text messages: The goal is to get people to click links to a malicious website or webpage.
- Wi-Fi spoofing: Scammers create a malicious free Wi-Fi hotspot that appears to be a legitimate access point. Once connected, they have access to a user’s system.
What kind of information are phishing scams after?
We’ve mentioned that phishers are looking to get sensitive information, but what exactly are they after? The kind of information phishing scams are after might include:
- Login information (including email account and password)
- Credit card information
- Bank account numbers
- Social Security numbers
- Company data
Types of phishing attacks
Phishing scams can come in many forms, but understanding the common types of phishing attacks can help you keep identity thieves at bay. Here are some to be aware of:
A phishing email is a fraudulent email made to look like it’s from a legitimate company or person. It may ask you to provide personal information or click on a link that downloads malware. For example, an email allegedly from Bank of America notes that due to suspicious activity, you should log into your bank account to verify your information.
Fortunately, there are ways to spot a phishing cyberattack like this.
- There are typos and grammatical errors. If the email is filled with spelling and grammatical errors, it’s likely a phishing scam. Corporations don’t send out emails riddled with errors.
- A bank requests personal information. Financial institutions don’t email you to ask for personal information like your PIN, Social Security number, or bank account number. If you receive an email like this, delete it and don’t provide any information.
- The URL doesn’t match. To see the sender’s email address, hover over the name of the sender or on the link in the email. If the sender’s address doesn’t match the name that shows, that’s a red flag. For example, if an email that appears to be from FedEx has an email address without the company name in it or if it’s spelled wrong, it’s most likely a phishing email. To check the URL of a link on a mobile phone, press the link and hold it with your finger.
- The email isn’t personalized. A company you do business with will address you by name. A phishing email might use a general greeting like “Dear Account Holder.”
- There’s a sense of urgency. Phishing messages create fake emergencies to get you to act without thinking. They might claim an account is being frozen unless you immediately confirm your personal details. Requests for emergency action are usually phishing emails. A legitimate business gives its customers a reasonable amount of time to respond before closing an account.
- It’s from an unfamiliar sender. Consider deleting an email from a sender you don’t recognize or a business you don’t patronize. Also, be cautious with a message from someone you know who seems unusual or suspicious.
While some phishing emails are sent to a broad audience, spear phishing emails target specific individuals or businesses. This allows the scammers to research the recipient and customize the message to make it look more authentic.
Examples of spear phishing emails include:
- Enterprise hacking: Cybercriminals send emails to employees in a corporation to find vulnerabilities in a corporate network. The emails might appear to be from a trusted source. It only takes one person to click on a link to download ransomware that infects the company’s network.
- A note from the boss: An employee receives a fraudulent email that appears to be from an executive asking them to share company information or expedite payment to a vendor.
- Social media scam: Cybercriminals can use information from your social media account to request money or data. For example, a grandparent might receive a text using the name of their grandchild asking for money for an emergency. But when they call to check, they find out their grandchild is safe at home.
One of the best defenses against spear phishing is to contact the source of an email to verify the request. Call the colleague who’s asking you to do a wire transfer or log onto your Amazon account to check for messages.
For this highly customized scam, scammers duplicate a legitimate email you might have previously received and add attachments or malicious links to a fake website. The email then claims to be a resend of the original. Clicking a malicious link can give spammers access to your contact list. Your contacts can then receive a fake email that appears to be from you.
While clone phishing emails look authentic, there are ways to spot them. They include:
- Follow up directly. Go to the website of the bank, online retailer, or business to see if you need to take action.
- Look at the URL. Only websites that begin with HTTPS should be trusted, never sites that begin with HTTP.
- Look for mistakes. As with any phishing email message, be on the lookout for spelling errors and poor grammar.
Through vishing or voice phishing, scammers call you and try to persuade you to provide sensitive data. They might use caller ID spoofing to make the call appear to be from a local business or even your own telephone number. Vishing calls are usually robocalls that leave a voicemail or prompt you to push buttons for an operator. The intent is to steal credit card information or personal and financial information to be used in identity theft.
Fortunately, there are signs that give away these attacks. They include:
- The call is from a federal agency. If a caller pretends to be from a federal agency, it’s likely a scam. Unless you’ve requested it, agencies like the IRS won’t call, text, or email you.
- It requires urgent action. Scammers might attempt to use fear to make you act quickly. The pressure to act immediately is a giveaway.
- They request personal information. It’s a red flag when the caller asks for your information. Sometimes, they’ll have some of your data, even the first few digits of your Social Security number. The scammer will try to make you think the call is legit and get you to provide additional information.
If you’d like to avoid vishing calls, there are several things you can do. When you don’t recognize the number, don’t answer the phone. Let the call go to voicemail, then block it if it isn’t legitimate. Use a call-blocking app to filter calls coming to your cellphone. To block calls on a landline, check with your service provider regarding the services offered.
Dealing with a cybercriminal is no time to be polite. If you do answer a vishing call, hang up as soon as you realize it. Don’t answer any questions, even with a yes or no. Your voice could be recorded and used for identity theft. If they ask you to push a button to be removed from a call list, don’t do it. You’ll just receive more calls.
If you receive a voicemail and are unsure if it’s legitimate, call the company directly using the phone number on the company website. Don’t call the number in the voicemail.
If you’ve ever received a text pretending to be from Amazon or FedEx, you’ve experienced smishing. Scammers use smishing (SMS phishing) messages to get people to click on malicious links with their smartphones. Some examples of common fraudulent text messages include:
- Winning prizes: If it seems too good to be true, it probably is.
- Fake refunds: A company you do business with will credit your account or credit card, not text you.
- Relatives who need help: These messages might request bail money or other assistance for a relative who is abroad.
- Messages from government agencies: Always delete these texts because federal agencies don’t conduct business by text message.
- Texts from companies like Amazon or Apple: These are the most frequently spoofed businesses because most people do business with one or both of them.
If you receive a smishing text, don’t respond because it’ll cause you to receive more texts. Instead, delete the text and block the number.
Pop-up phishing occurs when you’re on a website and a fake pop-up ad appears. It encourages you to click a link or call a number to resolve the issue. Some of these reload repeatedly when you try to close them or freeze your browser.
Common pop-up scams include:
- Infected computer alert: This scam ad tries to persuade you to click a link to remove viruses from your computer. For added urgency, some even include fake countdown clocks that give you a few seconds to click a link and install antivirus software. The link actually installs malware. Legit antivirus software like McAfee® Total Protection won’t do that — instead, keeping your connected life safe from things like malware, phishing, and more.
- AppleCare renewal: This pop-up encourages you to call a fake Apple number to give credit card information to extend your Apple warranty.
- Email provider pop-ups: You’re encouraged to provide personal data by this pop-up, which appears to come from your email provider.
If you see a scam pop-up ad, don’t click on the ad or try to click the close button within the ad. Instead, close out of the browser window. If your browser is frozen, use the task manager to close the program on a PC. On a Mac, click the Apple icon and choose Force Quit.
What should I do if I am a victim of phishing?
Being online makes us visible to a lot of other people, including scammers. Fortunately, there are things you can do if you become a victim of phishing — allowing you to get back to enjoying the digital world. They include:
- File an FTC report. Go to IdentityTheft.gov to report phishing and follow the steps provided.
- Change your passwords. If you provided the passwords to your bank account or another website, log into your account and change your passwords and login credentials. If you have other accounts with the same passwords, change those too. Don’t use the same passwords for more than one account.
- Call the credit card company. If you shared your credit card number, call and let them know. They can see if any fraudulent charges were made, block your current card, and issue a new credit card.
- Review your credit report. You can get free copies of your credit report every 12 months from all three major credit agencies — Experian, TransUnion, and Equifax — by going to AnnualCreditReport.com. Check to see if any new accounts were opened in your name.
- Scan your devices. There’s a chance you downloaded malware during the phishing attack. Antivirus software, like what’s included in McAfee Total Protection, can scan your devices in real time to detect malicious activity and remove viruses on your devices.
How can I protect myself from phishing attempts?
You deserve to live online freely. But that might mean taking steps to protect yourself from phishing attempts. Here are some ways you can improve your cybersecurity and keep scammers at bay:
- Don’t click email links. If you receive an email from your bank or a company like Amazon, open a browser window and go directly to the company’s site. Don’t click a link in an email.
- Use unique passwords. If you use the same password for multiple accounts, a hacker that accesses one of your accounts might be able to break into all of your accounts. Use different passwords for each of your accounts. A password manager like McAfee True Key can help you create and save passwords.
- Check your browser security. Web browsers like Google Chrome and Safari can be set to block fraudulent websites. Go into the settings for your browser and adjust the security level.
- Use spam filters. All major email providers have spam filters that move suspicious emails into a junk or spam folder. When phishing emails do get to your inbox, always mark them as spam so all other emails from that source will go to the spam folder.
- Delete suspicious emails. Delete emails from financial institutions with urgent subject lines, for example.
- Use antivirus protection. All of your internet-connected devices should have antivirus protection like McAfee Total Protection. Set it to update automatically to keep your coverage current.
- Don’t email information. Banks and credit card companies won’t email you for personal data. If you want to confirm information with a financial institution, contact them directly with the information on their website, such as with a phone number.
- Watch your social media posts. Be careful about what you post on social media. Those quizzes where you mention life details, such as your pet’s name, school mascots, and so on, can provide hackers with a wealth of information. Make sure only friends can view your posts.
Browse online safely and securely
You don’t have to stop enjoying the internet just because of phishing attempts. McAfee’s identity theft protection services, including antivirus software, make it possible to enjoy your digital world while staying safe from scammers and identity thieves.
With 24/7 active monitoring of your sensitive data, including up to 60 unique types of personal information, McAfee is all about proactive protection. This means you’ll be alerted 10 months sooner than our competitors — so you can take action before your data is used illegally. We also provide up to $1 million of ID theft coverage and hands-on restoration service in the case of a data breach.
The best part is that you can customize a package to meet your needs, including virus protection, identity theft monitoring, and coverage for multiple devices. We make it safer to surf the net.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:37ba5678-6b71-4868-b71f-6b901481b858> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/tips-tricks/what-is-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00679.warc.gz | en | 0.915404 | 3,395 | 3.3125 | 3 |
WHAT IS EXPRESSION LANGUAGE INJECTION?
Expression Language Injection (aka EL Injection) enables an attacker to view server-side data and other configuration details and variables, including sensitive code and data (passwords, database queries, etc.) The Expression Language Injection attack takes advantage of server-side code injection vulnerabilities which occur whenever an application incorporates user-controllable data into a string that is dynamically evaluated by a code interpreter. If the user data is not strictly validated, an attacker can substitute input that modifies the code that will be executed by the server.
Expression Language Injections are very serious server-side vulnerabilities, as they can lead to complete compromise of the application's data and functionality, as well as the server that is hosting the application. Expression Language Injection attacks can also use the server as a platform for further attacks against other systems.
To counter this vulnerability, applications can avoid incorporating user-controllable data into dynamically evaluated code, instead using safer alternative methods of implementing application functions, ones that cannot be manipulated for malicious purposes. | <urn:uuid:7ba07529-e578-4714-8486-1d4f1c72fc1c> | CC-MAIN-2022-40 | https://www.contrastsecurity.com/glossary/expression-language-injection?hsLang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00679.warc.gz | en | 0.890884 | 222 | 2.828125 | 3 |
Security and encryption are becoming bigger talking points lately for users of the web. Everyday there are new breaches, vulnerabilities and threats announced and it can be overwhelming to grasp it all. A question that is being raised more and more is why don’t we use the secure protocol for all websites, everywhere. HTTPS (HyperText Transfer Protocol) is what has been used by banking, shopping and other websites for two decades. This protocol encrypts the traffic between a user’s web browser and the website so hackers cannot see your sensitive information being sent back and forth.
With all these breaches and hacks why don’t companies use HTTPS everywhere?
There is a reason for that, actually there are several reasons why a company can’t and shouldn’t use it everywhere. The first reason is that encryption isn’t free. There is a financial cost, system processing cost and time cost. The financial cost is buying the encryption infrastructure to handle your encryption and if you use external certificates there’s a cost for each certificate. Using HTTPS adds processing time overhead for the back and forth handshakes, while our computers and broadband connections are getting faster it’s still added time and will slow things down. Also a trade off with HTTPS is caching, saving data for easier re-use. HTTPS would require to re-request items that normally would be cached locally. The last cost is time. All those certificates and infrastructure requires more maintenance, replacing expired certificates, dealing with handshake problems, etc…
Another downside to HTTPS everywhere is that HTTPS is certificate based to a domain. Using HTTPS is for one domain only. 1 to 1. If you host multiple subdomains, shop.acme.com and catalog.acme.com you would need certificates for each one. They have to match exactly to the domain being accessed otherwise you will get browser security error or not able to access the site at all. Facebook and Twitter both enabled HTTPS on their sites for everyone. They do not have that problem because their entire sites run off one base domain, www.facebook.com. Anyone with pages are ‘sub directories’ under the domain instead of sub-domains off the root.
The last big reason to why HTTPS everywhere isn’t quite ready yet is the coordination with your users/partners. You need to be sure that all those connecting can support SSL in their tools, surprising some don’t but this number is falling fast. If you expose APIs, have a public site with benign, non sensitive data, making sure the users can trust your certificates is vital. If you are using an industry standard certificate authority this is less of a problem. But if you create your own, you need to exchange files with your users/partners so they can trust your apps. If not, they won’t be able to access anything.
Google has been pushing for the wide use of HTTPS for the last several years. Google is even considering punishing regular HTTP sites by ranking HTTPS sites higher in their search results. The point is HTTPS use is growing. Whether it’s going to be the standard everywhere is still up for debate. Regardless, if you can I would recommend using it. If you have any web applications where users enter credentials of any kind, no matter what they are asking, you should use HTTPS without question. Even though you are asking people to login to get a free white paper, your users may not have their own security practices as strong as they should and use the same username and password for your site as they do for their bank. The first things hackers do with collected credentials is run them across the Internet seeing if they work elsewhere.
End of line.
Binary Blogger has spent 20 years in the Information Security space currently providing security solutions and evangelism to clients. From early web application programming, system administration, senior management to enterprise consulting I provide practical security analysis and solutions to help companies and individuals figure out HOW to be secure every day. | <urn:uuid:14d46bf1-51f5-4455-a48c-7c27317fc8a8> | CC-MAIN-2022-40 | https://binaryblogger.com/2016/05/11/cant-shouldnt-use-https-everwhere/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00079.warc.gz | en | 0.940927 | 815 | 2.609375 | 3 |
How can you know if a website is secure? In the olden days of the internet—the ancient early 2010s—the common answer to this question was pretty simple: look for the little lock in the browser bar indicating the site has a security certificate. But times have changed.
“The green lock is just not enough,” said Maria Ojeda Adan, Software Developer at F-Secure.
Last year, Google Chrome removed the “secure” indicator on sites that use https. It replaced this with a “not secure” warning for sites that only use HTTP.
In one way, this is a sign of a huge success. The international movement to get as many sites as possible to use the HTTPS protocol that encrypts all the information passed on the page is working. Most of the world’s top 1 million websites now use HTTPS, according to security researcher Scott Helme. Unfortunately, this much-needed step does not eliminate all the security and privacy pitfalls web surfers now confront.
1. Yes, look for the little padlock, but that’s not nearly enough.
Checking for HTTPS is the minimum precaution you need to take to secure your data online.
“If there’s no lock icon at, there’s no HTTPS connection to the server is not encrypted,” said Sami Ruohonen, Threat Researcher at F-Secure Labs. ‘This means that anyone listening in on to the network you’re at is able to see all your discussions with the server.”
This can include your username and password or even more valuable personally identifiable data.
2. Double-check the URL.
Having an encrypted connection website is no help if you’re not on the site you meant to load. We recently told you about spam coming from .xyz and other new top-level domains that use a newer version of an old trick to draw you into a phishing trap.
Major search engines work hard to keep from sending you to infected sites, but you could easily end up on a bad site by clicking on a link, especially in a spam email.
So while you’re checking for the lock, make sure you’re actually on the site you mean to be on.
3. Do a little research.
If you’ve done these first two steps and you still don’t feel secure, trust your instincts—especially if you’re considering making an online purchase.
Before you click “buy”, you can do some basic research on the site.
“Now what I do to do is look at these webstores. I like to look at the information they’re giving you about contacting them,” said Janne Kauhanen, host of our Cyber Sauna podcast. “Is there a phone number? Is there a location?”
Janne will also use sites like Wayback Machine to determine how long the site has existed in its current form. If the site has only been around for a brief while and used to have another identity entirely, this should make your suspicious.
4. Make sure you are running endpoint protection software.
Having an encrypted connection website is also no help if you’re connected to a malicious website. Unfortunately, there’s no obvious sign that a site has been infected.
“So if you’re connected to a malicious website [with a green lock], your connection is going to be encrypted,” said Maria. “But it doesn’t mean a malicious website is suddenly going to be a safe website.”
New tricks like online skimmers can suck up your credit card details on official site that has a green lock and amazing reputation that has lasted decades. That happened to customers of Ticketmaster.com, Newegg.com and British Airways last year and there was no way they could have seen it coming.
That’s why using endpoint protection that blocks threats like Magecart, which deploys online skimmers, is essential.
5. For privacy, use a VPN.
Even if a site is encrypted by HTTPS, there’s someone who always knows which websites you’re searching—your internet security provider. And that provider could use or sell that data depending on the laws in the country where you’re located.
“This is where a VPN comes in because rather than entrusting a local ISP with snippets of information about your browsing habits, you now have the ability to encrypt the entire communication between your PC or mobile device and the VPN provider,” wrote Troy Hunt, the cyber security expert behind Have I been pwned? “What you’re doing is moving the trust away from that local organization that’s increasingly beholden to tracking your browsing habits to the provider of the VPN service.” | <urn:uuid:931c74eb-1b40-4a62-b7cd-9358a738e5f7> | CC-MAIN-2022-40 | https://blog.f-secure.com/5-ways-to-make-sure-a-website-is-secure-and-private/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00079.warc.gz | en | 0.928417 | 1,016 | 2.75 | 3 |
In part 1 of our blog series, we defined three main telephony technologies
- Analog Telephony,
- Digital Telephony and
- IP Telephony.
We then examined Analog Telephony and looked at the reasons why businesses would want to continue using Analog telephones.
In part 2 of the series, we continue by exploring the second generation of business technologies: Digital Telephony.
Part 2: Digital Telephony
During the 1980s and 1990s a revolution in technology entered the world of telephony. This became known as the “digital age” of telephony as digital multi-line phones quickly replaced analog phones. Initially digital phones being more expensive were only used for executives, service desks or reception desks. That all changed as the price of digital telephones came down in cost in the late 1990s and digital phones started to surpass analog phones in all settings. With digital telephony, analog signals are encoded digitally and these digital messages are sent over the digital lines to be reconstructed back into audio at the handset.
Improved Voice Quality
Digital telephones introduced a whole new realm of audio to a typical telephone call. While analog lines are clear, digital signals are cleaner. Especially when making announcements over speakers, like in a retail store when a page is made from a digital telephone, the call terminates with a clean silent release from the phone line….with no clicking or popping heard over the speakers.
Digital phones use very little power, and it is drawn from the phone line itself. The wiring is simple, the phones are robust and the setup is not complex when compared to IP technology. Once the phone is installed, there is very little that can go wrong.
Features: Displays and more!
Digital telephony opened up a new capability; the ability to provide a display or screen that would display a wide range of “intelligence” that was not available with analog. This information grew more sophisticated and second nature. We forget prior to digital telephony the only way to know who was calling was to answer! Let's take a look at some popular features for digital phones.
From an installer’s perspective the most desirable features are:
From a user's perspective (depending on the type of user), these features are highly desirable:
- Busy Lamp Field (a.k.a. Boss-Secretary Filtering) – this feature provides a visual indicator (a.k.a. presence), click-to-dial and call screening all on the same key.
- Set-to-set paging (a.k.a. Voice Call/Intercom Call) - this feature allows users to page others via the handsfree speaker on the receiving phone.
- Interactive display – the display screen can show if you have missed calls, if you have a voicemail, the time and date, and incoming caller information.
- Shared Call Appearance - this feature allows multiple devices to share the same line key
Cost: Multi-Lines and Shared Lines
A multi-line analog phone requires a physical cable for each line connecting directly to the phone. With digital telephones, only one physical cable connects to the phone and it can handle multiple lines. A digital telephone system also allows you to share a pool of lines that can be accessed by dialing an access code (sometimes called a trunk access code) such as “9” to access an outside line. With the analog multi-line phones, you had to manually select the outside line you want to use and then dial out. There is significant cost savings here when phones can share a pool of external lines.
Why use Digital?
Digital phones still exist in many places, but they are fast being outpaced by VoIP-based telephones, more commonly referred to as IP phones. Many of the digital telephones on the market from the 1980s and 1990s, such as the AT&T 7400/8400 series, the Nortel Meridian M2xxx/M3xxx series or the BCM Norstar M7xxx/T7xxx series, still perform well. Their voice quality and features continue to work flawlessly and they are still very much sought after by businesses. The main reasons these phones are being pulled are that they may be perceived as outdated, or the users are tired of them and want the newer display of an IP phone. Some firms may be worried that their digital telephones aren't supported anymore, because the big IP system vendors sometimes use scare tactics to tell users that their digital telephones will fail if they don't switch to their new IP system. In reality, the market continues to support older systems with parts and software; large 1990s PBXs such as the Meridian 1, Lucent/Avaya Definity and Meridian Norstar are still maintained and offered by many vendors. Companies like E-MetroTel, Avaya and Mitel support a hybrid of digital and IP phones, allowing you to bring digital telephones over to their platform and combine them with IP phones. Note also that digital telephones are much cheaper than IP phones.
If you are considering replacing your phones, do some research first. Decide if you really want to rip and replace everything, or whether it could be more economical to keep your best digital telephones and then upgrade as needed. One such option would be to upgrade some users to IP phones, an idea we will explore in depth in Part 3: IP Telephony.
MIL-STD-810 Shock Methods and Procedures
MIL-STD-810 contains numerous shock methods and procedures that provide laboratory simulations of real world events. Mechanical shock can adversely affect the integrity of a component, especially if the frequency content of the shock coincides with the natural frequencies of that component.
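A quick way to see why this matters is to drive a simple single-degree-of-freedom model with a classical shock pulse and compare the peak response at different natural frequencies. The sketch below is purely illustrative: the pulse amplitude, duration, damping ratio and frequencies are example values chosen for the demonstration, not parameters taken from the standard.

import math

def sdof_peak_response(fn_hz, pulse, dt, damping=0.05):
    """Peak absolute acceleration (in g) of a single-degree-of-freedom
    oscillator with natural frequency fn_hz, driven by the base
    acceleration history `pulse` sampled every dt seconds."""
    wn = 2 * math.pi * fn_hz
    z = v = peak = 0.0                      # relative displacement, velocity
    for a_base in pulse:
        # z'' + 2*zeta*wn*z' + wn^2*z = -a_base  (relative coordinates)
        a_rel = -a_base - 2 * damping * wn * v - wn ** 2 * z
        v += a_rel * dt
        z += v * dt
        a_abs = -(2 * damping * wn * v + wn ** 2 * z)   # absolute response
        peak = max(peak, abs(a_abs))
    return peak

dt = 1e-5
amp_g, dur = 40.0, 0.011                    # 40 g, 11 ms half-sine pulse (illustrative)
n_steps = int(0.1 / dt)
pulse = [amp_g * math.sin(math.pi * i * dt / dur) if i * dt < dur else 0.0
         for i in range(n_steps)]

for fn in (10, 50, 100, 500, 2000):         # candidate natural frequencies, Hz
    print(fn, "Hz ->", round(sdof_peak_response(fn, pulse, dt), 1), "g peak")

Components whose natural frequencies sit near the energy of the pulse respond with amplified accelerations, which is exactly the coupling the test methods below are designed to expose.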
To ensure effective developmental test and evaluation of a product it is important to understand these types of shocks, their characteristics, and how they could potentially affect the product to be tested. Methods for evaluation in MIL-STD-810 include:
- Shock
- Pyroshock
- Gunfire Shock
- Ballistic Shock
- Rail Impact
Method 516.8 Shock
Method 516.8 Shock has eight different procedures these procedures include:
- Functional Shock – Tests equipment in its operational modes to evaluate its ability to perform as intended during and after exposure to mechanical shock.
- Transportation Shock – Tests equipment to shocks expected during transportation. Ground vehicle profiles are often used due to their severity.
- Fragility – Procedure III (Fragility) is often performed early in the development cycle to establish a fragility level of a design so that shipping and packing designs can be adequately developed.
- Transit Drop – Tests equipment’s ability to endure drops encountered during loading and unloading.
- Crash Hazard Shock – This procedure verifies the integrity of mounts and fasteners to prevent equipment mounted in air or ground vehicles from creating hazards during shocks encountered during a crash event. This often employs classical shock test waveforms.
- Bench Handling Shock – Shocks commonly encountered during packaging and maintenance.
- Pendulum Impact – This procedure is used to evaluate the effects of horizontal impacts on large shipping containers.
- Catapult Launch/Arrested Landing – This tests materiel mounted on aircraft that operate on aircraft carriers.
Method 517.3 Pyroshock
Pyroshock testing is performed to assess materiel's ability to operate as intended and survive when operating near detonated explosive devices. This method has five procedures that vary according to the proximity of the explosive source and the method of applying the shock. The shock can be administered using actual explosions, mechanical test devices such as those used in shipboard shock test machines, electrodynamic shakers, or beam resonant shock machines.
Pyroshock presents particular challenges for designers in that it has a shock response spectrum ranging from 100 Hz to 1 MHz. These shocks can range from 300 to 200,000 g. Because of the high frequencies encountered, pyroshock can damage small electronic components and cause relay chatter. High frequencies can also generate piezoelectric effects that can cause unexpected operation of materiel.
Method 519.8 Gunfire Shock
Gunfire shock testing is used to evaluate a component's ability to withstand high-rate repetitive shocks from gunfire. This method is not intended to replicate the effects of large single-shot weapons such as large naval guns. To adequately tailor a test plan it is important to know which specific weapon is being employed, what its rate of fire is, and where the materiel to be tested is located with respect to the gun. Measured data from the component location on the intended platform is preferred for replication in the laboratory setting.
Method 522.2 Ballistic Shock
Ballistic shock is the shock experienced when a projectile impacts an armored surface. This testing is limited to equipment intended for use in armored combat vehicles. As with pyroshock, ballistic shock has a very broadband of frequencies with high acceleration and poses special concerns for electronic devices.
Testing can be performed with actual projectiles fired at armor hulls and turrets (testing performed on a military base). Usually, however, testing is performed using various types of shock machines such as those used in MIL-S-901 Shipboard Shock.
Method 526.2 Rail Impact
Rail Impact testing is used to evaluate tie-down methods for systems that will be transported on rail cars. The impact of concern is the one that occurs during the coupling process. These tests are performed using a locomotive and a cushioned draft car with the test item secured to it. The car is conveyed at speeds of 4, 6, and 8 mph and impacted into a draft car that is loaded (upweighted) and has its brakes set.
Choosing Appropriate MIL-STD-810 Shock Methods
Because time and money are limited resources, decisions must be made as to which testing will be performed. While requirements can offer a degree of clarity into relevant test methodology selection, a thorough assessment must be made through a Life Cycle Environmental Profile (LCEP) to develop an effective test matrix.
The LCEP will map all anticipated logistical, tactical, and operational shock events and offer appropriate parameters for test selection and severity. These inputs, combined with requirements and measured data, are then placed into an Environmental Issues/Criteria List (EICL). Selection can then be made based on a risk assessment of the product's vulnerabilities and the probability of an environmental stress occurring.
Characteristics of Shock Types
Mechanical shocks are generally events that have a short duration of under a second and are usually limited to frequencies below 4 kHz. Other types of shock such as Pyroshock (Pyrotechnic Shock), Ballistic Shock, and Shipboard Shock (MIL-DTL-901) can have much higher frequency components.
As we have seen, pyroshocks are typically less than 20 milliseconds in duration with a frequency range of 100 Hz to 1 MHz. Therefore consideration must be given to the test item's vulnerabilities to shock frequency content as well as to g forces.
Making the Decisions
MIL-STD-810 provides guidance for selection of appropriate test methodologies. This allows for the development of systems in a timely fashion without excessive testing and over-engineering. When selecting, for example, appropriate Transit Drop scenarios for tactical situations, look at those with the greatest impact velocity and then make a risk assessment as to which of these would pose the greatest threat to the test item based on the probability of the event occurring.
CVG Strategy Can Help
Our team of test and evaluation experts can assist you in creating a meaningful test program that meets requirements and prevents costly failures at the operational test stage. CVG Strategy provides an array of services to help you with environmental and EMI/EMC testing. We also offer classes in MIL-STD-810 to help you keep current with the latest developments in this important standard. | <urn:uuid:60a19977-6dae-42bf-a25c-5aa60557e44d> | CC-MAIN-2022-40 | https://cvgstrategy.com/mil-std-810-shock/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00079.warc.gz | en | 0.935965 | 1,288 | 2.5625 | 3 |
Swarm intelligence goes far beyond what current IoT applications are offering the world in the way of futuristic improvements and gee-whiz prototypes seen in many of today’s more progressive cities and homes.
As you’re about to see, they’re perfect for certain types of business and industry applications, making the world safer, better, and easier in an exciting number of ways.
Tackling Dangerous Tasks
Swarm-bots can be used to tackle dangerous tasks to reduce or eliminate the risk for humans. Based on the level of danger, there’s potential for loss of robot individuals, necessitating a focus on fault tolerance of the swarm. Some examples include:
- Search and rescue
- Cleanup of toxic spills
When High Flexibility and Scalability Are Needed
There are tasks where it’s difficult or impossible to estimate at the beginning the number of resources needed to accomplish the task. For example, if you’re allocating resources for managing an oil spill or leak, you cannot foresee the oil output or temporal evolution. This makes resource allocation doubly difficult.
Flexibility and scalability should be the focus of the swarm-bots deployed to handle such tasks. You can add or remove robots as needed to give the right amount of resources according to the evolving requirements of the job. Some applications include tracking, cleaning, and specific search and rescue scenarios.
Swarm robotics may be useful when it’s necessary to accomplish tasks within very large or informal environments. In these cases, you don’t have the infrastructure to control the robots, such as a global localized system or a communication network.
Swarm-bots fit the bill because of their ability to work autonomously without any infrastructure or centralized control system. Examples include:
- Extraterrestrial or underwater excursions
- Search and rescue missions
Certain environments may change rapidly over time, such as after natural disasters like hurricanes or earthquakes. Buildings may collapse, altering the original layout of the environment and creating unforeseen hazards. Here, swarm robots that are customized for high levels of flexibility will be needed.
- Search and rescue
- Disaster recovery tasks
How Swarm Robots Get Their Power
It only makes sense to have swarm robots powered by batteries. Experts recommend lithium polymer batteries because they provide high current output and energy density. They are also lightweight, flexible in format, and safe to use, as they resist overcharging. However, these types of batteries can be dangerous if not treated properly, so they must be protected from over/under-voltage, overcurrent, and overheating.
For a large robot swarm, it’s impractical to design them to be manually recharged. Instead, a module can be written to teach the robot to find a docking station for recharging when it runs low on power.
Consider that finding and docking at a charging station is a higher-level task than the swarm-bots typically do. Therefore, apart from the module, charging stations should be designed to be as intuitive as possible. For example, the inclusion of high-speed, two-directional communication between the computer and bot during recharging would be useful.
Part of the battery system is charging and discharging management, which ensures battery health and safety. The discharge management circuits must be installed in the swarm bot for obvious reasons. However, the charging management circuitry may be placed outside the bot (in the docking station's computer system).
There are upsides to external deployments, such as more straightforward bot design, which makes them cheaper to acquire. But if the aim is to maintain a bot’s autonomy within the swarm, then it must be equipped with all necessary tools to enable proper functionality.
Charging is central to a bot’s functionality, so it makes sense to have both charging and discharging management systems within the bot.
How Swarm Robots Connect to the Cloud – Swarm Computing
For swarm robotics, you have autonomous micro-machines that need to communicate with each other and with the cloud when necessary. This evolution that brings together cloud computing with swarm robotics is called swarm computing, and it’s still in its infancy.
Swarm computing brings together cloud principles with network principles, to give rise to higher functionality and flexibility of swarms or IoT ecosystems. It focuses on increasing data sharing and mobility, as well as allowing temporary control of devices connected to the cloud.
The most visible advantage of investing in cloud robotics will be the ability to delegate more difficult tasks to higher-intelligence agents in the cloud. Cloud cooperation should enable swarm robots to, for instance, connect to the more intelligent bots in the cloud when meeting more difficult challenges. There should be real-time data processing, support, and response, whether by humans or robots, to inform the autonomous robots’ actions/responses.
Right now, we still need lots more extensive research to determine how to operate, manage, and deploy highly distributed cloud services. This will demand high levels of innovation and automation but will be essential given the proliferation of IoT and swarm intelligence.
Real-World Applications of Swarm Robotics
Even though swarm robotics and IoT is a relatively new field, only a few years old, different organizations are already diving headfirst into the field. Below are some examples of companies and organizations using swarm robotics to power various aspects of their operations.
DOD Micro-drones for Military Use
The military application of swarm robotics is perhaps the most significant of all. The US Department of Defense has already demonstrated one of the largest micro-drone swarms in China Lake, California. The swarm showed advanced swarm intelligence, such as decision-making, self-healing, and adaptive formation flying.
Perdix drones, as they are called, work as a collective organism, sharing a distributed brain that enables them to adapt to each other and make decisions to benefit the entire swarm. Without a leader, the swarm adapts gracefully to drones leaving or entering the team.
Ideally, the Pentagon hopes to use these small, cost-effective, and autonomous drones to accomplish the same things they used large, expensive drones to do. However, they were keen to mention that drones will not replace humans in the future battlefield. Instead, they would equip humans with information to make better decisions faster.
Inspired by biological phenomena, Wyss Institute researchers are developing RoboBee prototypes, which can perform various disaster relief and agriculture-related tasks. A RoboBee is very small, half the size of a paper clip, and weighs 0.1 grams or less. Its flight is powered by “artificial muscles,” which are materials that contract when exposed to voltage.
Some RoboBee models can swim underwater or fly, as well as “perching” on surfaces using static electricity. Researchers wanted to create micro-aerial, autonomous vehicles that could achieve self-directed flight and work coordinately when in large groups.
RoboBees can be used to assess infrastructural damage after a natural disaster or act of terrorism, as well as locate victims for smart rescue efforts.
Cost-Effective Modular Robots
A research team at the Department of Mechanical Engineering at the University of Toronto developed a modular robot called mROBerTO. Modular robots are bots that can autonomously change shape and perform different functions.
Such abilities are essential to swarm robotics research, where thousands of small bots are needed to test out behavior algorithms and functionalities. For these miniature robots to be cost-effective, it would be necessary to make sure each unit costs as little as possible. Otherwise, the cost of research would be prohibitive, holding back the advancement of the field.
Enter mROBerTO, a modular robot that can be made from commonly available and affordable materials. mROBerTo can be used for a variety of applications calling for miniature swarm robots, although its primary purpose was to give swarm robotics researchers cheap physical tools to test out swarm behavior algorithms.
These robots are designed in such a way as to enable researchers to change hardware parts to test out different algorithms, shapes, and functions using the same robot skeleton. The modular millirobots are made such that changing one section/module doesn't affect the functionality of the other sections.
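To give a feel for the kind of behavior algorithm these platforms are used to test, here is a minimal simulation sketch of a simple aggregation rule: each robot senses only its nearby neighbors and nudges itself toward their average position, yet the swarm as a whole clusters together. All of the parameters are arbitrary illustration values and have nothing to do with mROBerTO's actual hardware or firmware.

import random

NUM_ROBOTS, SENSE_RANGE, STEP = 20, 0.3, 0.05

# Start with robots scattered randomly in a unit square.
robots = [(random.random(), random.random()) for _ in range(NUM_ROBOTS)]

def tick(robots):
    """One simulation step: every robot moves toward the centroid of the
    neighbors it can sense (itself included, so the list is never empty)."""
    updated = []
    for (x, y) in robots:
        near = [(nx, ny) for (nx, ny) in robots
                if abs(nx - x) + abs(ny - y) < SENSE_RANGE]
        cx = sum(p[0] for p in near) / len(near)
        cy = sum(p[1] for p in near) / len(near)
        updated.append((x + STEP * (cx - x), y + STEP * (cy - y)))
    return updated

for step in range(200):
    robots = tick(robots)

print("final spread:", max(x for x, _ in robots) - min(x for x, _ in robots))

Swapping in a different rule (dispersion, pattern formation, leader following) and watching what the collective does is precisely the sort of experiment that cheap modular swarm hardware makes affordable to repeat in the real world.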
Future of Swarm Robotics Research
There are hundreds of possibilities for swarm robotics research across many different fields. Theoretically, the technology will be useful in areas where human intervention would be impossible (e.g., nanomedicine) or too dangerous (e.g., search and rescue, nuclear reactors, chemical plants, mines, etc.).
However, this will only be possible if researchers find ways to build robot swarms cost-effectively. This is the line of thinking the makers of mROBerTO above adopted (although at $60 each, the cost of a swarm of 100 robots is $6,000, which is still high).
Similarly, most of the current robotics research has been carried out in controlled lab environments, which do not mimic real-world constraints. It will become crucial, then, to find ways to take swarm robotics out of the lab and into the real world, particularly since most applications will require high flexibility of the swarms in rapidly changing environments and without external intervention.
A joint team of researchers from the University of West England and the University of Bristol is currently working on techniques to develop autonomous discovery of suitable swarm strategies when swarms are deployed in real-life situations.
Future research in this area will involve using dynamic environments to test swarm robot responses and determine the designs that will be better suited to real-world applications.
Swarm Robots: How Much Is All This Going to Cost?
This is a complex question because the cost of a robot swarm depends on so many factors, like:
- Number per swarm
- Level of autonomy
- Level of flexibility/adaptability needed
For example, the University of Colorado wanted to acquire swarm robots called Droplets, which were self-charging and worked in groups of 100 or 1000. The estimated cost was $10,000 for just 100 Droplets in 2014.
It’s easy to understand why, despite its value, swarm robotics research is still out of reach for many businesses. Typically, the people who make these robots are expensive – they are computer scientists, computer engineers, or electronic engineers with advanced degrees. The parts that make these robots – motors, cameras, sensors, etc. – are also expensive.
Swarm robotics is set to become one of the most significant technological advancements we’ll see this century. Its applications, particularly in disaster recovery and management, are endless and powerfully significant.
Of course, swarm robotics research is still in its infancy for many applications, subject to different challenges you’ve learned about here. In time, as better technology becomes cheaper and more accessible, we will see swarm robotics becoming part and parcel of business operations and decision making — for the greater good. | <urn:uuid:bd1c38c8-5a51-4c45-8333-3955188f3b3b> | CC-MAIN-2022-40 | https://www.iotforall.com/swarm-robotics-applications | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00079.warc.gz | en | 0.931948 | 2,286 | 2.5625 | 3 |
XFP vs SFP+: What Are the Differences?
In fiber optic networking, optical module is the indispensable building block and the enabler of seamless data transmission. 10G fiber optic transceivers are still popular in the market such as 10G XFP and 10G SFP+, then XFP vs SFP+, what are the differences? How about their inter-compatibility? Can we connect XFP with SFP+ module? Find all the answers in the following.
XFP vs SFP+: Definition
The XFP (10 Gigabit Small Form Factor Pluggable) is a standard for modules for high-speed network and telecommunication links that use optical fiber. XFP modules are hot-swappable, protocol-independent and they typically operate at near-infrared wavelengths (colors) of 850nm, 1310nm or 1550nm. Also, XFP transceivers can operate over a single wavelength or use dense wavelength-division multiplexing techniques. Principal applications of XFP modules include SONET OC-192, SDH STM-64, 10 Gbit/s Optical Transport Network (OTN) OTU-2, and parallel optics links.
The SFP+ (Enhanced Small Form-factor Pluggable) is an enhanced version of the SFP that supports data rates up to 16 Gbit/s. 10G SFP+ modules can be applied for SONET OC-192, SDH STM-64, OTN G.709, CPRI wireless, 16G Fibre Channel, and the emerging 32G Fibre Channel application.
Figure 1: XFP vs SFP+
XFP vs SFP+: Specifications
XFP vs SFP+, what are the differences? Although both of these modules are mainly used in 10G fiber optic networking, 10G XFP transceivers differ from 10G SFP+ optics in some specifications. Let's look at the details in the following table.
| Specification | XFP | SFP+ |
|---|---|---|
| Standard | IEEE 802.3ae; XFP MSA | IEEE 802.3ae; SFF-8431; SFF-8432 |
| Data Rate | 6Gbps; 8.5Gbps; 10Gbps | 6Gbps; 8.5Gbps; 10Gbps |
| Wavelength | 850nm; 1310nm; 1550nm; CWDM; DWDM; BIDI; Tunable; Copper | 850nm; 1310nm; 1550nm; CWDM; DWDM; BIDI; Tunable; Copper |
| Fiber Type | OM3; OM4; OS1; OS2 | OM3; OM4; OS1; OS2 |
Comparing XFP vs SFP+, it is clear that SFP+ has some advantages over XFP:
10G SFP+ optics have a smaller footprint than XFP modules, which also enables greater port density. That is because SFP+ transceivers leave more circuitry to be implemented on the motherboard instead of inside the module: functions such as signal modulation, the MAC, CDR and EDC are moved to the host board. Moreover, XFP is the older and more expensive technology, which is another reason 10G SFP+ modules have been gaining market share.
XFP vs SFP+: Application Scenarios
XFP optics are connectivity options for data center, enterprise wiring closet, and service provider transport applications.
10G SFP+ Transceivers are widely used on 10G switches, routers, servers, NICs and other transmission equipment. Featuring low power consumption and high speed, SFP+ is suitable for data center, enterprise wiring closet and other environments.
Additionally, 10G XFP and SFP+ transceivers can be inter-compatible in one Ethernet network on condition that their protocols are consistent and they conform to the same wavelength and signaling rate (as the figure presented below).
Figure 2: XFP vs SFP+: Interconnection Application
XFP vs SFP+: FAQs
Q: Can XFP module connect with SFP+ transceiver?
A: XFP connector and SFP+ connector are both LC duplex. For example, the XFP 10G 1310nm transceivers can talk to the SFP+ 10G 1310nm modules via LC duplex fiber optic cable. The following figure shows a typical interconnection application with XFP and SFP+ modules.
Q: Can I plug SFP+ module into the XFP slot?
A: No. Since the sizes of the two form factors are completely different, it is not feasible to use an SFP+ module in an XFP slot, or vice versa. Please make clear which port type the switch is equipped with, and remember that XFP and SFP+ optical transceivers are not interchangeable.
Figure 3: SFP+ Module Can't be Plugged in XFP Slot
Q: What are the differences between XFP vs SFP+ vs SFP?
A: SFP and SFP+ are the same size but differ in speed and compatibility. Unlike XFP and SFP+ modules, which are used in 10GbE applications, SFP is for 100BASE or 1000BASE applications. SFP+ ports can accept SFP optics at a reduced speed of 1 Gbit/s, but an SFP+ transceiver cannot be plugged into an SFP port. Excerpt from SFP vs SFP+ vs SFP28 vs QSFP+ vs QSFP28, What Are the Differences?
Q: How's the prospect of 10G XFP and SFP+ modules?
A: Though the pursuit of higher bandwidth is unstoppable, 10GbE won't leave the market any time soon in many use cases. SFP+ transceivers are still a mainstream option offering a smaller size, lower power consumption and lower cost. While not as popular as SFP+ transceivers, 10G XFP modules still play an irreplaceable role in some 10G network applications.
To summarize, XFP vs SFP+, there are many differences. While SFP+ transceivers have a smaller size than 10G XFP modules, considering cost, port density and applications, 10G SFP+ transceivers are favored in the market as a cost-effective solution. | <urn:uuid:87cf5ae1-954b-413e-9945-a75b2bc49a0a> | CC-MAIN-2022-40 | https://community.fs.com/blog/can-we-connect-xfp-and-sfp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00079.warc.gz | en | 0.874421 | 1,395 | 2.515625 | 3 |
Once spam hits your email inbox, you become a target. When it comes to technology, humans tend to be the weakest link in most IT security situations. Attackers constantly try to trick users into clicking on things that they shouldn't, through a variety of methods. Oftentimes, these “tricks” arrive via email, as email can target a very large number of people and is a very “budget-friendly” attack. If users happen to click the wrong thing within a spam email, bullseye: internal data becomes exposed.
Since email is commonly used as a way to exploit users and their data, spam filtering has grown in importance and relevance. Organizations must utilize a spam filter to reduce the risk of users clicking on something they shouldn't, in turn keeping their internal data shielded from a cyber attack.
HERE'S HOW IT WORKS
Spam filtering uses a filtering solution within your email platform, run by a set of procedures that help determine which incoming emails are spam and which are safe for the user to open. According to Spamhaus, the United States is ranked #1 among the countries with the most live spam issues. Spam is getting sent to users, and it's getting sent a lot.
The main types of filtering analyze the source of the email, whether that source has had any complaints or has ever been blacklisted, the content of the email, and subscriber engagement. All of this is tracked and sorted before a message hits a user's inbox. Spam filtering solutions can be hosted in several ways to support organizations, whether through a cloud service, on-premise technology, or software installed on organizational computers that integrates with email platforms.
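As a toy illustration of the content-analysis piece, a filter can be thought of as assigning a score to each message and quarantining anything above a threshold. Real products combine far more signals (sender reputation, authentication results, machine-learning models); the keywords, weights and threshold below are invented purely for illustration.

# Toy content/reputation scoring sketch -- not how any particular product works.
SUSPICIOUS_PHRASES = {
    "verify your account": 4,
    "wire transfer": 3,
    "you have won": 4,
    "urgent": 2,
}

def spam_score(subject, body, sender_blacklisted=False):
    text = f"{subject} {body}".lower()
    score = sum(weight for phrase, weight in SUSPICIOUS_PHRASES.items()
                if phrase in text)
    if sender_blacklisted:        # source reputation counts as well
        score += 5
    return score

QUARANTINE_THRESHOLD = 6
msg_score = spam_score("URGENT: verify your account",
                       "Please complete the wire transfer today.")
print(msg_score, "-> quarantine" if msg_score >= QUARANTINE_THRESHOLD else "-> deliver")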
WHY IT'S IMPORTANT
Implementing spam filtering is extremely important for any organization. Not only does spam filtering help keep garbage out of email inboxes, it also improves the quality of life of business email, because inboxes run smoothly and are only used for their intended purpose. Spam filtering is essentially an anti-malware tool, as many email attacks try to trick users into clicking a malicious attachment, supplying their credentials, and much more.
According to Radicati Research Group Inc., email spam costs businesses up to $20.5 billion each year, and that number will only continue to rise. Spam filtering prevents these spam messages from ever entering an inbox in the first place, keeping organizations from adding to the growing statistic of lost revenue.
Graymail. Another important aspect of spam filtering is the ability to eliminate “graymail” from user inboxes as well. Graymail is email that a user has previously opted to receive, but doesn't really want or need in their inbox. Graymail isn't considered spam, as these emails aren't used to infiltrate an organization. What is considered graymail is determined by the actions of the user over time, and spam filtering platforms will pick up on that to determine what is or is not wanted within an inbox. A good spam filtering platform lets users adjust settings to block most graymail, rather than having to manually unsubscribe from every single sender.
According to Proofpoint, 40% of organizations targeted by email fraud received between 10 and 50 attacks in the beginning of 2018, and the number of companies receiving more than 50 attacks rose by 20% in comparison to 2017. Five Nines has utilized Proofpoint as its spam filtering platform for a couple of different reasons.
First, Proofpoint is hosted as a spam filtering cloud service. This is preferred because inboxes get filtered before mail ever enters the Five Nines or client networks, which cuts down on malicious traffic immensely. Because spam and email attacks are constantly evolving, the threat response must continuously evolve as well, which is why Proofpoint invests heavily in improving its spam filtering platform.
Without spam filters, an organization's email setup wouldn't function properly, and internal data would have a higher risk of exposure to a cyber-attack. Consult with an IT team about properly implementing a spam filtering system for the well-being of your organizational email system, the safety of your data, and the peace of mind of your users.
To learn more about the red flags users should watch for when navigating their email inbox through a free downloadable graphic, click below. | <urn:uuid:0ae41935-24ea-42ce-997a-1a38312a635e> | CC-MAIN-2022-40 | https://blog.fivenines.com/topic/spam-filtering | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00079.warc.gz | en | 0.964655 | 887 | 2.546875 | 3 |
Biometric authentication is one of today’s leading cybersecurity technologies, but it may not be as secure as it seems. While biometric authentication may be more difficult to hack, it does come with its own unique risks. These risks need to be addressed for biometric authentication to be truly secure. Users also need to be aware of the state of biometric authentication before scanning their face or fingerprint.
Protecting Biometric Data
Biometric authentication is in many ways more secure than passwords or PINs, especially when used as part of multi-factor authentication. However, it is important to remember that biometric authentication data has to be stored just like any text password. If alpha-numeric data can be stolen by cybercriminals, so can biometric data. The only difference is that biometric data is far more valuable.
Since biometric data is often used to secure high-value and sensitive data, more is at risk if biometric data is stolen. In fact, with facial recognition scans, fingerprints, and behavioral biometric data, a hacker could easily commit identity theft or tamper with biometric databases, such as those used to identify criminals.
So, while users may be getting a more secure login method with biometric authentication, they are putting more valuable information at risk if a network or server is compromised.
It is also important to remember that a stolen password can be replaced after the fact. Even if something as sensitive as financial information is compromised, users can still get new accounts and use more secure passwords and PINs in the future. However, if biometric authentication data is stolen, it is irreplaceable. It isn’t possible for users to simply get a new face or new fingerprints. Once biometric data is compromised, it is effectively permanent.
As a result, securing biometric data is more intensive than it may be for other types of data. There are solutions, though. For example, industry leaders have suggested that authentication apps store biometric data exclusively in local storage, such as on a user's smartphone, rather than on a large server. Before using biometric authentication, users need to carefully consider where the biometric scans will be stored and how that storage will be protected.
Many modern smartphones feature facial recognition login options. This easy authentication method is quick and generally secure. After all, it would be extremely difficult to “hack” someone’s facial recognition scan. Unfortunately, emerging technologies are changing that.
Deepfake technology is making it possible to trick facial recognition systems using convincing photos or videos of someone’s face. Similar technology exists for making fraudulent fingerprint scans, as well. This may be more time-consuming than some other hacking methods, but cybercriminals can accomplish it if they want to get into a system badly enough. In fact, it is even possible to fake other types of biometric authentication, such as voiceprints.
In 2020, researchers used a deepfake algorithm to hack airport security facial recognition systems in a friendly test of biometric cybersecurity. The algorithm could trick facial recognition systems into mistaking one person's face for another's using image-swapping and morphing techniques. One researcher on the project even pointed out that there are many similarities between facial recognition algorithms, which could make it easier for hackers to create a successful deepfake algorithm.
Privacy and Legality
One unique risk that comes with biometric authentication is how biometric data is handled by companies, businesses, and websites. For example, if a biometric authentication company collects users’ facial and fingerprint scans and sells them to a local law enforcement agency, this may violate privacy laws. Biometric authentication data remains a murky area when it comes to legislation, though.
Since biometric authentication technology is still relatively new, federal legislation has not yet been established to regulate it. Concerns have been raised about what companies do with users’ biometric data, though. If biometric data is shared without a user’s knowledge, it could pose a risk of identity theft.
Additionally, it is difficult to know if a company is using biometric data to track a user’s daily activity and even harder for users to stop that tracking if they want to opt out. With internet advertising tracking, one can simply use a private browser or ad blocker. The same strategies don’t apply to biometric data.
This has led several local governments to put regulations in place to protect users from the risks associated with biometric authentication. California, for example, has a law guaranteeing citizens the right to “access, opt-out of the sale of, and delete” their facial recognition data from databases.
Several other states, such as Colorado and Illinois, have similar laws in place requiring companies to obtain users’ consent before handling their biometric data. Laws like these reduce the risks associated with biometric authentication.
Biometric Authentication Safety
Most of the time, biometric authentication is a highly secure login method for protecting personal information. Users do need to be careful about where and when they use it, though. Biometric data should be treated like one’s Social Security number, birth certificate, or other valuable personal data. Users can utilize biometric authentication safely by staying aware of who they are trusting their data with and the cybersecurity methods they have to protect it. | <urn:uuid:5c12b95d-a690-4a8e-bf1b-63985010a1d8> | CC-MAIN-2022-40 | https://cyberexperts.com/risks-of-using-biometric-authentication-in-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00280.warc.gz | en | 0.926404 | 1,099 | 2.75 | 3 |
Most computer systems are still very easy to hack, due to a vulnerability in memory chips produced by Samsung, Micron and Hynix, according to a study by researchers from VUSec of the Vrije Universiteit Amsterdam.
The vulnerability in question is called Rowhammer, a design flaw in a device's internal memory (DRAM) chips. By exploiting the flaw, an attacker could gain control of a device. Rowhammer was made public eight years ago.
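The core access pattern behind Rowhammer is small: repeatedly read two addresses that map to different rows of the same DRAM bank, flushing them from the CPU cache so that every read actually reaches the memory chips. The sketch below illustrates that classic pattern from the early Rowhammer research; finding a suitable address pair and exploiting any resulting bit flips in neighboring rows are the genuinely hard parts and are not shown.

#include <emmintrin.h>   /* _mm_clflush (x86 SSE2 intrinsic) */
#include <stdint.h>

/* addr_a and addr_b are assumed to map to two different rows in the same
 * DRAM bank; picking such a pair requires knowledge of the DRAM mapping. */
static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*addr_a;                        /* activate row A */
        (void)*addr_b;                        /* activate row B */
        _mm_clflush((const void *)addr_a);    /* evict so the next read hits DRAM */
        _mm_clflush((const void *)addr_b);
    }
    /* After enough activations, bits in physically adjacent "victim" rows
     * may flip -- the effect that TRR was supposed to suppress. */
}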
After an abundance of controversial Rowhammer attacks, CPU and DRAM manufacturers were eagerly looking for the definitive hardware solution to the Rowhammer problem. They came up with Target Row Refresh (TRR).
TRR as the silver bullet
It was assumed that Rowhammer was no longer a danger on the newest generation of systems with DDR4 memory modules protected by TRR. Manufacturers presented TRR as the silver bullet and advertised Rowhammer-free products. The chips are in PCs, laptops, telephones and servers.
No real solution
However, the VUsec researchers, led by Herbert Bos, Cristiano Giuffrida and Kaveh Razavi, and in collaboration with scientists from ETH Zurich and Qualcomm, noticed that very little is actually known about how TRR works and how it is applied and how effective it actually is.
In the research of their PhD students Emanuele Vannacci and Pietro Frigo, they analyzed TRR. They came to the conclusion that TRR does not solve the Rowhammer problem, and that there is no prospect of a solution in the near future.
DDR4 chips more vulnerable than their predecessors
Cristiano Giuffrida, researcher at VUsec, explains: “The results of our research are worrisome and show that Rowhammer is not only still unsolved, but also that the vulnerability is widespread, even in the very latest DRAM chips. Moreover, we see that the new DDR4 chips are even more vulnerable to Rowhammer than their DDR3 predecessors. ”
Security by obscurity
In their research, the computer scientists also question the “security by obscurity” approach used by manufacturers. That means that the mitigation for a vulnerability only works and therefore offers security, if nobody finds out how the mitigation actually works.
“Sooner or later someone will naturally discover how the mitigation actually works. And then safety is gone. Manufacturers say that they keep their solutions secret because of market competition.”
Tech companies nervous
That the Rowhammer bug has not been tackled by the TRR solution is bad news for the big tech companies and reason for nervousness.
According to the researchers, a cloud provider that wants to guarantee the security for its customers should try to physically separate untrusted programs from other software and data. This can of course be quite expensive.
For consumers themselves, the consequences of the Rowhammer bug are probably not very large, because there are simpler ways to hack phones or computers.
Far from a solution
VUsec has also worked on various software solutions. While these solutions provide stronger guarantees, they are unfortunately expensive. Giuffrida: “Ultimately, the problem must be resolved deep in the hardware and that is only possible by the hardware manufacturers.
“In the meantime, our software solutions can help, in combination with measures that administrators can now take to make it harder for attackers (for example, by increasing the ‘refresh rate’ of your memory).” | <urn:uuid:411fe609-312e-4f4b-ad24-927bfe460d7d> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2020/03/13/memory-chips-vulnerability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00280.warc.gz | en | 0.958659 | 720 | 3.109375 | 3 |
A Google Photos web version vulnerability enabled websites to learn the history of a user’s location based on the images stored in the account.
The flaw affected the Google Photos search endpoint, which allows users to quickly find images based on aggregated metadata, such as geographic location, creation date, and tags produced by an artificial-intelligence algorithm that recognizes objects and people's faces.
The main advantage of the service's search function is that human-readable queries can be used to discover pictures relevant to a name, place, date, object, or a combination of these. An example of a query would be “Zanzibar Sunset.”
Ron Masas, a security researcher at Imperva, found that a browser-based timing attack, which takes advantage of how the same-origin policy (SOP) typically works in browsers, can help an attacker determine a user's position or travel history. SOP is the security mechanism for web applications that prevents the interaction of resources loaded from different origins.
However, in a typical configuration cross-origin writes are allowed while cross-origin reads are not.
The researcher measured how long searches for non-existent photos took and compared that against the time needed for searches that returned results. Using location tags, Masas could determine whether images from certain places were stored in a user's account, indicating a visit to that country.
A malicious website could add a date to the query and narrow down the time range in which the user was present at a given location. Naturally, testing several tag types would reveal additional pieces of information.
The attacker does not need to extract all the information at once; the attack can keep track of what it has already learned and resume where it left off, he added. In a video showing the proof-of-concept attack, Masas demonstrates how a third-party website can measure the time it takes to search for countries in which a user took photos.
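Conceptually, the measurement at the heart of this class of attack is just a stopwatch around a cross-origin request issued from a page the victim visits while logged in elsewhere. The sketch below is a generic illustration of that idea, not Imperva's actual proof of concept; the URL and query parameter are placeholders, and the specific Google Photos issue has since been addressed.

// Generic cross-origin timing measurement sketch (illustrative only).
async function timeSearch(query) {
  const url = "https://photos.example.com/search?q=" + encodeURIComponent(query);
  const t0 = performance.now();
  try {
    // "no-cors" lets the request complete even though the response body is
    // opaque to this page; only the elapsed time is observable.
    await fetch(url, { mode: "no-cors", credentials: "include" });
  } catch (e) {
    // Even opaque failures take a measurable amount of time.
  }
  return performance.now() - t0;
}

// Compare a query that cannot match anything against a real tag; a search
// that is consistently slower than the baseline suggests matching photos exist.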
In the US Army's Tactical Deception Field Manual FM 90-2, deception is described as those measures designed to mislead enemy forces by manipulation, distortion, or falsification of evidence to induce the enemy to react in a manner prejudicial to his interests. In the cyber world, the deception concept and deception techniques were introduced in the early 1990s with the use of honeypots.
Honeypots are decoy systems that attract attackers to attempt to compromise them; their value lies in being probed, attacked or compromised. In addition, honeypots can be used to gain an advantage in network security. For instance, they provide intelligence based on information and knowledge obtained through observation, investigation, analysis, or understanding.
Deception techniques such as honeypots are powerful and flexible techniques offering great insight into malicious activity as well as an excellent opportunity to learn about offensive practices. In this post I will be introducing how to create a honeypot for research purposes to learn about attack methods.
If you want to learn more about computer deception I recommend reading Fred Cohen's articles. In regard to honeypots, I definitely recommend the landmark book authored by Lance Spitzner in 2002 and published by Addison-Wesley. One of the many things Lance introduces in his book is the concept of level of interaction to distinguish the different types of honeypots. Basically, this concept provides a way to measure the level of interaction that the system will provide to the attacker. In this post I will be using a medium interaction honeypot called Kippo.
An important aspect before running a honeypot is to make sure you are aware of its legal implications. You might need to get legal counsel with privacy expertise before running one. The legal concerns are normally around data collection and privacy, especially for high-interaction honeypots. Also, you might need permission from your hosting company if you would, for example, run a honeypot on a virtual private server (VPS). Lance's book has one full chapter dedicated to the legal aspects. Regarding hosting companies that might allow you to run a honeypot, you might want to check Solar vps, VpsLand or Tagadap.
Let’s illustrate how to setup the Kippo SSH honeypot. Kippo is specialized in logging brute force attacks against SSH. It’s also able to store information about the actions the attacker took when they manage to break in. Kippo is considered a low interaction honeypot. In addition I will be demonstrating how to use a third party application called Kippo-graph to gather statistics and visualize them.
Based on the tests made, the easiest way to set up Kippo is on a Debian Linux distro. To install it we need a set of packages which are mentioned in the requirements section of the project page. In my case I had a Debian 6 64-bit system with the core build packages installed, and did the following:
I used apt (the advanced packaging tool), which is the easiest way to retrieve, configure and install Debian packages in an automated fashion. I installed subversion to be able to download Kippo, plus all the packages mentioned in the requirements. Then I verified the Python version to make sure it is the one needed. During the installation of the mysql-server package you should be prompted to enter a password for MySQL.
# apt-get update
# apt-get install subversion python-zope python-crypto python-twisted mysql-server ntp python-mysqldb
# python -V
Check the status of MySQL, then try to login with the password inserted during the installation:
# service mysql status
# mysql -u root -p
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 42
Server version: 5.1.66-0+squeeze1 (Debian)
Check if we have a timesource configured and NTP is syncing:
Download Kippo using svn. Create the initial configuration file, then log in to MySQL and create the necessary database and tables:
#svn checkout http://kippo.googlecode.com/svn/trunk/ /opt/kippo
#cp kippo.cfg.dist kippo.cfg
mysql -u root -p
mysql> CREATE DATABASE kippo;
mysql> USE kippo;
mysql> SOURCE /opt/kippo/doc/sql/mysql.sql
mysql> show tables;
Edit the kippo.cfg file and change the hostname directive, SSH port, and banner file. Also uncomment the directives shown below regarding Kippo's ability to log into the MySQL database. Make sure you adapt the fields to your environment and use strong passwords:
ssh_port = 2222
hostname = server
banner_file = /etc/issue.net
host = localhost
database = kippo
username = root
password = secret
Edit the file /etc/issue.net on the system and insert a banner similar to the following:
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to law enforcement officials.
Verify which usernames and passwords are used to deceive the attacker into believing he got the correct credentials and broke in:
# cd /opt/kippo/data
# cat userdb.txt
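Each line of userdb.txt follows the format username:uid:password; any credential pair listed here will be accepted by the honeypot. At the time of writing the default file ships with a single entry along these lines (treat the exact contents as version-dependent):

root:0:123456

You can add further lines to accept other easily guessed credentials and increase the chance of an attacker getting a successful-looking login.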
Then add a non-privileged user that will be used to launch Kippo. It's also necessary to change the ownership of the Kippo files and directories to the user just created:
# useradd -m --shell /bin/bash kippo
# cd /opt/
# chown kippo:kippo kippo/ -R
# su kippo
$ cd kippo
$ ./start.sh
Starting kippo in background…Generating RSA keypair…
By default, as you might have noticed in the kippo.cfg, Kippo runs on port 2222. Because we start Kippo as a non-privileged user, we cannot change it to port 22. One way to circumvent this is to edit the /etc/ssh/sshd_config file and change the listening port to something unusual, which will then be used to manage the system. Then create an iptables rule that will redirect TCP traffic destined to port 22 to the port where Kippo is running.
#cat /etc/ssh/sshd_config | grep Port
#service ssh restart
#iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
Depending on your setup you might or might not need additional firewall rules. In my case I had the system directly exposed to the Internet, therefore I needed to create additional firewall rules. For iptables on Debian you might want to check this wiki page.
Create a file with the enforcement rules. I will not be including the redirect rule in it, because that allows me to control when to start and stop redirecting traffic.
# Sample firewall configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2222 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 48022 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 48080 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
I will be allowing ICMP traffic, plus TCP ports 22 and 2222 for Kippo and port 48022 to access the system. Port 48080 will be used for the Kippo-Graph web interface.
Note that you might want to add the --source x.x.x.x directive to the rules that allow access to the real SSH and HTTP daemons, allowing only your IP address to connect to them.
Then we apply the iptables rules redirecting the contents of the file to the iptables-restore command. Then we need a small script for each time we restart the machine to have the iptables rules loaded as documented on the Debian wiki.
#iptables-restore < /etc/iptables.up.rules
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.up.rules
Change the file mode bits
#chmod +x /etc/network/if-pre-up.d/iptables
Subsequently we can install kippo-graphs. To do that we need a set of additional packages:
#apt-get install apache2 libapache2-mod-php5 php5-cli php5-common php5-cgi php5-mysql php5-gd
After that we download Kippo-Graph into the webserver root folder, untar it, change the permissions of the generated-graphs folder and change the values in config.php.
# wget http://bruteforce.gr/wp-content/uploads/kippo-graph-0.7.2.tar --user-agent=""
# md5sum kippo-graph-0.7.2.tar
#tar xvf kippo-graph-0.7.2.tar
# cd kippo-graph
# chmod 777 generated-graphs
# vi config.php
Edit the ports configuration settings under the Apache folder to change the listening port to something hard to guess, like 48080, and change the VirtualHost directive to the chosen port.
#service apache2 restart
Then you can point the browser to your system's IP and load the Kippo-Graph URL. After you have confirmed it's working you should stop Apache. In my case I only start Apache when I want to visualize the statistics.
With this you should have a Kippo environment running, plus the third-party graphs. One important aspect is that every time you reboot the system you need to: access the system using the port specified in the sshd config file; apply the iptables traffic redirection; stop the Apache service and start Kippo. This can be done automatically, but I prefer to have control over those aspects because then I know when I start and stop the Kippo service.
#ssh vps.site.com -l root -p 48022
#iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
#service apache2 stop
Stopping web server: apache2 … waiting .
$ cd /opt/kippo/
$ ./start.sh
Starting kippo in background…
Loading dblog engine: mysql
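If you would rather automate those post-reboot steps (I keep them manual, as explained above), a small boot script along these lines would do it. This is an illustrative sketch: the paths and ports match the layout used in this post, and start.sh is the start script shipped with Kippo.

#!/bin/sh
# Redirect inbound SSH traffic to the Kippo listener.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
# Keep the statistics web server down until it is needed.
service apache2 stop
# Start Kippo as the non-privileged user created earlier.
su kippo -c "cd /opt/kippo && ./start.sh"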
Based on my experience It shouldn’t take more than 48 hours to have someone breaking in the system. You can than watch and learn. In addition after a couple of hours you should start seeing brute force attempts.
If you want to read more about other honeypots, ENISA (European Network and Information Security Agency) just recently released a study about honeypots called “Proactive Detection of Security Incidents II: Honeypot”. It’s the result of a comprehensive and in-depth investigation about current honeypot technologies. With a focus on open-source solution, a total of 30 different standalone honeypots were tested and evaluated. It’s definitely a must read.
In a future post I will write about the findings of running this deception systems to lure attackers.
The use of Deception Tecniques : Honeypots and decoys, Fred Cohen
The Art of Computer Virus Research and Defense, Peter Szor, Symantec Press
Honeypots. Tracking Hackers, Lance Spitzner, Addison-Wesley
Designing Deception Operations for Computer Network Defense. Jim Yuill, Fred Feer, Dorothy Denning, Fall | <urn:uuid:a2fca498-8e7a-4b24-b6d4-6502008b8baf> | CC-MAIN-2022-40 | https://countuponsecurity.com/tag/network-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00280.warc.gz | en | 0.837009 | 3,114 | 2.65625 | 3 |
What is a packet?
A packet is a multi-byte unit of data transmitted at one time by a host on a packet-based network. The actual packet consists of the user data, called the "payload," and control information that the network uses to deliver the payload. "Packet" is often used interchangeably with "frame," although some people distinguish packets as messages at the network layer and above, and frames as messages that include the data link and sometimes even the physical layers.
NETSCOUT's Omnis Security platform utilizes packet-based analysis for advanced threat analytics and response.
What is a network packet?
A network packet is a unit of data transmitted across a packet-switched network. Such packets contain control information and user data. Control information, which typically includes source and destination network addresses, error detection codes, or sequencing data, enables payload delivery. This information is contained in packet headers and trailers. Packets are instrumental to the function of telecommunications and computer networking.
NETSCOUT solutions utilize packet data to enable rapid IT troubleshooting, predictive analysis, network topology & health diagnostics reporting.
What is packet loss?
Packet loss happens when a single packet, or multiple packets of data, traversing a computer network fail to arrive at their intended destination. Such a failure can be caused by errors in the transmission of the data over a wireless or wired network. It can also be the result of network congestion. Packet loss is expressed as the percentage of packets lost relative to the number of packets sent.
When packet loss is detected by the Transmission Control Protocol (TCP), retransmission of the packets is attempted to ensure messages are completed. In some cases, packet loss is intentionally introduced through the TCP connection in order to reduce throughput and alleviate network congestion.
Packet loss can adversely impact a user's quality-of-experience (QoE), particularly in real-time applications, such as online gaming and streaming media.
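A quick way to observe packet loss from an end host is a simple ping test, which reports the percentage of probes that never came back. The host name and figures below are purely illustrative.

$ ping -c 100 example.com
...
100 packets transmitted, 97 received, 3% packet loss, time 99123ms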
NETSCOUT's nGeniusONE solution helps minimize disruptions and optimize performance by monitoring and trending incoming traffic for internet circuits and VPN gateways. Metrics for traffic volume, dropped packets, and errors provide early warning of potential issues impacting users.
What is a packet tracer?
A Packet Tracer enables users to simulate computer network topologies. This cross-platform, visualization tool was created by Cisco Systems. Packet Tracer software simulates the configuration of Cisco routers and switches using a mocked-up command line interface.
Packet Tracer software is primarily used by students enrolled in Cisco Network Associate Academy. This tool is used for educational purposes in learning basic CCNA concepts.
What is packet switching?
Packet switching involves taking data and grouping it into packets, which are then transmitted over a telecommunications network. Packet switching is the principal way data communications are conducted across computer networks around the world. The packets themselves are composed of a header and a payload. The data contained in the header is used by network hardware to direct the packet to its destination. Once the packet arrives at its destination, the payload is extracted and processed by an operating system, higher-layer protocols, or application software.
What is a Packet Broker?
A packet broker is a hardware or software appliance that directs network traffic from multiple SPAN ports and manipulates the traffic to allow more efficient use of network tools and monitoring devices on the network.
Packet brokers are tasked with gathering traffic from numerous network links, then filtering and redirecting the individual packets to the most appropriate monitoring tool. By improving how this data is delivered, packet brokers make network monitoring and security tools more effective.
What is a packet analyzer, protocol analyzer or network analyzer?
A packet analyzer is a software program or computer hardware (packet capture appliance) that is used to catch and then log traffic traversing a computer network or part of that network. A packet analyzer may also be referred to as a network analyzer, packet sniffer, or protocol analyzer. (The terms network analyzer and protocol analyzer can also have other meanings.)
Packet capture occurs when the analyzer intercepts each packet as the data streams flow throughout the network. In some cases, the analyzer is tasked with decoding raw data found in the packet in order to reveal the values of certain fields found in the packet. The contents of the packet are analyzed per the applicable specifications.
When a packet analyzer is employed to capture traffic on a wireless network, it is referred to as a wireless analyzer.
What is a PCAP?
PCAP is an industry standard acronym for packet capture. In addition, the term is used to describe the capture output file format, typically labeled with the extension .pcap, used by common tools such as Wireshark or TCPDump. | <urn:uuid:5f2c01f9-40c5-41b7-92cd-eb537b5adced> | CC-MAIN-2022-40 | https://www.netscout.com/what-is/packet | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00280.warc.gz | en | 0.919843 | 1,000 | 3.4375 | 3 |
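As a small illustration, the sketch below reads a .pcap file and prints a one-line summary of each packet. It assumes the third-party scapy library is installed and that capture.pcap is a file you captured yourself (for example with tcpdump); both the library choice and the file name are assumptions, not something specified above:

```python
from scapy.all import rdpcap   # requires: pip install scapy

packets = rdpcap("capture.pcap")        # hypothetical capture file
print(f"{len(packets)} packets in capture")

for pkt in packets[:10]:                # summarize the first ten packets
    print(pkt.summary())
```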
On the World Wide Web, there are only victims and potential victims of cybercrime. As we speak, millions of computer viruses roam this huge network. Your computer hasn't been infected yet? Unless you have already chosen a smart antivirus, that's nothing to congratulate yourself on. In fact, without antivirus protection, not getting infected is mostly a matter of luck.
Then again, just like in many other fields, education for prevention can make a huge difference. So, wouldn’t you like to know more about these threats? Let us start with the basics of what is a computer virus and how antivirus software works to detect it…
First things first, what is a virus? What will the antivirus look for?
Viruses are applications like many others, designed to run on specific devices. Obviously, not every compiled application is a virus. The difference between useful and harmful compiled applications lies in their purpose.
Viruses are meant to cause damage. That damage can mean anything from stealing your information to deleting your data, crashing your computer, or demanding a ransom before you can regain access to the infected device.
The fact that a virus is a compiled app can be both a good and a bad thing:
- It is a bad thing because it can easily pass as a good app and trick users into downloading or accessing it.
- But it is also a good thing because, well… compiled applications are made of bits.
- And bits create footprints or signatures that make the app easier to recognize as a virus, once it was first reported as such.
In other words, a virus is an application that compiles into the same sequence of bits, every single time it runs, generating the same negative impact. This sequence reported by antivirus software to have a harmful impact on a device is seen as a virus signature.
Antivirus vendors will blacklist and store that sequence as reference for future comparisons. From that moment on, whenever their software will encounter it during any kind of scanning, it will recognize it as a virus and react accordingly.
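As a rough sketch of that comparison step (and only a sketch; real engines use far more sophisticated and better-optimized matching), a signature scan can be reduced to searching files for known byte sequences. The signature names and byte patterns below are made up for the example:

```python
import pathlib

# Hypothetical "signatures": byte sequences previously reported as malicious.
SIGNATURES = {
    "Example.Virus.A": bytes.fromhex("deadbeef4f6c64566972"),
    "Example.Virus.B": b"\x4d\x5a\x90\x00EVIL",
}

def scan_file(path: pathlib.Path):
    try:
        data = path.read_bytes()
    except OSError:
        return []                       # unreadable file: skip it in this sketch
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_directory(root: str):
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"{path}: matched {', '.join(hits)}")

scan_directory("./downloads")   # hypothetical folder to scan
```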
Reacting accordingly is a vague term. Antivirus labs rely on many different tools when it comes to disassembling viruses. Normally, they all move the suspicious file into quarantine, isolating it from the system and preventing it from running its malicious code. Depending on the antivirus program's settings, it can delete the file right away, or it can run and test it in a sandbox or in the cloud, from where it cannot affect your device.
So, is it really that easy for your antivirus to spot a virus?
Now, if it’s so easy to “spread the word” and let other devices know what signatures to block… how come we still often feel overwhelmed by these attacks?
Obviously, it is because while antivirus developers are working hard to collect useful information and share it with their entire pool of users, so are the virus developers. Anyone with the knowledge to program computer software can also create computer viruses. And they don’t even need to create a virus from scratch. It suffices to take an existing virus, alter its code with new, custom specifications, and compile and distribute it as a new virus.
The new virus will have a certain code sequence in common with the old virus. But the signature won’t perfectly match and, therefore, it will be reported as a different virus. This is the case when a particular virus, powerful enough to frighten the entire online community, ends up having several different names – it was altered by other virus developers and now has different versions running online.
How can your antivirus stay up to date with all these changes?
Well, the antivirus in itself is just a compiled app that knows how to scan other compiled apps and match what it finds with a database. That database contains virus signatures that, as already explained, change rapidly, resulting in new threats.
Basically, your antivirus doesn’t stay up to date with all these changes by itself. But its developer does. By collecting all the information it can get, the developer updates the antivirus with so-called definition files. It then notifies users that a new definition file is available. And by installing the update, the antivirus software gains a new version of its virus-signature database.
In other words, if you neglect to update the definition files, you leave your computer exposed. The antivirus will continue to scan executable files. But if it encounters a virus whose signature has been modified from the version currently stored in its database, it will not be able to recognize it.
For this reason, definition files should be allowed to download automatically. And the antivirus software will have the chance to access new, updated definition files once a day, sometimes even more often than that.
Is that everything that antivirus software does?
Needless to say, the antivirus will always have to match the file it analyzes against the signatures from its most recent definition file. By always, we mean every time you are launching an app or an executable file.
In those short (or long) fractions of a second when you’re waiting for the app to launch, the antivirus is doing all the hard work of comparing code sequences. Hence the complaints that using antivirus software can slow down your computer… And the continuous struggle of antivirus developers to create software with as little impact on a computer’s system resources as possible.
Aside from the code comparison, antivirus software can also look into a program’s behavior, doing a so-called heuristic evaluation. To sum up, the basic scanning process of any antivirus software will focus on three types of detection mechanisms:
- Specific detection
- Generic detection
- Heuristic detection
Specific detection tries to identify known malware by looking for an exact set of characteristics, whereas generic detection looks for variations of known malware code, trying to identify new viruses that have been developed from older versions.
Heuristic detection is different from behavioral detection
Heuristics go the extra mile. Instead of simply comparing pieces of code, they rely on rules and algorithms, evaluating commands that can indicate malicious intent in an app or program. Because of that, heuristic detection can spot new or unknown malware even when the antivirus lacks the latest virus definitions.
What kind of suspicious activities performed by viruses can be spotted with the help of heuristic detection? For instance, a virus trying to access all of the executable files on your computer and insert a copy of its own code into them. That way, it increases the spread of the infection (any executable file on the PC becomes a source of infection) and makes it even harder for the antivirus to completely remove it from the device.
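A heavily simplified sketch of the rule-and-score idea follows. The suspicious traits and their weights are invented for illustration; a real heuristic engine derives its rules from emulation and expert analysis rather than a handful of hard-coded entries:

```python
# Invented heuristic rules: observed trait -> weight
HEURISTIC_RULES = {
    "writes_to_other_executables": 5,   # e.g. the self-copying behaviour described above
    "disables_security_software": 4,
    "registers_autostart_entry": 2,
    "connects_to_unknown_host": 2,
    "deletes_shadow_copies": 5,
}
SUSPICION_THRESHOLD = 6

def heuristic_verdict(observed_traits) -> str:
    score = sum(HEURISTIC_RULES.get(trait, 0) for trait in observed_traits)
    return "suspicious" if score >= SUSPICION_THRESHOLD else "probably clean"

# Score 5 + 2 = 7, above the threshold -> "suspicious"
print(heuristic_verdict({"writes_to_other_executables", "registers_autostart_entry"}))
```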
Heuristic-based detection usually pairs with signature-based detection and tends to make an impact especially on the prevention side. The behavioral based detection, on the other hand, will look at what a program or an app does while actually running on the PC. This is hugely different from looking at what that program does in a virtual environment.
The problem with heuristics, however, is that it leaves so much room for mistakes. Sometimes, it can prove too much of an aggressive measure. And so, it can lead to false positives, where it flags a harmless program as an unknown type of virus.
Does it come down to signatures, behaviors, and executable files?
It would have been nice, but no. The truth is that there are many other types of online threats, and not all of them arrive through an executable file that you personally launch. Browsers and plugins, the operating system itself, your email app and more… they can all easily turn into access points for viruses to sneak in.
So, antivirus software is either part of a security suite with several other layers of protection included; or it comes with extra features in itself, doing a lot more than the actual scanning of every file you open.
Antivirus software can fight malware with different detection techniques. We have already seen the signature and the heuristic-based detection mechanisms. And we have mentioned the behavioral-based detection.
As suggested, this has more to do with an antivirus’ intrusion detection mechanisms. It detects the potentially harmful characteristics of malware while it actually executes on the device, meaning while it runs its malware actions.
On top of everything discussed above, there are also sandbox detection and a series of data-mining techniques.
Sandboxes are virtual environments
Specifically built for testing malicious files outside of the operating system, sandboxes are the next level after heuristic detection.
Heuristic detection looks for features or actions and behaviors that are normally associated with known threats.
Sandboxing is all about letting the malicious app run in that dedicated environment and record its behavior.
Sandboxing takes more time, but it is also more accurate, and the recorded behavior is often inspected afterward by a malware analyst. With its help, the analysis will determine not only whether the suspicious file really is malicious, but also exactly what it does if it is.
In a nutshell, sandboxing opens the file in a safe environment, lets it run, and sees exactly what it would have done to your computer if it had the chance to run in there.
Data mining and the first steps towards machine learning
Just like the name suggests, data mining is a sophisticated process of selecting a huge amount of data and, equally important, sifting through it in search of pertinent, specific information.
Knowing how to interpret the information extracted from those large sets of data is crucial, which is why several different approaches to data analysis are involved. Machine learning techniques represent one of the most recent and complex of these approaches, making use of sophisticated algorithms.
In fact, data mining involves applying an overwhelming suite of statistical and machine learning algorithms, on a specific set of features extracted from both malicious and clean programs. More about that, a bit later in this article.
The main antivirus scan types and detection mechanisms
We’ve seen what the antivirus is generally looking for. But it would probably help to know, in advance, what kind of options you have, as an antivirus software user.
Scanning is a process that can be executed either on demand or automatically. Some users disable automatic scanning, unhappy that allowing the antivirus to run its scans in the background slows down their computers. Others let the antivirus work as it sees fit, and that's probably a very good idea.
Long story short, there’s on-access scanning and full system scanning. Depending on the features that the antivirus software comes with, there are also options to create custom scans, to scan only certain partitions, certain folders, or even certain files. You can do that periodically or, as suggested, on demand.
Then again, scanning is just one of the many security layers that your antivirus relies on. More specifically, it is a detection mechanism, one of the four main detection mechanisms that antivirus software normally provides:
- Scanning – implies simply searching for specific strings in the analyzed files, strings that are pre-defined virus signatures; scanning may report results based either on exact matching or variants of a virus-signature.
- Activity monitoring – as one of the latest trends in virus research, this one involves monitoring a file execution and detecting any trace of malicious behavior.
- Integrity checking – this one starts with creating a cryptographic checksum for every single file stored on the computer and returning to it periodically, to check for any variation in that checksum in the meantime; these variations can help to detect changes caused by viruses (see the sketch after this list).
- Data mining – as mentioned, it is a complex process that works with both statistical and machine learning algorithms.
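The integrity-checking mechanism in particular is easy to sketch with standard-library tools. The snippet below records a SHA-256 checksum for every file under a folder and later reports any file whose checksum has changed. The folder and baseline file names are placeholders, and a real product would also have to protect the stored baseline itself from tampering:

```python
import hashlib, json, pathlib

BASELINE = pathlib.Path("baseline.json")   # where the known-good checksums are kept

def checksum(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(root: str):
    """Record the current checksum of every file under root."""
    digests = {str(p): checksum(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}
    BASELINE.write_text(json.dumps(digests, indent=2))

def verify(root: str):
    """Report files whose checksum differs from the recorded baseline."""
    baseline = json.loads(BASELINE.read_text())
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file() and baseline.get(str(p)) not in (None, checksum(p)):
            print(f"CHANGED: {p}")

snapshot("./protected")   # run once to record the known-good state
verify("./protected")     # run later to detect modifications
```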
On-access scanning
With this scan type, the antivirus runs in the background, checking every single file you open. It compares the file's code with the database of signatures, to see if it matches those of known viruses, worms or other types of malware.
This type of on-access scanning isn't limited to executable files. It can also look into archive files that may hide a compressed virus; into office documents that may hide malicious macros; or into any type of file that you download, which will be scanned automatically, without the antivirus waiting for you to open that file.
On-access scanning is perhaps the most important type of antivirus scanning because it can protect a PC before it gets infected. Most viruses will enter the device and wait for you to launch them before they start acting.
Once you release it, it becomes significantly more difficult to remove it. And even if you do, or your antivirus says it has removed it, there is no certainty that you have completely removed it. Therefore, catching a virus before you get to launch the app that contains the malicious code is very important.
While one may have the option to disable on-access scanning with the purpose of reducing the impact that the antivirus has on the system’s resources, it is certainly not a good idea to make use of that option.
Full system scanning
Full-system scans are available with most antivirus software, usually either as an option you schedule yourself or as an automatic action. When automatic, the antivirus software will schedule the scan for, say, once a week, at an hour when you normally don't need to use your computer (it will notify you about that).
But as long as the on-access scanning is active, there are only a few instances when one should spend time with scanning the entire disk. Such instances include but aren’t necessarily limited to the following situations:
- When you have just installed new antivirus software and you want to run a full scan to see if there aren’t dormant viruses that the previous antivirus missed;
- When you know for a fact that the device has been infected, you don't want to reinstall the operating system, and you choose instead to transfer the hard drive to another PC and have it scanned there with a full system scan;
- When you have disabled the automatic full-system scans that the antivirus software will schedule periodically.
The future of how antivirus software works
With the simple mentioning of the machine learning algorithms, we have entered the fascinating field of artificial intelligence antivirus. Pretty much everything we have discussed in this article so far targeted the way that traditional antivirus software works.
As stated, traditional antivirus software relies on data signatures and pattern analysis. It's all a never-ending attempt to compare everything that happens on your computer with previous instances where malicious activity was reported.
In other words, antivirus software knows how viruses look and what they do on a computer. And whenever it detects an activity that has to do with those virus-specific features and behaviors, it jumps in and blocks it.
The traditional malware recognition modules decide if an app is a threat after collecting and analyzing specific data about it. Data can be collected:
- In the pre-execution phase – a phase where it just looks at the app and gathers details such as file format descriptions, code descriptions, statistics of binary data, text strings and other data extracted through code emulation;
- Or in the post-execution phase – a phase where it analyses what happens after the app was active inside the system, after seeing its behavior and consequences firsthand.
This would work fine for the less challenging malware apps, but we all know that we are facing more and more advanced malware versions and malware attacks. To respond to it all accordingly, artificial intelligence antivirus software is being developed. And through it all, the anti-malware companies have turned to machine learning, increasing their malware detection rates and malware classification abilities.
The differences between Machine Learning and Artificial Intelligence
Machine learning (ML) and Artificial Intelligence (AI) are two terms often used interchangeably, even though, at their essence, they are different. To put it simply, machine learning is just a means to the goal of achieving artificial intelligence. Artificial intelligence describes programs that can execute tasks with characteristics of human intelligence, whereas machine learning describes a set of methods that give an antivirus the ability to learn without being explicitly programmed.
Machine learning algorithms can look at large sets of data, and then discover and formalize the principles underlying that data. In other words, the algorithm should be able to "reason" about the properties of malicious samples even if those samples were previously unseen.
Applied specifically to malware detection, machine learning treats any new file that you try to access on your computer as a previously unseen sample. That file may turn out to be malicious or benign, but the algorithm should be able to decide which, based on a model deduced from the principles underlying the data it was trained on.
Most importantly, machine learning is not just a single method but rather a range of approaches that will lead to a solution.
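To make that less abstract, here is a toy sketch of the supervised-learning workflow using the scikit-learn library (an assumed choice for illustration; the article names no specific toolkit). The feature vectors are random stand-ins for real static features such as imported APIs, entropy or section sizes, so the resulting model has no real detection power; it only demonstrates the train-then-classify loop:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 1,000 samples x 20 static features, each with a clean/malicious label.
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)          # 0 = clean, 1 = malicious (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# A "previously unseen sample": the features extracted from a new file.
new_file_features = rng.normal(size=(1, 20))
print("verdict:", "malicious" if model.predict(new_file_features)[0] == 1 else "clean")
```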
Given the complexity of this scanning method, artificial intelligence antivirus raises the stakes for those who develop malware. The more complex the scanning and identification methods become, the harder they will have to work to create malware that is difficult to detect.
It is, after all, a continuous race and antivirus software based on artificial intelligence simply keeps us in the race. | <urn:uuid:4e96f234-c57b-4843-a502-0501a25c4d8a> | CC-MAIN-2022-40 | https://antivirusjar.com/how-antivirus-software-works/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00280.warc.gz | en | 0.942698 | 3,626 | 3.46875 | 3 |
A ground-breaking scientific expedition to research the structure of the sea ice in the Arctic Ocean put Getac’s rugged computer technology at the heart of its operation – in some of the most extreme environments.
As part of an on-going project looking at the impact of ice-algae on the sea ice environment around Greenland, Associate Professor Lars Chresten Lund-Hansen and Associate Professor Brian Sorrell from the Department of Bioscience at Aarhus University in Denmark spent six weeks on the Arctic Ocean late last year.
To understand how the algae lives in such extreme conditions, the scientists needed to study the microscopic plants that sit inside the bottom of sea ice, having adapted to an extremely dark environment in temperatures below freezing. This saw the research pair helicoptered onto the ice most days to drill out ice cores, collect samples of the algae for analysis and record associated data – which was collected on the semi-rugged S400.
“We relied on the Getac computer to record all of our information, no matter the weather. We knew the conditions were not always good up there with dense fog, wind and snow, so we wanted something we could trust,” says Lund-Hansen.
“Even though the temperatures out there were below freezing, Getac’s notebook worked really well for us,” he adds. “We have tried other computers but they simply aren’t up to the job. Our colleagues recommended the S400 to us because of its ability to function in extreme environments.
“The notebook’s long battery life enabled us to stay out on the ice for long periods too, which was invaluable and helped us keep the amount of back-up equipment we needed each day to a minimum,” continues Lund-Hansen. “We had purchased replacement batteries, but we didn’t need them and even though the wind chill factor meant temperatures were pretty extreme at times the S400 did not let us down.”
Getac’s S400 has a 14” multi-touch screen with an anti-reflective, anti-glare display as standard and is available with optional 800 nits QuadraClear™ sunlight readable display.
“The readability of the screen was a real asset; light hitting ice flows can produce a lot of glare but the screen’s anti-reflectivity properties meant it was easy to view, even in strong sunlight,” says Lund-Hansen. “We were really impressed by the S400 and will definitely be taking it with us on our next expedition to the Arctic Ocean.”
The S400 has Intel® Core™ i5 vPro™ technology for a fast, powerful performance in temperatures as low as -20°C up to 60°C. IP5X and MIL-STD 810G certification makes the semi-rugged S400 perfect for demanding work environments.
Getac UK President, Peter Molyneux, says: “The S400 is designed for those who have a tough job to do in extreme circumstances – exactly what the Aarhus University researchers experienced. All Getac products are designed to perform in extreme conditions to allow our customers to complete their work without interruption.”