Many of the programs we install on our computers run automatically when Windows starts.
While this is not always necessary, it usually does little harm.
But this behavior is also copied by malware writers to pass security checks. Their malicious programs try to mimic legitimate programs that you might expect to see among your Windows startup programs.
Why hide when you can pretend to be something useful?
Copying the art of camouflage from the animal world, malware writers have been trying several methods over the years to hide their registry entries in the open. Sometimes by using (pseudo-)random names and sometimes by using locations that are relatively unknown to the general public. But also by pretending to be, or belong to, legitimate programs.
Arguably, there are some 57 ways to make a file load automatically.
The majority of them are found in the registry. Not all of them apply when Windows loads; some are triggered by other events.
Running Internet Explorer for example loads the Browser Helper Objects.
Some of the most well-known and most used startup locations are the Run keys:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run and HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run, or, on 64-bit systems, HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run
Together with entries from the Windows startup folder and other possible registry entries these are listed in the Startup database by research engineer Paul Collins aka Pacman.
This database gives you information about the Name of the startup key, the name of the file that gets started, whether the startup is needed, not necessary or even downright malicious. It also has a column where you can find extra information about the files. This can include a link to the site of the manufacturer or a link to a description of the malware.
As you can tell from the screenshot (or if you do a search on the site for yourself), there are a few filenames that are very popular for disguising malware. These are typically entries that are very popular (like skype.exe) or entries that look very much like a legitimate Windows filename (e.g., svchost.exe).
If you check your own registry or make a log file with the startup information, a file like skype.exe may jump out at you if you have never installed the program. But if you showed that log to someone else, they might not know if you use the program. That is why experienced and trained log readers pay attention to the folder the file is found in.
Default for the legitimate skype.exe is %ProgramFiles%\Skype\Phone, where %ProgramFiles% is an environment variable that points to the Program Files directory, usually C:\Program Files or C:\Program Files (x86).
Any skype.exe located in another folder should be looked at more closely. Another important point is the name of the startup. For the legitimate skype.exe (and many fake ones) the name is “Skype”, but there are others, like the malware shown in the example that uses “Skype Update”. That may have been an attempt to make it look less conspicuous if the real Skype is present as well.
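If you want to do this kind of check yourself, the short sketch below (an illustration, not a Malwarebytes tool) uses Python's standard winreg module to list what is registered under the common Run keys, so you can inspect each startup name and the folder its file lives in:

```python
import winreg

# The usual Run keys; the Wow6432Node key only exists on 64-bit Windows.
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        key = winreg.OpenKey(hive, path)
    except OSError:
        continue  # the key may not exist on this system
    print(path)
    for i in range(winreg.QueryInfoKey(key)[1]):  # [1] = number of values in the key
        name, command, _type = winreg.EnumValue(key, i)
        print(f"  {name} -> {command}")
    winreg.CloseKey(key)
```

A startup name such as “Skype Update” pointing at an executable outside %ProgramFiles%\Skype\Phone is exactly the kind of entry worth investigating.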
If you need to know more about Windows startup programs and especially how to identify them then we recommend you visit Pacman's Portal - which is powered by Malwarebytes.
Thank you, Paul Collins, for your input.
Research finds that earthquakes can systematically trigger other ones on the opposite side of Earth
- Published: Tuesday, 07 August 2018 11:17
New research shows that a big earthquake can not only cause other quakes, but that these can be large and can occur on the opposite side of the Earth. The findings, published in Scientific Reports, are an important step toward improved short-term earthquake forecasting and risk assessment.
Scientists at Oregon State University looked at 44 years of seismic data and found clear evidence that earthquakes of magnitude 6.5 or larger trigger other quakes of magnitude 5.0 or larger.
It had been thought that aftershocks - smaller magnitude quakes that occur in the same region as the initial quake as the surrounding crust adjusts after the fault perturbation - were the only seismic activity an earthquake could lead to.
But the OSU analysis of seismic data from 1973 through to 2016 - an analysis that excluded data from aftershock zones - provided the first discernible evidence that in the three days following one large quake, other earthquakes were more likely to occur.
Each test case in the study represented a single three-day window ‘injected’ with a large-magnitude (6.5 or greater) earthquake suspected of inducing other quakes, and accompanying each case was a control group of 5,355 three-day periods that didn't have the quake injection.
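In code, the comparison behind those test cases might look something like the sketch below. This is purely illustrative and not the OSU team's analysis; it assumes an event catalog of (timestamp, magnitude) pairs with aftershock-zone events already excluded.

```python
from datetime import timedelta

WINDOW = timedelta(days=3)

def events_in_window(catalog, start, min_mag=5.0):
    # Count magnitude >= 5.0 events inside the three-day window starting at `start`
    return sum(1 for when, mag in catalog if start < when <= start + WINDOW and mag >= min_mag)

def mean_rate(catalog, window_starts, min_mag=5.0):
    counts = [events_in_window(catalog, s, min_mag) for s in window_starts]
    return sum(counts) / len(counts)

# triggered_starts: times of magnitude >= 6.5 quakes ("injected" windows)
# control_starts: the 5,355 three-day windows without such a quake
# A detectable triggering effect means mean_rate(catalog, triggered_starts)
# exceeds mean_rate(catalog, control_starts) beyond background variability.
```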
"The test cases showed a clearly detectable increase over background rates," said the study's corresponding author, Robert O'Malley, a researcher in the OSU College of Agricultural Sciences. "Earthquakes are part of a cycle of tectonic stress buildup and release. As fault zones near the end of this seismic cycle, tipping points may be reached and triggering can occur."
The higher the magnitude, the more likely a quake is to trigger another quake. Higher-magnitude quakes, which have been happening with more frequency in recent years, also seem to be triggered more often than lower-magnitude ones.
An earthquake is most likely to induce another quake within 30 degrees of the original quake's antipode - the point directly opposite it on the other side of the globe.
"The understanding of the mechanics of how one earthquake could initiate another while being widely separated in distance and time is still largely speculative," O'Malley said. "But irrespective of the specific mechanics involved, evidence shows that triggering does take place, followed by a period of quiescence and recharge."
Collaborating with O'Malley were Michael Behrenfeld of the College of Agricultural Sciences, Debashis Mondal of the College of Science and Chris Goldfinger of the College of Earth, Ocean and Atmospheric Sciences.
Implementation of the “Internet of Things” in the modern world is gaining pace at breakneck speed. Society is moving away from standalone devices and entering the realm of inter-connectivity. With uses in different facets of life, such as personal gadgets, retail, electricity distribution and financial services, IoT is making its mark.
One such application field of IoT is in Smart Homes, or more specifically in the Heating, Ventilation, and Air Conditioning industry (HVAC). According to a report by Zion Market Research, the global smart HVAC control market is expected to reach almost USD 28.3 billion by 2025 as compared to USD 8.3 billion in 2018. Amalgamation of the HVAC industry and IoT provides for vastly superior customer-centric services, enabling remote appliance control as a first step. Further evolution would result in predictive thermostatic controls, based upon usage history.
Let us go over some of the ways IoT can be implemented in HVAC utilizing smart devices.
What Makes HVAC Devices Smart?
Let us dive into the “smart” part. Merely having a device be connected to WiFi does not make it smart. Having pre-defined scheduling controls or “if statements” doesn’t make it smart either. Then what does?
The answer lies in the difference between communication and decision making. Being connected to the internet and establishing two-way communication in the form of data logs, usage history, and crash reports is not considered smart. It’s what happens after the communication that determines if a system is smart or not.
For example, a thermostat relays back to the cloud server the usage patterns of the past 7 days. Simply recording this data and displaying it wouldn’t qualify the requirement for smart. What needs to happen is manipulation of this data using machine learning algorithms–only then can something be called smart.
Enter the World of Algorithms
Algorithms, in the context of this article, are pieces of computer code which can learn, adapt and improvise to certain situations over a time period. This means that much like a human brain, an algorithm is capable of assessing prevalent conditions and implementing appropriate corrective actions without human intervention.
As an example, let us look at the implementation of geofencing, a popular feature available with a variety of smart thermostats/controllers. Through this feature, location-based controls can be implemented. This can include the automated powering on of an air conditioner based on the proximity of a user to their home.
The Honeywell D6 smart controller takes this concept a step further. The controller can, over time, sense how long the AC takes to achieve the desired temperature, and calibrate its start time according to the user’s location. This way, the desired temperature is achieved just when the user reaches their home, and not before.
This is a great example of machine learning algorithms, which can learn the behavior of the AC, and manage startup times so as to reduce energy wastage.
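As a rough illustration of the idea (not Honeywell's implementation), the logic can be reduced to two estimates: how long the AC has historically needed to reach the setpoint, and how long the user needs to get home. The function names and numbers below are assumptions made for the sake of the example.

```python
def avg_minutes_to_setpoint(past_runs):
    """past_runs: minutes the AC took to reach the target temperature on previous runs."""
    return sum(past_runs) / len(past_runs)

def minutes_until_arrival(distance_km, avg_speed_kmh=40):
    return 60 * distance_km / avg_speed_kmh

def should_start_ac(past_runs, distance_km):
    # Start only once the estimated trip home is no longer than the learned
    # cool-down time, so the setpoint is reached on arrival rather than before.
    return minutes_until_arrival(distance_km) <= avg_minutes_to_setpoint(past_runs)

# e.g. with past runs averaging ~22 minutes, the AC starts once the user is
# roughly 15 km from home (about a 22-minute drive at 40 km/h).
```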
The Smart HVAC IoT Ecosystem
To gain a better understanding of the different cogs in the system, it is useful to get familiar with the process workflow within the HVAC IoT ecosystem. The interplay between each ‘node’ of the system is critical to the proper implementation of IoT. Great consideration has to be undertaken in order to reduce the latency in the system, thus ensuring a seamless process for the end-user. A robust backend promises to deliver a smooth front end experience. This is not only specific to IoT in HVAC, but to all other implementations of the concept.
Let us take a look at the process flow for an HVAC IoT system. Once a command is sent from one of the mobile or web apps to the cloud, it then gets relayed to the device. Popular cloud services currently being used by companies include Amazon Web Services (AWS) and Microsoft Azure. The device then emits an appropriate signal through infrared to the air conditioner (in the case of a ductless remote-controlled AC), or through hard wires to the boiler/furnace (in the case of a ducted system thermostat).
Additionally, voice-activated commands can also be sent over through home assistant devices such as Google Home or Amazon Alexa. Routines can be incorporated and multiple actions can be taken in conjunction with each other.
Another important function of the device is to work as a sensor, relaying back information from the room to the cloud. This information is then analyzed by the machine learning algorithms in conjunction with external factors and acted upon. For example, the algorithm can identify the current weather season, and calibrate its temperature setting accordingly.
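A hypothetical example of that kind of cloud-side decision, with made-up thresholds, could be as simple as inferring the season from relayed outdoor readings and nudging the setpoint accordingly:

```python
def infer_season(outdoor_temps_c):
    avg = sum(outdoor_temps_c) / len(outdoor_temps_c)
    return "summer" if avg >= 24 else "winter" if avg <= 10 else "mild"

def calibrated_setpoint(outdoor_temps_c, user_preference_c=22):
    # Relax the setpoint slightly at the seasonal extremes to save energy.
    offset = {"summer": 2, "winter": -2, "mild": 0}[infer_season(outdoor_temps_c)]
    return user_preference_c + offset

print(calibrated_setpoint([28, 31, 29]))  # 24 -- less aggressive cooling in summer
```

In a real system, the thresholds and offsets would be learned from usage history rather than hard-coded.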
Thermostats vs Controllers
You might have noticed the usage of both the terms above and think of them as interchangeable. Even though the end effect is nominally the same, this is still an important distinction to make. Depending on whether a person has a ducted air conditioning system or a ductless one, the mode of control for a smart system can vary. Hardware implementations of the actual device will need to be different so that compatibility is ensured. In most cases, the backend development is the same, and indeed interchangeable with a few tweaks.
For a ducted system, smart thermostats are used which provide complete control over the boiler/furnace system. Some current market products are the Tado Smart Thermostat, and of course the Google Nest Smart Thermostat.
In the case of ductless air conditioners, such as wall-mounted mini splits or window units, a smart controller is used. The functionality is pretty much the same, with the exception of a few extra modes available to the user, depending on their specific AC. These are standalone devices, which can usually be placed anywhere in the line of sight of the AC.
Are we Headed Towards an IoT Dominated World?
The answer to this question is an invariable yes. More and more companies and startups are diving into this venture, not only in HVAC but also in other industries. According to Gartner Inc., 8.1 billion connected devices were in use worldwide in 2017, an increase of 31% from the preceding year.
After the first stage of communication and connectivity, the wide-scale deployment of machine learning algorithms logically follows. Incorporation of increased amounts of variables for enhanced user comfort will then be possible using those algorithms. As a result, energy savings and user comfort will increase exponentially, leading to a promising future for the HVAC industry.
INSPIRA is a pioneering project originated at the University of Deusto in the Basque Country in Spain. The aim of the project is to promote technological vocations among girls. In November 2017, CIONET Spain’s group “Women and Technology” received training from Inspira in Madrid to start implementing this project in Spain’s capital.
After completing a specific two-day training given by the Deusto University, CIONET Spain’s Women and Technology Group, led by Idoia Maguregui, President of CIONET Spain Advisory Board, now has qualified mentors ready to develop the Inspira Project in Madrid.
The mentors, all professionals in the world of investigation, science and technology, will visit schools to give workshops that will raise awareness and foster orientation towards STEAM careers.
The project is aimed at students between 11 and 13 years old in the 5th and 6th grade in Spain. It is proven that at these ages girls still have enough confidence in themselves and are interested in the STEAM areas, something that tends to decline during their mid to late teenage years.
During six sessions on school days, the students will work closely with the mentors on educational games about STEAM professions and stereotypes, and they will learn about women in science and technology throughout history, as well as some current role models.
Watercolor “More fantastic women” by Jorge Bayo Lon
By Chris Rosa, IT Guru
A lot of smart people are fooled every day by something that looks genuine, but isn’t! Phishing and malware attacks are designed to take something important from you and profit from the theft. This type of attack is directed at individuals and businesses alike. Keep reading to find out what phishing is and how to protect yourself and your business from phishing attempts.
Last year, Google conducted a study examining how passwords are stolen and user accounts become hijacked by hackers. The researchers found that phishing scams posed the greatest threat to users:
By ranking the relative risk to users, we found that phishing posed the greatest threat, followed by keyloggers, and finally third-party breaches.
Interestingly, third-party breaches often originate with phishing emails as well. The third party or contractor is infected and information is gathered about that company’s clients. That information is then used to reproduce the same scam on the newly gathered emails.
Phishing affects personal email accounts like Gmail and Yahoo email accounts and business accounts alike. When these spoofed emails succeed through business emails, the entire company can become compromised, including employee and customer information. A recent example includes the August Legacy Health breach in which 38,000 patient records were compromised.
What is Phishing?
Protecting yourself from hackers should be simple: Don’t open email from strangers. Be suspicious of any links or attachments from unknown sources.
Here’s the problem: the hacker getting ready to send you a message wants you to think everything is okay and normal. They want you to trust them. So they disguise themselves in a way that gets you to do just that.
The ruse normally starts with an email that looks like it comes from someone you know or from a familiar company. It may even be sent directly from a friend’s email address that has been hacked or even spoofed to appear that it’s from your own address. Inside the message are links that look normal, but are actually a direct line to disaster.
Some spoofs are easy to identify. One example is an email from someone you know that contains a link in the body of the email and short, impersonal text such as, “I thought you might be interested in this link.” Don’t click it! Send a message to your friend asking if they sent this email or simply delete it.
Other scams are far more difficult to recognize. A good example that has become common is the App Store subscription phishing scam. These emails look like legit messages from Apple letting you know that your high priced (sometimes comically so) subscription is confirmed, and offering a cancel button. If you click that, you’ll be taken to a look-a-like Apple site asking for credentials. Don’t be fooled!
When gathering your personal information is the goal, there is usually an offer for something of value in return for clicking a link in your email. Clicking this link will then lead to a page disguised as a familiar website, such as Paypal, Google or even your bank. From here, you are tricked into providing valuable information:
- Credit card information
- Bank account information
- Login and password information
- Anything else that might be valuable
Sometimes, the phishing email sent is intended to catch you off guard and frighten you into clicking a link. A common example is an email stating that a child predator has moved into your neighborhood. The end purpose, however, is the same.
You may not even realize this type of attack has happened until hours, days or even weeks later. By then, it may be too late to take any effective counter-action.
Like the phishing scams above, malware (short for malicious software) attacks are often sent to your email address disguised as a message from a trusted company, coworker or friend. They sometimes include links in the body of the email and sometimes include attachments for you to open or download.
If you click the link or open the attachment, this action will download malware that can open up your computer and its contents to a stranger. The thief may work silently in the background, stealing files and data without your knowledge. Or the attachment you open may download a remote access tool (RAT) that allows the perpetrator complete control of your computer. Sometimes, infiltrators will plant ransomware, a bug that will encrypt the entire drive. The criminals then demand a ransom payment to let you have access to your own files!
Often, this type of attack can spread from one computer to other computers that are attached to the same network. From there, hackers can collect information stored on a coworker’s computer or even the company server.
It Gets Worse
In a study by Wombat Security, 76 percent of organizations interviewed admitted that they had experienced phishing attacks in 2017. When successful, phishing attacks result in malware infections, account compromises and data loss. According to Trend Micro, losses from attacks on businesses were expected to exceed $9 billion in 2018.
Other information that could be taken includes your address book, which the thieves will use to email your friends and professional contacts (posing as you) and then infiltrate their systems in order to perpetuate the cycle of thievery.
It can be an almost never-ending scenario, unless you protect yourself first with proper security software and some common-sense defensive measures.
Protect Yourself and Your Organization
At the very least, install commercial anti-virus and anti-spam software and keep it current. Have these types of software installed on every computer you or your business uses. You should also use robust Internet security, starting with a strong firewall.
The AV-TEST Institute, an independent testing organization for IT security, does a great job breaking down its findings according to need (business, home user, Mac, Windows, mobile devices, etc.).
Always update and always patch. As software vendors recognize vulnerabilities in their products, updates and patches are developed and distributed. This is probably the easiest security measure you can take: simply turn on automatic updates.
But even the best security software will not protect you from the repercussions of all phishing attacks. The most important thing you can do is educate yourself, your family and your employees on how to recognize and avoid phishing attacks. Here are a few tips:
- Fraudulent links: Hover your mouse over links placed in emails that you receive. This will show you the true address where a link will send you. If the actual hyperlinked address is not the same as the address that is displayed in the email, it is probably fraudulent or malicious. Don’t click the link. Sites like virustotal.com can help you check suspicious URL addresses without subjecting yourself to a hacker.
- Too good to be true: You’ve probably heard of the infamous Nigerian Prince Scam, in which the sender of a message poses as a Nigerian prince who needs your help moving money in return for a large portion of that money. If a stranger is offering money in return for minimal action on your part, or a large chunk of money in return for a small investment, chances are it’s a scam.
- From the government: Government agencies rarely, if ever, initiate contact through electronic methods (email, text or phone). The IRS, for example, ONLY initiates contact through U.S. Postal Service snail mail. If you receive an email from a government agency, it’s likely a scam. Do not reply to the email, click any links or open any attachments. Instead, call the agency and ask about the message. Don’t use any number listed in the email; instead, look up a verified phone number.
- The message asks for account information: If your bank or credit card company is contacting you, they already have your account information and would not request for you to send it to them. This is certainly a scam. If you’re not sure, call the number on the back of your actual bank card or credit card and ask them about the email.
Here’s an example of a suspicious email inquiry sent to a DriveSavers email account.
The email contained an Internet link which looked like the regular Google Apps interface. However, hovering the cursor over the suspect link revealed an address for a different location that turned out to be a known malicious site.
Upon investigation of this URL with a few services like virustotal.com, we found that it had a hit as a malicious site.
We investigated further by opening the URL on a secured system. The link directed to a site that was built to look just like the Google Docs login page. One tell-tale sign this was not actually Google Docs, however, was that the URL for the page wasn’t google.com, and none of the links on the page led to a Google-owned site.
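The “hover over the link” check can also be done programmatically. The sketch below is a simplified illustration (not a DriveSavers tool): it parses the HTML body of an email and flags anchors whose visible text names a domain that does not appear in the real target.

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href, self._text, self.suspicious = None, "", []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip()
            shown_domain = shown.replace("https://", "").replace("http://", "").split("/")[0]
            # Visible text claims one domain, the href points somewhere else: flag it.
            if "." in shown_domain and shown_domain not in self._href:
                self.suspicious.append((shown, self._href))
            self._href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example.net/login">www.paypal.com/secure</a>')
print(checker.suspicious)  # [('www.paypal.com/secure', 'http://evil.example.net/login')]
```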
When In Doubt
If in doubt about an email you receive, contact the person listed as the sender to verify with them directly before opening links within the email. If the “sender” did not send the email, definitely do not open any links in the message and delete the email right away.
Another avenue is to simply delete the suspected message and empty the trash. If the sender is bonafide and needs to communicate with you, then they will get back in touch. You may lose some time, but hopefully not your important data!
If you might have been tricked by a phishing email, you can file a report with the Federal Trade Commission at www.ftc.gov/complaint.
CWDM Channel Plan
CWDM channel plan – the full list of channels for CWDM systems, their color coding, and how they are used in pairs for bidirectional CWDM systems are explained in this article.
CWDM (Coarse Wavelength Division Multiplexing) is one of the xWDM technologies that allow greater data throughput. As defined by the ITU-T G.694.2 specification, it consists of (but is not restricted to) 18 CWDM channels with nominal central wavelengths (λ) evenly spaced 20 nm apart, from 1271 nm to 1611 nm, as shown in the Double Fiber CWDM Channel plan table:
|Nominal central wavelengths (nm)||Latch color by wavelength||BIDI Pair||Side|
To use CWDM-based modules, a MUX/DEMUX unit is required at each end – additional information can be found at https://edgeoptic.com/product/bidi-cwdm/ and https://edgeoptic.com/passive-xwdm-wireless-fronthaul-basics/.
The number of channels available for data transmission depends on the MUX/DEMUX unit used, as there are two main types: Single Fiber (BiDi) CWDM MUX and Double Fiber CWDM MUX (the channel count and wavelengths of each CWDM MUX/DEMUX unit can be customized).
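Because the grid is evenly spaced, the full set of nominal central wavelengths is easy to generate. The short sketch below (an illustration, not vendor code) reproduces the 18-channel plan described above:

```python
# ITU-T G.694.2 CWDM grid: central wavelengths every 20 nm from 1271 nm to 1611 nm
cwdm_channels = list(range(1271, 1611 + 1, 20))
assert len(cwdm_channels) == 18

for number, wavelength in enumerate(cwdm_channels, start=1):
    print(f"Channel {number:2d}: {wavelength} nm")
```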
Single Fiber (BiDi) CWDM MUX
This provides the ability to organize up to 9 parallel and independent data streams over a single strand of single-mode optical fiber, and the channels are organized in pairs. The first module of a pair is applied at one end of the link and the second module of the same pair at the other end. For example, an SCMD-9A 1310 nm module will transmit to an SCMD-9B 1331 nm module and vice versa. Below you will find the Single Fiber CWDM Channel plan table:
Double Fiber CWDM MUX
This doubles the bandwidth, as it consists of 18 channels sending and receiving information over two SMF (Single-Mode Fiber) lines (one for sending, one for receiving). At both ends, the same CWDM channel module transmits data to the other side and receives data from the other side.
Overcoming the limitations inherent in drone technology
By Shaun Passley, Founder, Zenadrone
Drones work. From search and rescue operations in Ukraine, to COVID-19 vaccine drops in India, to tracking bison in protected landscapes across Colorado, drones have proven in recent years to be an indispensable tool for a wide range of organizations. Still, like all technologies, there is room for improvement.
It’s easy to buy the hype that says the sky’s the limit regarding the potential applications for drones, but it doesn’t take long to realize their limitations. For drones to achieve all the things that the industry’s manufacturers are promising, they will need to evolve into a tool that is more efficient and longer-lasting.
Identifying the barriers to achieving longer-lasting drones
Battery power is the lifeblood for drones. Thus, producing longer-lasting drones means increasing the amount of battery power they can store. Barring any unforeseen breakthrough in battery technology, this remains an issue that is best addressed by simply adding more batteries. Unfortunately, that path leads to the other major barrier to achieving longer lasting drones: Reducing their weight.
Decreasing the weight of a drone is the easiest way to extend battery life. With less weight to lift, drones draw less power from batteries. However, decreasing weight often means using materials that are less stable, which decreases the overall lifespan of a drone.
Achieving the proper balance between weight and power is one of the key challenges for drone manufacturers. Nonetheless, finding solutions to this challenge is critical not just for sustaining the current state of the industry, but also for allowing the drone industry to expand.
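A back-of-the-envelope calculation shows why. Hover time is roughly the battery's energy divided by the power needed to keep the aircraft aloft, and that power grows with total weight, so adding batteries yields diminishing returns. The figures below are illustrative assumptions, not measurements of any particular drone.

```python
def hover_minutes(battery_wh, frame_kg, battery_kg, watts_per_kg=170):
    # Rough hover power scales with all-up weight; 170 W/kg is an assumed figure.
    total_kg = frame_kg + battery_kg
    return 60 * battery_wh / (watts_per_kg * total_kg)

print(round(hover_minutes(battery_wh=100, frame_kg=1.0, battery_kg=0.5)))  # ~24 min
print(round(hover_minutes(battery_wh=200, frame_kg=1.0, battery_kg=1.0)))  # ~35 min, not double
```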
Developing drones that fly longer
Creating drones that can fly for longer periods of time is crucial; not merely for extending the range of work that contemporary drones are performing, but also for expanding the areas in which future evolutions of drones will work. For new industries to take advantage of the utility drones can provide, drones must be properly developed and equipped to carry new and larger payloads.
Drones used in search and rescue operations provide an excellent example of this. For example, drones being used for search and rescue in Ukraine provide users with two-way audio. Whereas most other drones only carry tools for capturing photos and video, these drones must also come equipped with microphones and speakers, allowing pilots to hear what is happening in the drone’s vicinity or talk to people the drone encounters.
In addition, using drones in new industries means they will need to be prepared to encounter new, and often, unpredictable, flight conditions. When flight conditions become unpredictable, so does the drone’s battery usage. Again; extending battery time becomes a critical factor.
The use of Beyond Visual Line of Sight (BVLOS) drones is yet another area in which longer battery life is essential. While current regulations have limited the use of such drones, manufacturers are counting on BVLOS drones to make possible the drone delivery programs that Amazon and other companies have announced.
Increasing performance with new technology
When it comes to building a better drone battery, the development of hydrogen fuel cells for drone use has been heralded as a major innovation. When used with drones, these cells allow for flight times to be extended to two hours while also lowering the recharging time of batteries to as little as ten minutes. Unfortunately, these cells also cost thousands of dollars, which equates to a dramatic increase in drones’ costs.
Increased costs are also an issue when it comes to engineering lighter drones that still retain necessary stability. Using carbon fiber-reinforced composites increases costs while lowering weight. More affordable options for materials sacrifice durability, and shorten the life of drones.
Another option for increasing performance is implementing intelligent features that prolong the life of the drone and increase its effectiveness. These include obstacle avoidance technology, sensing systems, and failsafe protection. When coupled with advanced pilot training that reduces crashes and increases efficiencies in flight, this technology can lead to drones that fly longer, last longer, and provide greater returns on investment.
About the Author
Dr. Shaun Passley is the Founder of Zenadrone. He holds numerous master’s degrees from DePaul University, Benedictine University, and Northwestern University and has a PhD in Business Administration. In addition to founding ZenaTech, he is also Chairman & CEO of Epazz, Inc., an enterprise-wide cloud software company, and of Ameritek Ventures, a manufacturing company. ZenaDrone is an entirely bootstrapped venture that is aiming to help the agri sector in Ireland close its emerging labor gap through automation. Shaun can be reached online at firstname.lastname@example.org and at our company website https://www.zenadrone.com/
Factors of Authentication
If you’re studying for one of the security certifications like CISSP, SSCP, or Security+ it’s important to understand the different factors of authentication, and how they can be intertwined as multifactor authentication. These are commonly known as something you know (such as a password), something you have (such as a smart card), and something you are (using biometrics). A basic understanding of these topics can help you correctly answer many different questions on authentication on any of these certification exams.
A previous post covered identification, authentication, and authorization. As a reminder, identification occurs when a user (or any subject) claims an identity. Authentication occurs when the user provides proof of the identity, such as with a password. Authorization grants access to resources based on the user’s proven identity.
Pass the Security+ exam the first time you take it.
CompTIA Security+: Get Certified Get Ahead: SY0-401 Study Guide
Something You Know
The something you know factor includes passwords and personal identification numbers (PINs). This is considered the weakest form of authentication because users often use weak passwords, give them out, or write their passwords down.
A strong password is complex and includes at least eight characters. Complex means that the password uses a mixture of upper case, lower case, numbers, and special characters. Some documentation indicates using three of the four character types is enough, while other documentation states that a complex password has four character types. The key is that more character types results in a more complex password that is harder to crack. However, the bigger point is that many users create passwords with only a single character type.
Troy Hunt did a great analysis of passwords that were stolen from Sony’s web sites and published on the Internet. He found that half used only a single character type and only 1 percent used any non-alphanumeric characters. Some of the top passwords were very simple: seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, and abc123. More than 64 percent of the passwords were found in common password-cracking dictionaries. Additionally, when users had accounts on two separate Sony sites, over 92 percent of them used the same password.
Password policies are often used to ensure that users create strong passwords and change them often. Some common password policy settings are:
- Maximum password age. Requires users to change their password.
- Minimum length. Ensures passwords have a minimum number of characters.
- History. Remembers specific number of past passwords (such as last 5, or last 24 passwords). Prevents users from reusing the same passwords.
- Minimum password age. Prevents users from changing their password right away. Used with the password history to prevent users from changing their password multiple times to circumvent the password history.
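As a quick illustration of the complexity rule described above, the following sketch (just an example, not taken from any certification objective) checks length and counts how many of the four character types a password uses:

```python
import string

def character_types(password):
    return sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ])

def is_complex(password, min_length=8, min_types=3):
    # Some policies accept three of the four character types, others require all four.
    return len(password) >= min_length and character_types(password) >= min_types

print(is_complex("seinfeld"))    # False -- eight characters but only one character type
print(is_complex("IC@ug_2012"))  # True  -- upper, lower, digits, and special characters
```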
Looking for quality Practice Test Questions for the SY0-401 Security+ exam?
CompTIA Security+: Get Certified Get Ahead- SY0-401 Practice Test Questions
Something You Have
Smart cards and token, or fobs are common examples within the something you have factor of authentication. A smart card is a credit card sized card that holds key information about the user. Smart cards have certificates embedded in them using TLS and provide very strong authentication. This blog covers the differences between smart cards, a common access card (CAC), and a personal identity verification (PIV) card.
A fob (sometimes called a token) has an LED display that shows a number that changes regularly, such as every 60 seconds. This number is synchronized with a server. When users log into a website, they enter the number shown on the display to verify they have the token. This factor is often combined with another factor to provide multifactor authentication.
This book covers the new objectives effective Feb 1, 2012.
SSCP Systems Security Certified Practitioner All-in-One Exam Guide
Something You Are
The something you are factor uses biometrics to prove a user’s identity. Fingerprints are very commonly used for authentication, but there are many other examples. Biometrics are often divided into two categories: physical biometrics and behavioral biometrics.
- Physical biometrics are based on physical traits of an individual. They include fingerprints, thumbprints, handprints, palms, retina scans, and iris scans.
- Behavioral biometrics are based on behavioral traits of an individual. They include voice recognition, signature geometry, and keystrokes on a keyboard.
Biometric systems are susceptible to false readings. These are commonly known as:
- Type 1 error. False Reject Rate (FRR). This occurs when a biometric system incorrectly rejects an authorized user.
- Type 2 error. False Accept Rate (FAR). This occurs when a biometric system incorrectly identifies an unauthorized user as an authorized user.
Most biometric systems allow you to adjust the sensitivity of the system. For example, you can adjust it to minimize false rejections (FRR errors) but this will result in an increase in the false acceptances (FAR errors). The overall accuracy of a biometric system is identified with the crossover error rate (CER), where the FAR and FRR are equal. A biometric system with a lower CER is more accurate than one with a higher CER.
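The trade-off is easy to see with a small example. Given match scores for genuine users and impostors (the numbers below are made up for illustration), sweeping the acceptance threshold changes FRR and FAR in opposite directions, and the CER is the point where they meet:

```python
def far_frr(genuine, impostor, threshold):
    frr = sum(score < threshold for score in genuine) / len(genuine)    # Type 1: false rejections
    far = sum(score >= threshold for score in impostor) / len(impostor) # Type 2: false acceptances
    return far, frr

genuine  = [0.91, 0.85, 0.78, 0.95, 0.66, 0.88]  # scores from authorized users
impostor = [0.30, 0.55, 0.42, 0.71, 0.25, 0.48]  # scores from unauthorized users

thresholds = [t / 100 for t in range(101)]
cer_threshold = min(thresholds,
                    key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                      - far_frr(genuine, impostor, t)[1]))
far, frr = far_frr(genuine, impostor, cer_threshold)
print(f"Near threshold {cer_threshold:.2f}: FAR={far:.2f}, FRR={frr:.2f}")
```

Lowering the threshold reduces false rejections but raises false acceptances, which is exactly the sensitivity adjustment described above.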
Multifactor authentication combines two or three of the factors. Two common examples are:
- A user has a smart card and also uses a personal identification number (PIN)
- A user has a token and also enters a username and password
It’s important to realize that multiple authentication and multifactor authentication are not the same thing. For example, if a user enters a pin (in the something you know factor), and a password (also in the something you know factor), this is not multifactor authentication.
Safer internet day 2018- admirable goal, but impossible to achieve without considering IoT Security
“Safer internet day 2018” was celebrated on February 6th. It was an opportunity to stop and think about all the things that make our connected world great and how to ensure it remains so. In recent years, the awareness of the risks involved in the use of internet, social media platform, mobile devices and wearables has risen to a point where almost everyone knows the basics of cyber hygiene.
We are taught to choose complex passwords, not to share intimate details online and to buy online only at respectable websites. We also know that lawmakers have invested a great deal in enacting laws and regulations to maintain our safety and privacy – such as the EU GDPR.
We are aware of the dangers of identity theft, online bullying and denial of service attacks on our personal devices and corporate networks and websites. But the connected world is changing, and our perception of it should change too. Not too long ago, “The Internet” was a place you “went to” to connect with people. You sometimes literally walked to an internet café to get “online”.
The Internet of Things changes, well- everything
But the internet revolution does not stop there – the “Things” around us are also becoming increasingly connected, and, like our smartphones, are “always on”. But the big difference is that unlike those physical devices (be they handheld or placed on a table), connected devices are embedded in our home and work environment (and cars, too). We buy them, connect them and forget they are even there. We also pay less attention to securing them, and this opens the door for malicious actors to exploit these devices. To date, such devices have been exploited on a very basic level – hackers considered them small, connected and unprotected devices, and have recruited them to botnets or used them for mining cryptocurrencies. But as more sophisticated devices become the norm, hackers will also utilize their unique characteristics – they can record the audio and video of their surroundings (including your home), they can lock your doors or play with the temperature of your A/C.
Start educating now
This sounds alarming, and it should be. But while we (as a society and parents) are doing a decent job educating the population (and our own kids) about the dangers and best practices of the internet, we fail to do so when it comes to connected devices. My guess is that several more very high profile IoT attacks will do the trick and break the mental barrier that makes people feel safe around such devices. But you don’t need to wait until then. You can start by questioning the best practices of securing smart home IoT devices and asking your IoT service provider what they are doing to secure your connected device. IoT security solutions are now available and will be deployed very soon to boost security levels, but until then, and even after they are deployed, awareness and common sense will be required to ensure the true safety of the new internet.
Remember – from now on we cannot and should not differentiate between online security and safety and “offline” (yet connected) security and safety. Keep in mind you are always, to some degree, connected and monitored, and act accordingly.
Have a nice and secured day!
About Yotam Gutman
Artificial intelligence is one of those fields that intrigues people very much, and they often tend to connect it to things like science fiction. But the reality is that artificial intelligence is a part of our lives for quite some time and is developing fast.
If you use things like face recognition on your smartphone or mobile app for language learning that relies on AI algorithms, you are already in touch with AI. Transportation, health care, education, culture…AI is present in almost every field of life and business.
Artificial intelligence is a discipline that was invented in the 1950s as a way of emulating and replicating biological processes. Since its invention, people have expected a lot from AI, and there were several investments in artificial intelligence projects that unfortunately failed.
One of the first successful AI applications was Optical character recognition (OCR), which today is performed better by machines than humans.
Today it is possible to make deepfake videos or audio recordings using AI, where it is almost impossible to distinguish whether we are seeing or hearing a real person or a machine-generated image or voice. Moreover, it is possible to generate faces that don’t even exist. Maybe some of these things sound scary, but as with any other technology, artificial intelligence can be used for good or bad.
If used correctly and responsibly, AI can really help us automate lots of tasks, improve our lives and businesses and take them to a completely new level. AI is enabling us things that were unimaginable in the past like, for example, voice assistants, driverless cars and much more.
In the following videos, we will be talking about machine learning, deep learning and neural networks, with special emphasis on their connection to artificial intelligence. Find out more about this topic in the video and subscribe to our channel to get notified of new videos.
Contact us to find out more about this or any other topic; we would be more than happy to help.
Almost 40% of internet users globally don’t have internet freedom
As internet adoption continues to grow worldwide, more and more governments want to control what internet users can see and assert their authority over tech firms. These trends have resulted in a significant decrease in internet freedom and more restricted access to content.
According to the recent findings by the Atlas VPN team, almost 40% of internet users globally don’t have internet freedom. While Icelanders have the most liberty online, Chinese internet users suffer the most from content limitations and censorship.
The data is based on Freedom on the Net 2021 report released by Freedom House. The organization is non-profit, and it conducts research and advocacy on democracy, political freedom, and human rights. Each country received a score from 0 to 100 based on a checklist of questions.
As per the findings, internet freedom is not available to 39% of internet users in 2021. Complete loss of internet freedom includes the government’s decisions to block specific applications and technologies, technical filtering, and website blocking, as well as other forms of censorship. In addition, violations of user rights and restrictions on free speech are also common.
Elsewhere, the internet is partly free to 28% of internet users. For example, India, considered ‘partly free,’ ordered blocking apps developed by China-based companies and deliberately disrupted internet connection during protests. Similar internet control practices can be seen in other nations with partial web freedom.
Following up, 21% of the world’s internet population has access to internet freedom. No critical internet controls were observed by researchers in Canada, Costa Rica, Estonia, France, Iceland, Japan, and the United Kingdom. People in such countries can freely express their opinions without being persecuted and access content with no or minimum restrictions.
Internet freedom rankings
Restrictions of internet freedom especially can be felt in authoritarian or communist regimes. Censorship of world news and website blockage is used to hide criticism of the government administrations, leaving people uninformed about the actual violations committed in their country.
Iceland ranks the highest in internet freedom global rankings, achieving 96 points. Users in this island country benefit from worldwide connectivity, few limitations on online content, and robust protections for their rights online. Media and government websites have not been subjected to cyberattacks in a couple of years.
Estonia is second in internet freedom by accumulating 94 points. The Estonian government is well-known for its innovative approach to e-government with low restrictions on internet access and online content. Despite that, in December 2020, researchers found that the Estonian government was a client of surveillance company Circles that allows monitoring phone data.
Furthermore, Canada and Costa Rica share third place on the global internet freedom ranking, each scoring 87 points. Finally, Taiwan closes out the top 5 with 80 points.
On the flip side, China is rated last in internet freedom as they received only 10 points. China remains one of the most oppressive countries to its internet users. The Chinese Communist Party (CCP) has tightened its control over media and online speech, censoring criticism about authorities’ response to the pandemic and Chinese-produced vaccines.
Iran ranks as the second-worst country globally in internet freedom with 16 points. During anti-government rallies, the Iranian administration imposed localized internet shutdowns. They continued to limit access to independent news sites and a variety of social media and communication platforms.
Continuing the list, next is Myanmar, which suffered a significant decrease in internet freedom score since last year, going from 31 to 17 points in 2021. In February, the military coup influenced the decline of internet liberties, as the military junta shut down internet services. Cuba and Vietnam round out the list with 21 and 22 points, respectively.
As the world becomes more digital, governments want to have more control over people’s online presence. Some governments have taken complete control of what people should see on the internet. A shared global vision of free and open internet for everyone is a must if people want to keep their digital privacy safe.
In this blog, Roman Shrestha, a researcher at Intelligent Voice, looks at an important, but often overlooked, machine learning challenge: “How to identify different bird songs in the wild?”. An effective, automatic wildlife monitoring system based on bird bioacoustics, which can support the manual classification done by an ornithologist or an expert birder, can be pivotal for the protection of the environment and certain endangered species. Of more than academic interest, this has applications as diverse as tracking climate change and identifying the locations where videos of child abuse were shot. Roman’s unique approach shows a significant improvement over the state of the art.
Birds embody peculiar phonic and visual traits that distinguish them from 10,000 distinct bird species worldwide. Birds are well-known for their instinctual ability to promptly respond to changes in their environment, providing reliable insights on its ecological state. Considering their gift of flight, small size and propensity to lodge in trees and bushes, tracking them visually can be an onerous task. Hence, the majority of non-invasive automatic wildlife monitoring systems rely on avian bioacoustics.
In modern machine learning, classification of birds ‘in the wild’ is still considered an esoteric challenge owing to the convoluted patterns present in bird songs along with background noise, and the complications that arise when numerous bird species are present in a common setting. To overcome these challenges, we have implemented a novel Faster Region-Based Convolutional Neural Network (R-CNN) bird audio diarization system that incorporates object detection in the spectral domain for bird-specific spectral pattern recognition from spectrograms.
Spectrograms generate distinct visual patterns based on the energies possessed by avian vocalisation and these patterns differ for every bird species. The Faster R-CNN model is capable of learning and performing object detection in the spectral domain for effective spectral pattern recognition thereby providing an important insight on which bird sang when.
Diarization, commonly known as the “who spoke when?” problem, is the process of partitioning an input audio stream into homogeneous segments according to speaker identity.
In terms of bird audio diarization, the system accepts an input audio stream and recognizes all the bird species present in the audio along with the precise timestamp of when they occur in the recording as illustrated in the figure below which cannot be achieved by using a traditional classification approach.
The Faster Region-Based Convolutional Neural Network (R-CNN) is a specialized Convolutional Neural Network (CNN) architecture for object detection presented by Ross Girshick, Shaoqing Ren, Kaiming He and Jian Sun in 2015 that can perform highly accurate and speedy object detection. Faster R-CNN is the preferred architecture because it can be easily customized and trained with custom data and performs better compared to generic methods like Selective Search and EdgeBoxes. The four major components of this architecture are discussed below.
CNN Feature Extractor: The CNN Feature extractor extracts fixed length features from the image.
Region Proposal Network (RPN): Based on the features extracted, the RPN generates bounding boxes to locate the object of interest (spectral patterns in our case) with various scales and aspect ratios.
The Classifier: The Faster R-CNN classifier is responsible for detecting objects from the Regions of Interest specified by the RPN.
Scoring: A confidence score between 0 and 1 is provided for the detected object along with the generation of bounding box to locate the object of interest within the image.
The Bird Songs from Europe corpus, a subset of the Xeno-canto database containing well-labelled intrinsic audio recordings of the 50 most common European bird species, was used for training and evaluating the Faster R-CNN model.
Initially, the raw audio input was downsampled from 16 KHz to 8KHz and the mp3 files were converted to wav. Then, the audio files were segmented into uniform 2 second chunks and audio augmentation was performed with 50% overlap followed by merging the audio segments randomly between different species to simulate the presence of multiple bird species. A Pydub based bird audio detector operates on the recordings for automatically labelling the segments containing the bird species within the audio. The spectrograms were generated from the audio segments and partitioned for training (80%), validation (10%), and testing (10%).
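A simplified sketch of that preprocessing, using the Pydub library mentioned above, might look like the following. This is an illustration rather than the authors' exact code, and the file name is a placeholder.

```python
from pydub import AudioSegment

def load_8khz(path):
    # mp3 -> mono audio, downsampled to 8 kHz, ready to export as wav
    return AudioSegment.from_mp3(path).set_frame_rate(8000).set_channels(1)

def two_second_chunks(audio, chunk_ms=2000, hop_ms=1000):
    # hop_ms = chunk_ms // 2 gives the 50% overlap used for augmentation
    return [audio[start:start + chunk_ms]
            for start in range(0, max(len(audio) - chunk_ms, 0) + 1, hop_ms)]

def mix_species(segment_a, segment_b):
    # Overlaying segments from different recordings simulates multiple species at once
    return segment_a.overlay(segment_b)

chunks = two_second_chunks(load_8khz("bird_recording.mp3"))
# Spectrograms are then generated per chunk, labelled, and split 80/10/10
# into training, validation and test sets.
```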
A Faster R-CNN model with ResNet50 Feature Pyramid Network (FPN) backbone pre-trained on the COCO dataset with a Region Proposal Generator was used to train the model. The spectrograms along with labels highlighting the corresponding annotations were provided for training the model. For effective transfer learning, the latest version of Fastai makes use of several fit one cycle iterations to fine-tune modules with pre-trained weights more efficiently. Hence, the Fastai library was implemented utilising functionalities from the IceVision package.
Using an NVIDIA GeForce GTX 1080 Ti GPU, the total time to train the model was 8 days and 12 hours, with an average training time of 3 hours and 13 minutes per epoch. The model was trained for a total of 60 epochs, during the first 5 epochs the ResNet50 FPN backbone was frozen and only the model head was trained. This was followed by the remaining 55 epochs to train all the layers and adjust the parameters accordingly.
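For readers who want a concrete starting point, the sketch below sets up an equivalent model with plain torchvision rather than the IceVision/Fastai wrappers used in the paper; it is an assumption-laden illustration (50 species plus a background class), not the authors' training script.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 50 + 1  # 50 bird species + background

# ResNet50-FPN backbone pre-trained on COCO (newer torchvision releases use the weights= argument)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Swap the COCO box-predictor head for one sized to the bird classes
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Stage 1: freeze the backbone and train only the heads for a few epochs
for param in model.backbone.parameters():
    param.requires_grad = False

# Stage 2: unfreeze everything and fine-tune all layers
for param in model.backbone.parameters():
    param.requires_grad = True
```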
The weights of the model instance exhibiting minimum validation loss were saved and used for generating predictions on the unseen test set for inference. During inferencing, this trained Faster R-CNN classifier was used to generate predictions for the spectrograms from the unseen test set. The figure below displays a sample prediction, where an audio input provided to the bird audio diarization system whose ground truth values were correctly predicted and accurate bounding boxes were generated.
The results obtained by the trained Faster R-CNN model on the test set were reported as Diarization Error Rate (DER) of 21.81, Jaccard Error Rate (JER) of 20.94 and F1, precision and recall values of 0.85, 0.83 and 0.87 respectively. Three models which have used a similar number of species from comparable datasets were used for validating the performance of our model.
Silla Jr. and Kaestner approached acoustic bird species classification with 48 species from another subset of the Xeno-canto database, using the Global Model Naive Bayes (GMNB) algorithm, and reported a 0.50 F1 score.
Incze et al. finetuned a pre-trained MobileNet-based CNN architecture to classify bird species from another subset of the Xeno-canto database. Even though the approach worked well for fewer species, the accuracy dropped to 20% while classifying 50 species.
F.Lima performed transfer learning on the pre-trained VGG16 CNN architecture and achieved a bird audio classification accuracy of 73.5% on the evaluation set, on the same Bird Songs From Europe dataset consisting of 50 classes.
Table 1 outlines the performance of these three approaches against the performance achieved in this work, based on an evaluation of bird species obtained from the Xeno-canto database. From the obtained results, it can be observed that the Faster R-CNN model outperforms standard classification approaches and has the potential to cope with the challenges associated with automated biodiversity monitoring in the wild.
A huge amount of research has been invested to build a fully functional automated non-invasive biodiversity monitoring system. However, bird songs can be easily occluded by various environmental noises and other simultaneously vocalising species which can seriously impact the accuracy of these systems. Compared to the traditional classification approaches, bird audio diarization is able to separate intrinsic avian vocalisations into separate homogeneous segments according to their species and determine the length of their songs alongside identifying the multiple simultaneously vocalising species in an ecosystem.
R. Shrestha, C. Glackin, J. Wall and N. Cannings, “Bird Audio Diarization with Faster R-CNN”, 30th International Conference on Artificial Neural Networks (ICANN), 14 – 17 Sep 2021 , Springer. [Online] https://doi.org/10.1007/978-3-030-86362-3_34
F. Lima, “Bird songs from Europe (Xeno-canto),” 2020. [Online]. Available: https://doi.org/10.34740/kaggle/dsv/1029985
S. Ren, K. He, et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” Adv Neural Inf Process Syst (NeurIPS), 2015.
Howard, S. Gugger, “Fastai: A layered API for Deep Learning”, Information, vol. 11, no. 2, p. 108, 2020
Vazquez, F. Hassainia, “Icevision: An agnostic object detection framework,” Github, 2020. [Online]. Available: https://github.com./airctic/icevision
N. Silla Jr., C.A.A. Kaestner, “Hierarchical classification of bird species using their audio recorded songs,” IEEE Int Conf Systems, Man, and Cybernetics, 2013
Incze, H. Jancso, et al., “Bird sound recognition using a Convolutional Neural Network,” IEEE Int Symp Intelligent Systems and Informatics (SISY), 2018
Lima, “Audio classification in R,” poissonisfish, 2020. [Online]. Available: https://poissonisfish.com/2020/04/05/audio-classification-in-r/ | <urn:uuid:e6144295-e19c-45d9-a122-7f6339b4dc79> | CC-MAIN-2022-40 | https://intelligentvoice.com/2022/07/21/monitoring-endangered-bird-species-using-deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00121.warc.gz | en | 0.914767 | 2,017 | 3.140625 | 3 |
How do hackers hack phones? Several ways. Just as there are several ways you can prevent it from happening to you.
The thing is that our phones are like little treasure chests. They’re loaded with plenty of personal data, and we use them to shop, bank, and take care of other personal and financial matters—all of which are of high value to identity thieves.
However, you can protect yourself and your phone by knowing what to look out for and by taking a few simple steps. Let’s break it down by first taking a look at some of the more common attacks.
Types of Smartphone Hacks and Attacks
Whether hackers sneak it onto your phone by physically accessing your phone or by tricking you into installing it via a phony app, a sketchy website, or a phishing attack, hacking software can create problems for you in a couple of ways:
- Keylogging: In the hands of a hacker, keylogging works like a stalker by snooping information as you type, tap, and even talk on your phone.
- Trojans: Trojans are types of malware that can be disguised in your phone to extract important data, such as credit card account details or personal information.
Some possible signs of hacking software on your phone include:
- A battery that drains way too quickly.
- Your phone runs a little sluggish or gets hot.
- Apps quit suddenly or your phone shuts off and turns back on.
- You see unrecognized data, text, or other charges on your bill.
In all, hacking software can eat up system resources, create conflicts with other apps, and use your data or internet connection to pass along your personal information into the hands of hackers—all of which can lead to some of the symptoms listed above.
These are a classic form of attack. In fact, hackers have leveled them at our computers for years now too. Phishing is where hackers impersonate a company or trusted individual to get access to your accounts or personal info or both. And these attacks take many forms, like emails, texts, instant messages, and so forth, some of which can look really legitimate. Common to them are links to bogus sites that attempt to trick you into handing over that info or that install malware to wreak havoc on your device or likewise steal information. Learning how to spot a phishing attack is one way to keep yourself from falling victim to one.
Professional hackers can use dedicated technologies that search for vulnerable mobile devices with an open Bluetooth connection. Hackers can pull off these attacks when they are range of your phone, up to 30 feet away, usually in a populated area. When hackers make a Bluetooth connection to your phone, they can possibly access your data and info, yet that data and info must be downloaded while the phone is within range. As you probably gathered, this is a more sophisticated attack given the effort and technology involved.
SIM card swapping
In August of 2019, the CEO of Twitter had his SIM card hacked by SIM card swapping scam. SIM card swapping occurs when a hacker contacts your phone provider, pretends to be you, and then asks for a replacement SIM card. Once the provider sends the new SIM to the hacker, the old SIM card will be deactivated, and your phone number will be effectively stolen. This means the hacker has taken control of your phone calls, messages, and so forth. This method of hacking requires the seemingly not-so-easy task of impersonating someone else, yet clearly, it happened to the CEO of a major tech company. Protecting your personal info and identity online can help prevent hackers from impersonating you to pull off this and other crimes.
Ten tips to prevent your phone from being hacked
While there are several ways a hacker can get into your phone and steal personal and critical information, here are a few tips to keep that from happening:
- Use comprehensive security software on your phone. Over the years, we’ve gotten into the good habit of using this on our computers and laptops. Our phones? Not so much. Installing security software on your smartphone gives you a first line of defense against attacks, plus several of the additional security features mentioned below.
- Update your phone and its apps. Aside from installing security software, keeping current with updates is a primary way to keep you and your phone safe. Updates can fix vulnerabilities that cybercriminals rely on to pull off their malware-based attacks. Additionally, those updates can help keep your phone and apps running smoothly while also introducing new, helpful features.
- Stay safer on the go with a VPN. One way that crooks can hack their way into your phone is via public Wi-Fi, such as at airports, hotels, and even libraries. These networks are public, meaning that your activities are exposed to others on the network—your banking, your password usage, all of it. One way to make a public network private is with a VPN, which can keep you and all you do protected from others on that Wi-Fi hotspot.
- Use a password manager. Strong, unique passwords offer another primary line of defense. Yet with all the accounts we have floating around, juggling dozens of strong and unique passwords can feel like a task—thus the temptation to use (and re-use) simpler passwords. Hackers love this because one password can be the key to several accounts. Instead, try a password manager that can create those passwords for you and safely store them as well. Comprehensive security software will include one.
- Avoid public charging stations. Charging up at a public station seems so simple and safe. However, some hackers have been known to “juice jack” by installing malware into the charging station. While you “juice up,” they “jack” your passwords and personal info. So what to do about power on the road? You can look into a portable power pack that you can charge up ahead of time or run on AA batteries. They’re pretty inexpensive and easy to track down.
- Keep your eyes on your phone. Preventing the actual theft of your phone is important too, as some hacks happen simply because a phone falls into the wrong hands. This is a good case for password or PIN protecting your phone, as well as turning on device tracking so that you can locate your phone or even wipe it remotely if you need to. Apple provides iOS users with a step-by-step guide for remotely wiping devices, and Google offers up a guide for Android users as well.
- Encrypt your phone. Encrypting your cell phone can save you from being hacked and can protect your calls, messages, and critical information. To check if your iPhone is encrypted can go into Touch ID & Passcode, scroll to the bottom, and see if data protection is enabled (typically this is automatic if you have a passcode enabled). Android users have automatic encryption depending on the type of phone.
- Lock your SIM card. Just as you can lock your phone, you can also lock the SIM card that is used to identify you, the owner, and to connect you to your cellular network. By locking it, keeps your phone from being used on any other network than yours. If you own an iPhone, you can lock it by following these simple directions. For other platforms, check out the manufacturer’s website.
- Turn off your Wi-Fi and Bluetooth when not in use. Think of it as closing an otherwise open door. There are several attacks that a dedicated and well-equipped hacker can make on devices where Wi-Fi and Bluetooth are open and discoverable. Likewise, while not a hack, some retailers will track your location in a store using Bluetooth technology for marketing purposes—so switching it off can protect your privacy in some situations as well. You can easily turn off both from your settings and many phones let you do it from a pulldown menu on your home screen as well.
- Steer clear of third-party app stores. Google Play and Apple’s App Store have measures in place to review and vet apps to help ensure that they are safe and secure. Third-party sites may not have that process in place. In fact, some third-party sites may intentionally host malicious apps as part of a broader scam. Granted, cybercriminals have found ways to work around Google and Apple’s review process, yet the chances of downloading a safe app from them are far greater than anywhere else. Furthermore, both Google and Apple are quick to remove malicious apps once discovered, making their stores that much safer.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:46c200c9-8782-4e69-9ba3-668b9e3e3f27> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/?p=99360 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00121.warc.gz | en | 0.944337 | 1,799 | 2.78125 | 3 |
You are logging in to your email account, but it’s asking you to enter your password again. Should be easy enough, as you use the same password for everything — typing it in is muscle memory now. But it doesn’t work. You try again, with the same worrying results. You reset the password and have to confirm it through a secondary email. In a panic, you realize you’re locked out of that one too.
Make a list of all your online accounts and imagine they’re all secured with the same password. If a hacker figures out your password and breaks through, your entire network is compromised.
This is the reality of reusing the same password for all your online accounts. This guide will help you never make that mistake again.
What are the dangers of reusing the same password?
First, we need to understand why password reuse is so common. The reason is simple — it’s just easier. With the modern need to create multiple online accounts, the number of unique passwords the average user has to remember verges on double digits.
Should you use the same password for every account? The answer is always no. Despite warnings from cybersec experts, statistics prove that password reuse is still prevalent. A 2019 Google poll of password reuse statistics revealed that 52% of US citizens use the same password across different accounts, and 13% use the same password for all accounts. Alongside those alarming numbers, 41% of Americans think it’s impossible to remember different passwords for all their accounts.
With such statistics, it’s hardly surprising that password reuse is still rampant. But the convenience of password reuse does not outweigh the risks it poses.
If a hacker were to brute-force their way into one of your shopping accounts by repeatedly trying different combinations until they gain access, they will most certainly try to use that password with the rest of your accounts. Your bank account, emails (personal and work), home network – they’re all at the mercy of the hacker because you reuse the same password for everything.
How do I get out of the habit of reusing the same password?
It’s hard to break a habit of convenience. How can you be expected to remember complex passwords for all your online activities? Writing them all down in a notebook is asking for trouble, and reusing the same password but with minor modifications isn’t the wisest choice either.
So where do you start when creating a secure password? First, you should never utilize personal information as a password. Your date of birth or that of your loved ones, old addresses, or even old schools are the first things a hacker will try.
Luckily, with a password manager like NordPass, all the stress of creating and remembering your passwords is a thing of the past. Here’s how it can help:
Automatically create high-strength passwords. The perfect password should be a jumble of letters, numbers, and symbols — something completely illogical that no hacker could guess. NordPass does this for you with its Password Generator. Or you can create your own password, and NordPass will assess its strength. This feature is available both in the web and in-app versions.
No need to remember passwords. NordPass will automatically fill in your passwords, streamlining your online experience. When you log in to an online account for the first time, NordPass will prompt you to save your details automatically. The next time you visit, you can fill in all the details with a single click.
All your passwords in one secure vault. NordPass will keep your passwords locked and protected by XChaCha20 encryption – an algorithm that has been embraced by Silicon Valley tech experts. You’re the only one who has access to the vault, as NordPass has a strict no-knowledge policy.
Completely free. All you have to do is sign up — no credit card needed. You can save an unlimited amount of passwords, keep sensitive notes and information, and synchronize across different devices.
Only 24% of people use a password manager. Get ahead of the curve and subscribe to NordPass today.
Subscribe to NordPass news
Get the latest news and tips from NordPass straight to your inbox. | <urn:uuid:751ab196-c5d6-4c1d-89e7-06a72b60c398> | CC-MAIN-2022-40 | https://nordpass.com/blog/stop-reusing-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00121.warc.gz | en | 0.917778 | 880 | 2.515625 | 3 |
Nested virtualization is a complex process that involves running virtual machines within virtual machines. This process is made possible through the use of hypervisors, which are specialized software programs that manage the operating systems needed within virtual environments. Hypervisors are responsible for allocating essential resources like processing power, memory, and other resources that your virtual environments require to function.
What Are the Benefits of Nested Virtualization?
Nested virtual machines offer many advantages over traditional on-premise solutions. Here are a few of the most notable benefits of nested virtualization:
- Enhanced flexibility. The ability to host virtual environments within virtual environments allows you to develop and test software on your own terms and provides you with flexible sandbox environments that you can adapt to your needs.
- Significant cost savings. Physical equipment is expensive, maintaining it adds to those costs, and having the right staff on hand costs even more. With nested virtual machines, you only pay for the resources you are using. That means no overspending on equipment that you’re not maximizing fully.
- Support for multiple hypervisors. Not all hypervisors are created equally. You may require a specific hypervisor that is compatible with the virtualization environment that you are creating. Most cloud-based environments will support the most popular hypervisors on the market.
- Easy to scale. The goal of any company is to grow. Cloud-based nested virtualization is a scalable solution. You can easily add additional processing power, memory, and other essential resources as needed.
When Should I Use Nested Virtualization?
This technology provides you with a scalable solution that your company can integrate to boost productivity, improve the customer experience, and ensure that your software works as intended in the environments that your customers will use it in.
Nested Virtualization Performance Explained
The nested virtualization performance that you achieve will depend on the virtual environments that you deploy. You can select the number of resources that you want to allocate when deploying your nested virtualizations.
Hypervisors play an important role in regulating performance in your virtual environments. They bridge the gap between the hardware and the operating system, ensuring that your environments are able to perform as expected.
There are two main types of hypervisors that you can implement in your sandbox environments:
- Type 1 “bare metal” hypervisors are installed directly on the target computer where the virtual environments will exist. This is the ideal option if you intend to run multiple virtual servers on a single computer. The most popular type 1 hypervisors include Microsoft Hyper-V, Citrix XenServer, and VMware ESXi.
- Type 2 “hosted” hypervisors are necessary for network virtualization in the cloud. These hypervisors are a form of hosted software that operates within the hosted system itself. A type 2 hypervisor is still capable of running multiple operating systems. They are the ideal choice for companies that want to test multiple operating systems. The most popular type 2 hypervisors include Microsoft VirtualPC, VMWare, and VMWork. | <urn:uuid:0b3d1174-7ed9-4fd3-962c-a4f5d4e87084> | CC-MAIN-2022-40 | https://www.cloudshare.com/virtual-it-labs-glossary/what-is-nested-virtualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00121.warc.gz | en | 0.924433 | 620 | 2.734375 | 3 |
Picture this: "Webhosting Company loses 13 million plaintext passwords" in bold at the head of a blog or a paper.
Few headlines can send this many chills down the backs of an IT security team, and this is one. Even without the jump-scares, that's how a security team's horror movie looks.
As long as the Internet exists, transferring data between two or more endpoints will always be challenging. There are vulnerabilities in file transfer from the moment a user logs in. Usernames, passwords, encryption, and data are all viable targets.
A Detour to FPS and Telnet Protocol
An article about SSH that doesn't pay homage to its predecessors is incomplete. Long live FPS and Telnet protocols; the foundations of managed file transfer as we know it today.
All forms of data transfer occur across two endpoints: a client and a server. A file transfer protocol such as FPS or SFPS is what facilitates this transfer. For its many shortcomings, being unencrypted is FPS's biggest one.
As users started sharing more crucial and confidential information across client-server endpoints, there was a need for enhanced security. This need gave rise to symmetric password-based authentication through login protocols such as Telnet and RSH.
Login protocols would require a client and server to have a matching key and password. The client would send the key to the server, and if they matched, bidirectional data transfer could occur.
The Rise of SSH Protocols
Symmetric password-based authentication would ensure data protection, but the celebration would be short-lived. It was not long before a myriad of issues reared their ugly heads.
Think of everything from IP, DNS, and routing spoofing to packet sniffing and denial of service attacks. The possibilities of threats were endless.
A malicious user, for example, could change a client's IP address to their own and harvest unencrypted information, including plain text passwords and crucial data.
Subsequently, another malicious user could access usernames and intentionally enter wrong passwords leading to a denial of service for key clients.
Telnet, RSH, and FPS protocols were no longer safe. A breakthrough was long overdue. In 1995, a certain Tatu Ylönen would develop Secure Shell Protocol for his personal use.
Fast forward fifteen years later, and SSH protocol is used in millions of companies worldwide.
SSH File Transfer Protocol Stripped Down to the Bone
Secure Shell (SSH) was born out of the inherent insecurity associated with FTP and Telnet protocols. Unlike Telnet that used two channels for client-server authentication, SSH would use one channel. A client would send their key to the server, and if the server's key matched, bidirectional transfer of data could occur.
Moreover, SSH used industry-standard encryption such as AES to secure data. With encryption, malicious users could not interpret harvested data even after a breach. It doesn't stop there.
SSH uses hashing algorithms such as the SHA-2 to ensure that hackers don't corrupt data during its bi-directional transfer.
Industry-standard encryption, check. Hashing algorithms and multiple upgrades, check. Could asymmetric identification be the cherry on top?
SSH Authentication and Asymmetrical Identification
SSH allowed asymmetric identification. In this case, servers could use cryptography to ensure that the client and server keys were different. This assurance would make man-in-the-middle attacks almost impossible since a hacker could obtain either of the two passwords but not both.
How the SSH Protocol Works
Step 1: The SSH client initiates the connection by contacting the SSH server
Step 2: The SSH server sends the public key
Step 3: Both the SSH server and SSH client negotiate their protocols and constraints
Step 4: The user can then login and access the server host
Another upside of using an SSH protocol is the various options for user authentication. A user can choose these depending on the level of security they desire. They include:
- Password-Based Authentication
In password-based authentication, the server and the client use a password and key to authenticate the sincerity of the connection.
- Key Based Authentication
Key-based authentication applies to the use of public and private keys. A server has a secret private key and a public key that it sends when a client requests it.
The private and public keys are not always similar. However, they undergo algorithmic changes and calculations that provide a similar result. If the algorithms calculate a resultant match between public and private keys, the server grants user access.
When to Use the SSH Protocol
The SSH protocol was a revolutionary improvement. Its many applications have found their way into day to day operations of several B2B and B2C companies. Some of the applications of the SSH protocols include:
- File Transfer
One word—encryption. Because SSH makes good use of AES algorithms, it has a special place in the hearts of companies that require the secure transfer of data and files across endpoints.
- Delivery of Software Updates and Patches
Using passwords to authenticate software updates or patches between a single server and millions of users is begging for chaos. Think updates from Tesla to its millions of cars or Apple to its billions of iPhones. SSH enables you to automate authentication and pass seamless updates and patches through data transfer.
- File Transfer Automation
Using legacy systems, mass file transfer between you and your clients would be a massively time-consuming undertaking without the benefit of centralized monitoring and control. Requiring clients to remember passwords to receive files correctly would also be disastrous. Because the SSH protocol automates authentication, automatic file sharing is a lot easier.
- Remote Maintenance of Crucial Network Infrastructure
The days of manually managing all crucial infrastructure are long gone. These days, your IT teams manage their operating systems, routers, and server hardware remotely. This scenario creates the need for a secure and automated authentication system for data transfer, the best one being SSH.
- Reducing the Reliance on Password Management
The days of symmetric password and key authentication were nothing short of hell for many IT firms. Furthermore, storing millions of passwords in a single database was always a disaster waiting to happen. SSH and private and public keys go a long way to automate server access.
- Automated Machine-to-Machine Processes
Processes such as backups, database updates, and system health monitoring applications across millions of machines could be both risky and time-consuming. Automated authentication allows machine-to-machine process authentication by transferring data and keys across millions of machines automatically.
- SSH and Single Sign-On: A Match Made in Heaven
These past few years have seen SSH find its largest application yet. The ability of SSH to automate authentication has birthed Single Sign-On (SSO) and Password-free access.
In other words, your clients no longer have to enter their passwords each time they access a server or switch between servers. This feature has cut down on login capabilities, and increased signups since customers flow in the path of least resistance.
Reap the Benefits of SSH Today
There is a fine line between satisfactory and excellent when it comes to data security, and MOVEit is here to help you cross it. We leverage secure transfer protocols such as SSH and SFPS together with years of experience to offer unmatched secure file-sharing capabilities. If you are set to cross the bridge from your current data security situation to a whole new level of file sharing security, contact us today. | <urn:uuid:c3ba7f44-2a1e-4fa6-99ec-f807e8f87fd9> | CC-MAIN-2022-40 | https://www.ipswitch.com/blog/ssh-file-transfer-protocol | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00321.warc.gz | en | 0.931808 | 1,548 | 2.9375 | 3 |
Federal Trade Commission targets mobile games designed for children
Privacy in mobile games has become a very problematic issue, according to the Federal Trade Commission, a government agency focused on consumer rights. According to the agency, the developers of mobile games that target children are not doing enough to protect the privacy of these consumers. These mobile applications often collect information from their users, such as name, email address, and even financial information provided by parents. If these applications are compromised by malware, this information could be exploited, with catastrophic consequences.
Mobile games for children are not developed with privacy in mind
Developers of mobile games, especially those targeting children, rarely have privacy as a primary concern. Younger consumers are not necessarily considered to have access to any vital information that would attract the interest of a hacker, but this is not usually the case. The Federal Trade Commission notes that many young consumers make use of their parent’s information, often without the knowledge of their parent. While this practice is typically rare, the growing popularity of mobile devices and their reach to a younger audience is causing this practice to spike.
Developers not supplying enough information
Moreover, the Federal Trade Commission suggests that developers of mobile games have not done enough to provide parents with the information they need to make informed choices regarding the applications that their children use. Mobile games designed for younger consumers are often marketed in a way that would be appealing to children, not adults. The agency notes that only 15% of the mobile games it examined for its investigation provided information letting parents know that there are in-app advertisements, many of which are designed to acquire information from the person following them.
Privacy continues to be a major concern for mobile consumers
Privacy continues to be a hot topic in the mobile space. Companies like Apple have run into legal trouble in the past over their privacy and security methods. Privacy concerns are beginning to affect the mobile applications market, with consumers becoming less willing to purchase or download a particular app unless they can be convinced that their information is protected in some way. | <urn:uuid:b6e95b46-9a1f-4bc4-8deb-48ae0bb73d0e> | CC-MAIN-2022-40 | https://www.mobilecommercepress.com/mobile-games-for-kids-may-not-be-as-secure-as-they-should/851210/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00321.warc.gz | en | 0.964213 | 408 | 2.578125 | 3 |
A White House proposal to curb the use of antibiotics in food animals drew concerns from both sides of the issue on Tuesday.
Meat producers said antibiotics used on livestock and poultry are strictly regulated and safe for humans. Environmental advocates, meanwhile, said the policy would continue to enable the drugs’ "irresponsible" use on farms.
The Obama administration convened dozens of stakeholders as part of a "White House Forum on Antibiotic Stewardship" this week. Antibiotics are often used to prevent disease or speed up development among food animals, but observers argue overuse could contribute to the rise of drug-resistant bacteria.
The White House effort, in part, directs federal agencies to seek "responsible antibiotic-use policies" in its cafeteria food purchases and commits the Presidential Food Service to "to serving meats and poultry that have not been treated with hormones or antibiotics."
The North American Meat Institute joined the forum, and CEO Barry Carpenter expressed hope that the initiative would "help lead to meaningful steps to best ensure both human and animal health.”
NAMI officials, however, also expressed reservations about the language regarding the Presidential Food Service. They said hormones and antibiotics are distinct substances with wildly different regulations and uses. The group also objected to the use of the word "treated."
"Not utilizing antibiotics when a veterinarian deems it appropriate could pose an animal welfare issue," the group said.
By contrast, the Natural Resources Defense Council issued a statement arguing that although the White House proposal would help build the market for "responsible" antibiotic use in the food industry, it doesn't go nearly far enough.
"The federal policy should halt all routine use of medically-important antibiotics, not just one category of routine use," said NRDC health attorney Mae Wu.
Many restaurant chains and food producers are already responding to broader concerns from consumers by cutting back on antibiotic use. Still, the U.S. Food and Drug Administration found that the food animal use of antibiotics considered “medically important” to humans increased by 20 percent between 2009 and 2013. | <urn:uuid:e4eba153-f86d-4fc3-ac7b-b8712dd28e62> | CC-MAIN-2022-40 | https://www.mbtmag.com/global/news/13213677/meat-industry-environmentalists-critique-white-house-antibiotic-efforts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00321.warc.gz | en | 0.944767 | 416 | 2.6875 | 3 |
With this week being Anti-Bullying week, we thought we’d create a blog devoted to the impact of cyber bullying and some top tips on how to spot the signs.
Cyber Bullying – What is it?
Cyber bullying is becoming more and more widespread in recent times with the influx of new technologies, new social media platforms and the increased sense of anonymity that comes as a result of living in an increasingly digitalised culture. Cyber bullying is defined as the use of any digital technology to threaten, tease, upset or humiliate another person. Unfortunately, bullies have a whole range of options open to them when it comes to targeting their victim. This can take the form of text messages, Facebook messaging, online games, chat apps and social networking sites, proving that bullying doesn’t end at home. Somewhere once thought of as a sanctuary can actually become just another place where a child (or adult) can be bullied and threatened. Just earlier this year we witnessed the unfortunate impact of popular AI chat app ‘SimiSimi’ as it was manipulated to display nasty messages in relation to online users.
What Makes Cyber Bullying especially Ugly?
All forms of bullying are ugly and can have detrimental effects on the target – from decreased self-confidence and self-worth to more extreme cases of depression and suicide. Cyber bullying is unique in that it is widespread instantaneously – the moment the send button is clicked a vast audience can be affected. It is repetitive, with multiple digital platforms from which to carry out attacks and it never sleeps- it can impact at any time of day or night.
Cyber bullying is on the rise, in fact 1 in 5 teenagers and 40% of adults have experienced some form of it. It’s also hard to track down, as although it’s easy to collate evidence, it’s sometimes hard to find the perpetrators due to increased anonymity online.
Cyber Bullying Techniques
With a new form of bullying comes many new methods of carrying out attacks with real emotional affects. Here are just a few:
– Catfishing: Posing as someone else and luring a victim into creating an online relationship.
– Cyber Stalking: Sending repetitive threats of physical harm via online platforms.
– Fraping: Logging in on someone’s social media account, impersonating them and posting inappropriate content.
– Outing: Sharing personal and private information and media about someone online with an intention to humiliate them.
– Griefing: Abusing users online via online gaming channels.
How to Spot the Signs:
It can be hard to spot the signs when someone is being bullied but here are a few to look out for:
– Becomes nervous when receiving a text/communication on their mobile device.
– Seems uneasy about going to school or work and pretends to be ill.
– Unwilling to share information about online activity.
– Unexplained anger or depression.
– Abruptly shutting off/ walking away from a device mid-use.
– Unexplained weight loss/gain.
Cyber Bullying – What You Can Do to Help
1. Talk and show support. Find the right time to approach the person and talk to them if you think they are being bullied. Create a plan for how you’ll help them to get through it.
2. Don’t retaliate – Advise them not to respond to abusive messages.
3. Keep evidence – Take screenshots for proof.
4. Block the bullies – In the case of repetitive abuse block the sender and report them to the social/gaming online platform.
5. Keep the support going- Check in and let them know you are there for them. Perhaps suggest counselling to help them deal with the affects.
6. Take it further – In extreme cases and where you feel the victim is in danger consider informing the police.
For more information, this website is a great resource.
Have you or someone you know been the victim of cyber bullying? What advice would you give? | <urn:uuid:5d85ddcb-8679-4324-a958-9076fc35ae30> | CC-MAIN-2022-40 | https://www.metacompliance.com/da/blog/cyber-security-awareness/the-ugly-truth-about-cyber-bullying | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00321.warc.gz | en | 0.939915 | 842 | 3.328125 | 3 |
The Ultimate Guide to Cryptocurrency
‘Digital money’ was ushered in with the 2008 invention and 2009 launch of Bitcoin. While still the most popular and nominally valuable of them all, there are thousands of cryptocurrencies available in the market today. Cryptocurrencies are created with encryption algorithms and are intended as a store of value or a means of exchange, and are generally (but not always) based on blockchain technology. Blockchain is a distributed ledger enforced by a network of independent computers. | <urn:uuid:b054e1ec-1e37-4986-a100-bffbdb003598> | CC-MAIN-2022-40 | https://ecommercenews.asia/tag/cryptocurrency | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00321.warc.gz | en | 0.958079 | 99 | 2.625 | 3 |
Solid-state hard drives are newer, faster, and sometimes smaller than older traditional hard drives. If you've ever owned a computer, you've probably noticed they come with this tiny little spinning disk drive thingy--the one that makes all those annoying noises when you're trying to type up an awesome new blog post.
Even though these things have been around for quite some time now, we still get a lot of questions about them specifically. It's understandable: it's not your monitor slowing down your computer or your graphics card getting all hot and bothered; it is this piece of hardware hogging valuable resources and making funny noises intermittently.
The Hard Disk Drive vs The Solid State Drive
A hard disk drive (aka HD or HDD) is made up of moving parts -- a spindle that spins around and reads/writes data on little magnetic plates which are located inside a metal cage that protects the data. The spindle can break, and moving parts are more prone to failure than non-moving parts. Plus it has a mechanical motor that gets hot over time and will need replacement eventually. What's so great about solid-state drives? They use no moving parts at all. Instead, they store your data on flash memory chips -- millions of them! Flash memory is really neat
because while Solid State Drives are orders of magnitude faster than HDs where you have to wait for mechanical parts to move -- this means your computer boots up FASTER and programs startup FASTER . It's truly remarkable!
This newfangled solid-state drive (SSD) technology is one of the major advancements in the last few years. SSDs are faster and more reliable than their older counterparts. The reason an SSD is so much faster than a hard drive has to do with how data is stored. Hard drives store data on platters that spin around really fast--a lot like vinyl records or CDs that you play at home. This causes some problems: because these "records" are spinning, they will sometimes glitch by skipping over or repeating parts of your song/movie. In technology terms they're called "head crashes", and a computer suffers from this type of crash whenever it loses power unexpectedly or when it's just trying to read/write data super-fast.
Another reason that SSDs are faster is with a hard drive you have to write data on one single layer of the disk--this means that when files go missing, they stay gone for good (or until you run an expensive program to see if it can recover them). With an SSD, you're able to "map" sections of the entire drive and move things around so there's no loss in efficiency or speed. That means all your files will be easily accessible at all times because they're always where they need to be. And as far as reliability goes, solid-state drives use less power and are more reliable than hard drives due to their lack of moving parts and much smaller size.
SSDs, unlike their predecessors, are more resistant to physical damage because there aren't any moving parts. If you've ever dropped a laptop or desktop before, SSDs won't break, because there are no sensitive moving parts to get thrown out of place.
Hopefully, by now you know that SSD's are going to outperform hard drives whenever it comes to speed--but what about storage? SSD's have the same storage capacity options that HDD have. And although SSD's have come way down in price over the years, they are still a bit more expensive than their HDD counterparts.
So faster? check!
whatever storage space you need? check!
Solid State Drives are the clear winner here, even if they're a little more expensive on average.
Conclusion: Get the Solid State Drive, your future frustration tolerance will thank you!
Choosing between Solid State Drives and Internal Hard Drives is easy: if you're building your computer, get an SSD for your operating system and a traditional HDD for storage because the price is right! Call us now or book a diagnostic appointment to see if a solid state drive is right for your computer. | <urn:uuid:f8fd859d-c283-4afb-9385-384319a50943> | CC-MAIN-2022-40 | https://www.jbitabq.com/single-post/ssd-vs-hdd | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00321.warc.gz | en | 0.960414 | 837 | 3.046875 | 3 |
In this white paper, JetCool Technologies explores how microconvective cooling technology using cold plate design can provide future-ready flexibility for data centers.
Facebook announced a series of major investments in water conservation as it confirmed plans for an $800 million data center campus in Mesa, Arizona. The project had brought national attention to the data center industry’s stewardship of scarce water resources in drought-stricken areas.
Extreme heat and drought in the Western US is bringing sharper scrutiny of data center water use, and testing assumptions about climate in some data center destinations. The heightened awareness of water constraints is raising the bar for data center developers.
Data centers are known for large amounts of water and power usage. In fact, it takes a tremendous amount of energy and water to power and cool any data center. That’s according to a new report from Aligned Energy, that outlines how the company installed a soft water program to its fluid cooler system at its Phoenix data center to save 24 million gallons of water a year. | <urn:uuid:594bbb1e-6426-45ea-ba96-21a5b0bdc4da> | CC-MAIN-2022-40 | https://datacenterfrontier.com/tag/water-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00321.warc.gz | en | 0.920343 | 208 | 2.5625 | 3 |
Despite its reputation as the ultimate disruptor, cloud computing has been around – in one form or another – since before the Internet itself. The cloud symbol was used as early as 1977 in the original ARPANET, and its metaphorical representation has endured.
The Evolution Of Cloud Computing
In the late 2000s and into the 2010s, cloud computing was primarily discussed from a hosting position. Companies either migrated to a private cloud, public cloud, or a hybrid cloud model of computing.
Today, hybrid cloud adoption is at 58% — and growing, and almost 90% of enterprise organizations are deploying multicloud solutions.
The rise of as-a-service capabilities shifted the conversation, so we moved into categorizing cloud computing from a functional standpoint: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS).
SaaS continues to dominate the market, with thousands of startups and proven companies offering a wealth of SaaS solutions. PaaS and IaaS providers are fewer in number, due the increasing involvement of the IT stack (PaaS includes the application layer; IaaS includes OS layer).
So, what’s next? If the functional delivery mechanism of cloud technology is no longer about management, and instead revolves around straight-up accessibility, uptime, and availability, navigating on-premise vs. SaaS vs. PaaS vs. IaaS is almost a moot point.
It’s simple. The way we talk about the cloud is evolving.
What Is Everything-as-a-Service (XaaS)?
The reality is that ‘everything’ is serviceable today — that is anything in the IT sector can be delivered as a service via the internet. Everything-as-a-Service (XaaS) simply denotes the increasing servitization of technology.
Also known as anything-as-a-service, XaaS originated with the SaaS deployment model and now includes IaaS, PaaS, and even more functionally-specific models, such as storage-as-a-service, desktop-as-a-service (DaaS), and disaster-recovery-as-a-service (DRaaS).
Why Is XaaS On The Rise Now?
Public cloud services are booming. The reports on the cloud computing industry are impressive. Research and Markets found that global cloud computing revenue will reach $342 billion by 2025.
This data aligns with Gartner’s latest report as well.
The overall growth and percentage growth within each segment, particularly desktop-as-a-service (shown below), shows something more interesting when considered in tandem with multicloud deployments — which are also on the rise.
Back to our initial question. Why is XaaS (and DaaS, in particular) top-of-mind today? The obvious answer is the sudden rise of remote work, brought on by the global coronavirus pandemic. In addition, a Forbes insights report noted this:
“This Everything-as-a-Service (XaaS) business model—one which has helped companies in the B2B space generate continuous revenue from their products—is being eyed by consumer companies hungry for income that lasts beyond the initial product purchase. Through “servitization”—combining products with services—businesses can innovate faster and deepen their relationships with customers by providing more value. That value includes data insights derived from IoT-powered devices—from thermostats to wind turbines.”
How Will XaaS Continue To Evolve?
Moving to an Everything-as-a-Service offers multi-faceted benefits to companies in every industry:
- Move to opex model
- Lower total cost of ownership (TCO)
- Improved accessibility
- Continuous updates
- Improved security controls
- Maintained through economies of scale
- Enables scalability
- Faster implementation time
- Increased overall strategic IT team capabilities
Getting everyone to XaaS isn’t going to happen overnight, though.
Sid Nag, research director at Gartner, notes:
“As of 2016, approximately 17 percent of the total market revenue for infrastructure, middleware, application and business process services had shifted to the cloud. Through 2021, this will increase to approximately 28 percent.”
In this way, migrating to an XaaS model will look much like early migrations to public cloud providers and hybrid models.
It will be slow, there will be fear, and — in the end — it will happen.
Cloud Migrations In Your Future?
Cloud computing, like Gartner’s Nag indicates, isn’t going anywhere. Soon, everything-as-a-service will be the norm. Ensure you have the right cloud computing strategy with a partner that’s worked with cloud since those early days.
Our Cloud Insights Report is a mini-assessment, giving you insights into your current on-premise and cloud infrastructure along with top recommendations for optimizing your current and potential workloads.
Contact Mike Czerniak, Mindsight’s Vice President of Project Services, to get started today.
See what we can do for you. Contact us today.
Like what you read?
Mindsight is industry recognized for delivering secure IT solutions and thought leadership that address your infrastructure and communications needs. Our engineers are expert level only – and they’re known as the most respected and valued engineering team based in Chicago, serving emerging to enterprise organizations around the globe. That’s why clients trust Mindsight as an extension of their IT team.
Visit us at http://www.gomindsight.com.
About The Authors
Mike Czerniak is the Vice President of Project Services At Mindsight, an IT Services and Consulting firm located in the Chicago area. With 20 years of experience in information technology and the cloud, Mike has helped hundreds of organizations with architecting, implementing, and deploying cloud solutions. For the last 5 years, Mike has focused on providing Mindsight’s customers with guidance in approaching – and managing – the cloud. Mike is AWS, Microsoft Azure, VMware certified, and remains deeply invested in providing an agnostic, consultative voice for organizations on their cloud journey. In his free time, Mike enjoys biking with his 9-year old son, recently completing a 50-mile bike ride!
Siobhan Climer writes about technology trends in education, healthcare, and business. With over a decade of experience communicating complex concepts around everything from cybersecurity to neuroscience, Siobhan is an expert at breaking down technical and scientific principles so that everyone takes away valuable insights. When she’s not writing tech, she’s reading and writing fantasy, hiking, and exploring the world with her twin daughters. Find her on twitter @stbclimer. | <urn:uuid:2fb103ea-716a-4abb-9ebf-cf4f6da864f0> | CC-MAIN-2022-40 | https://gomindsight.com/insights/blog/everything-as-a-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00521.warc.gz | en | 0.938985 | 1,456 | 2.71875 | 3 |
We’ve just released a major update of cyber.dic, the spell checker add-on specializing in cybersecurity terms. It’s the latest resource to come out of a need we identified as editors at Bishop Fox: the need for more consistency in the language used across the cybersecurity industry.
The Bishop Fox editorial team initially made the Cybersecurity Style Guide as an attempt to make sense of the dynamic world of software and cybersecurity terminology for ourselves. As the guide was adopted by users across the internet, we learned that what had begun as a way to keep ourselves accurate, consistent, and forward-thinking within the company was also a useful tool for writers across all fields interacting with technology — journalists, developers, sci-fi writers, and so on.
Some researchers requested an open source version of the style guide, but allowing multiple versions would have quickly diluted the guide’s power as a tool for resolving language conflicts. Instead, we turned to a more adaptive format to address this request: the spellcheck dictionary. We made a companion file, called the cyber.dic, that would add spellcheck support for industry-specific terms in people’s word processors. This way, they wouldn’t need to check the style guide for basic questions or second-guess their own expertise on technical spellings.
The guide, and the cyber.dic that evolved from it, have been a way to extend editors’ ability to communicate further than we ever could by just working with individual authors. The guide is an aid for people to actively reference outside the document they are working on; the cyber.dic instills confidence and helps writers within the document as they are writing. The only catch is that the cyber.dic has limited means of communicating with a writer: It can only indicate a word’s correctness by adding or not adding a squiggly red underline.
WHAT SPELL CHECKERS USUALLY DO
Practically every word processor in use today has some kind of spell checker that determines if you’ve misspelled a word or committed some sort of grammar sin. They are usually based on some established dictionary. For example, Microsoft Word’s documentation shows that its proprietary dictionary pulls from the American Heritage Dictionary and World English Dictionary. Meanwhile, LibreOffice uses an open source format and approach to crowdsource continuous refinement.
But our point is, spell checkers aren’t a panacea for spelling problems: Like any software, they can be as flawed as the people who create them. If you’ve heard of the Cupertino effect, then you’ll remember how some early spellcheck dictionaries supported the spelling “co-operation” but not “cooperation”. Anyone who typed the latter found that it was autocorrected to “Cupertino,” the California city that Apple calls home.
Installed spell checkers aren’t typically at the forefront of technological developments either. While they correctly highlight misspellings and give valid suggestions for many common types of writing, documents about tech and cybersecurity typically end up riddled with red underlines that may or may not be necessary or accurate. As a result, the underlining can be an unwarranted distraction, undermine your confidence in the subject matter, and slow your momentum when drafting a document.
The same problem applies to words that don’t get underlined in technical documents; the spell checker may not properly point out misspellings that happen to be dictionary words. Because automatic spell checkers have to cater to the entire range of possible topics someone might write about, awkward situations can arise with technological subject matter. For instance, an accountant might write about an asset’s value depreciating over time, but a security analyst more typically discusses deprecating an old, vulnerable piece of software.
SPELL CHECKING WITH CYBER.DIC
The cyber.dic is meant to account for the limitations of default word processor dictionaries. We’ve provided a supplemental list of over 3,000 terms that adds onto your regular spell checker, along with an exclusion file that acts as an “anti-dictionary” to underline anything that should be flagged as potentially wrong. Here’s a brief overview of how cyber.dic enhances the built-in spellcheck dictionary.
What does it mean when there’s a red underline?
- You misspelled a term that is in the cyber.dic-augmented spellcheck dictionary.
- You spelled a real word that isn’t typically used in tech writing and that you should double-check (e.g., depreciate, breech).
- You spelled a term correctly that isn’t yet in cyber.dic. (Email [email protected] with your suggestions.)
The red underline is more likely to be valid. Maybe you spelled the name of a technology slightly different from how it’s meant to be, or you used a version of a compound word that isn’t consistent with how most people in the industry spell it. It also tailors the spell checker to catch typos that are valid words used in the wrong context. Meanwhile, if a rare term is actually correct but isn’t in the cyber.dic, then you can easily add it to your custom dictionary.
What does it mean when there’s no underline?
- You spelled a term correctly that is in your cyber.dic-augmented spellcheck dictionary.
Success! Your document is no longer littered with unnecessary distractions, and you can confidently navigate through the landscape of acronyms like GDPR, SMTP, and SNMP; objects like YubiKeys and Torx screws; and activities like Zoombombing, sinkholing, and safelisting.
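To make that more concrete, here is a hypothetical excerpt of the kinds of entries involved. A supplemental dictionary is ultimately just a plain-text list of terms the spell checker should accept, one per line; the exact header lines and encoding vary by word processor, so treat this as a sketch rather than the literal contents of cyber.dic:

```
GDPR
MitM
safelisting
sinkholing
YubiKey
Zoombombing
```

The exclusion file is the same idea in reverse: real dictionary words that deserve a second look in technical writing, with every form listed separately so none of them slips through unflagged:

```
breech
breeches
depreciate
depreciated
depreciates
depreciating
```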
STYLE GUIDE VS. CYBER.DIC
The Cybersecurity Style Guide is a manual in two ways:
- It is analog and meant for humans to use.
- It is meant to guide writers with explicit instructions on what words to use and how to use them.
It might seem at first glance that turning a style guide’s word list into a spellcheck dictionary would be a simple copy/paste action, but a lot more actually went into that transition.
The cyber.dic is not like the style guide in two important ways:
- It is not meant for a person to read. It is meant to be consumed by a computer program and used as a set of instructions with binary implications (red line or no red line).
- It only implicitly instructs a writer on spelling, filtered through the spellcheck software in use.
The table below gives an overview of the changes and additions that went into the cyber.dic’s word list:
|   | THINGS STYLE GUIDE HAS | THINGS STYLE GUIDE DOESN’T HAVE |
| --- | --- | --- |
| THINGS CYBER.DIC NEEDS | A big, curated word list of terms used in security, programming, and corporate discussions | Plural, conjugated, and possessive forms of every word that has those forms; words categorized based on whether they contain spaces, hyphens, punctuation, etc. |
| THINGS CYBER.DIC DOESN’T NEED | Supplementary information (pronunciation, meaning, related terms); terms we recommend against using (“abuse,” “segregate”) | Most things in this universe |
THE MAKING OF THE DICTIONARY
After adapting the initial style guide word list, there was still a lot left to do. We also had to learn a bit about how spellcheck software works.
Step 1: Research the Mechanics
Spellcheck dictionaries designed for specific users aren’t uncommon — academics and scientists have passed around field-specific word lists for years.
However, the resources for actually creating a processor-specific, user-defined dictionary were surprisingly sparse. We gathered tidbits of information about the inner workings of various spell checkers by wading through existing sources on how to make a dictionary, ranging from the banal (right-click and add word) to full-blown language building with Hunspell. Some of our most useful resources were blog posts by a few intrepid superusers who had previously researched the two word processors we wanted to support: LibreOffice Writer and Microsoft Word. In particular, we have to give a shout-out to Bob Mesibov* for his detailed technical guidance for the former and Suzanne S. Barnhill** for her expert documentation on the latter.
The process of getting the final product on GitHub would be simple once we found the right instructions online, right? We pieced together export processes for the word lists that got down to the technical detail of how to encode the text file and how to order the terms. Then, we wrote out installation instructions by aggregating information from existing sources. Done?
Step 2: Test the Limits
There was still one important technical problem that we could not find answers for anywhere: Seemingly no one had documented how spellcheck dictionaries would react to non-traditional terms that may or may not look like typical words. Our cybersecurity-specific word list included words with unexpected symbols (ATT&CK), words with hard capitalization rules (MitM), and terms that consisted of more than one word (RC4 NOMORE). Before finalizing terms in the dictionary, we needed to make sure they didn’t cause some unexpected disaster like trapping the spell checker in an infinite loop.
We began throwing real and nonsense words into dictionary files, typing weird sentences into documents, and observing the spell checkers’ reactions. The charts we built to decipher the results got longer as we found more edge cases to test, and inconsistent results from Microsoft’s spell checker were sometimes confounding.
We first had to figure out the general logic of spellcheck functionality, considering the following questions for each spell checker:
- How does it resolve conflicts? If the same word is included in a custom dictionary and exclusion file, does it get underlined as incorrect? (In Microsoft, no. In LibreOffice, yes.) If a real word that already exists in the built-in dictionary is added to the exclusion file, will the spell checker honor the exclusion? (Yes, but you have to include every form: initial caps, plurals, verb forms, etc.)
- How does it treat capitalization? If a dictionary includes an all-lowercase string, will the spell checker automatically accept the version of it with an initial uppercase character? (In LibreOffice, yes. In Word, dictionaries accepted it but exclusion files did not.) What about the all-caps version? (Sometimes.) What about the all-lowercase version of a term with initial caps? (No.)
- Do the same rules apply to dictionaries and exclusion files? (A resounding “no.”)
- What happens to non-letter characters in a word?
The question of non-letter characters was particularly important to figure out. Would a spell checker break when it encountered terms like ATT&CK, C#, HTTP/2, and ASN.1? We categorized the possible non-letter characters like this:
- Standard punctuation marks: periods, commas, apostrophes
- Special characters: symbols like @, &, #, /
The conclusions were, again, complicated. Word accepted the weirdest characters but did not allow spaces. LibreOffice, meanwhile, was perfectly fine with spaces but did not understand special characters or punctuation inside a term.
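As an illustration of how such rules can be turned into code, here is a minimal Python sketch. It is not the actual cyber.dic tooling, and the character sets and rules are simplified assumptions based only on the behavior described above.

    # Simplified compatibility rules for custom-dictionary terms, based on the
    # observed behavior described above: Word tolerates special characters but
    # not spaces; LibreOffice tolerates spaces but not special characters or
    # punctuation inside a term. Illustrative sketch only, not the real tool.

    SPECIAL = set("@&#/")   # assumed "special characters"
    PUNCT = set(".,'")      # assumed "standard punctuation"

    def compatible_targets(term: str) -> list[str]:
        targets = []
        has_space = " " in term
        has_special = any(ch in SPECIAL for ch in term)
        has_inner_punct = any(ch in PUNCT for ch in term.strip(".,'"))
        if not has_space:                                # Word: no spaces allowed
            targets.append("Word dictionary")
        if not has_special and not has_inner_punct:      # LibreOffice: plain terms only
            targets.append("LibreOffice dictionary")
        return targets

    for term in ["ATT&CK", "C#", "ASN.1", "RC4 NOMORE", "honeytoken"]:
        print(term, "->", compatible_targets(term) or ["needs special handling"])

The real rules live in spreadsheet formulae, as described next, but the shape of the decision is the same: classify each term, then route it only to the dictionaries that can cope with it.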
After verifying the technical capabilities of each word processor, it was time to codify the rules as conditional statements in Excel. We imported our already culled word list and began noting special cases. Here’s a comparison of how that spreadsheet looked between preparing for initial release and the more robust version with automation that we used to develop cyber.dic v2:
Our formulae have changed as we’ve discovered more exceptions over time and as updates to the software have caused subtle changes in how spell checkers function.
Step 3: Complete the Word Lists
The last part of the technical puzzle was how to make our dictionary and exclusion word lists as complete as possible while working inside the technical boundaries of the spellcheck software.
We went through the entire list multiple times to add variants of terms. On launch, we only included plurals for some singular terms (honeytoken/honeytokens), but with v2 we have considered all forms for all terms, adding verb forms (spidered/spidering) and possessives for proper nouns (GitHub/GitHub’s). We also paid more attention to terms with spaces that were somewhere in between a word and a phrase, splitting up those terms into two entries to account for Word’s notable inability to handle words with spaces in them (CIS CSC).
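Here is a rough sketch of that expansion step in Python; the actual work was done in spreadsheet formulae with manual review, and the pluralization rules below are deliberately naive.

    # Naive variant expansion for a custom word list: add plural or possessive
    # forms and split multi-word terms into individual words for spell checkers
    # that cannot handle spaces. Real English morphology needs curation, which
    # is why the cyber.dic list was still reviewed by hand.

    def expand(term: str, proper_noun: bool = False) -> set[str]:
        forms = {term}
        if " " in term:
            forms.update(term.split())    # e.g., "CIS CSC" also yields "CIS", "CSC"
            return forms
        if proper_noun:
            forms.add(term + "'s")        # e.g., "GitHub" -> "GitHub's"
        else:
            suffix = "es" if term.endswith(("s", "x", "ch", "sh")) else "s"
            forms.add(term + suffix)      # e.g., "honeytoken" -> "honeytokens"
        return forms

    print(expand("honeytoken"))
    print(expand("GitHub", proper_noun=True))
    print(expand("CIS CSC"))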
We also used the mighty exclusion file to overcome some limitations. For example, the style guide uses the term web server with a space, but Word’s built-in dictionary allows webserver without the space. Because we couldn’t simply add web server as a single term, we instead included variations of webserver in the exclusion file to override Word’s automatic spelling. This included webserver, webservers, and, because it wouldn’t account for capitalization automatically, Webserver and Webservers.
Working with and around the strange host of limitations sometimes involved extreme mental gymnastics, and there are still some things that are easily understood by one word processor that we simply cannot add to the other processor’s version of cyber.dic. We like to think that with v2, we’ve managed to troubleshoot most of the problems in v1 and added a richer word list to make the cyber.dic more useful for everyone.
We started out imagining the cyber.dic as an extension of the style guide, but ultimately a spellcheck dictionary is a very different tool for a very different use case. Our approach to creating the latest version of cyber.dic has matured greatly since the initial release. As creators and users of the dictionary over the past year, we have learned its flaws, removed what doesn’t work, and fixed our methodology.
The original cyber.dic included 1,786 terms for LibreOffice Writer and 1,314 terms for Microsoft Word. Since then, it has more than doubled in size with terms from style guide updates and from our experiences using it, and we have devised a more standardized, systematic way to determine how to add terms. We stopped trying to pack in all of the style guide’s nuances and embraced the simpler communication style of the spellcheck dictionary format so that we could maximize our tool’s flexibility and give you a smoother user experience. (It looks like Microsoft has been tweaking its built-in spellcheck dictionaries, too — we like to think we may have given them a little nudge.) We hope that you find cyber.dic to be a valuable asset and that it helps you feel more confident as you write about tech and security.
Introducing cyber.dic (September 2019)
The Bishop Fox Cybersecurity Style Guide (June 2018)
The COVID-19 symptoms that take center stage are those associated with pneumonia and respiratory distress, but there is also a host of symptoms that relate to the central nervous system. Although it is unclear whether or not the SARS-CoV-2 virus can enter the brain, a new study found that the spike protein can cross the blood–brain barrier (BBB) in mice, strongly suggesting that the virus can as well.
The research is published in Nature Neuroscience in the paper, “The S1 protein of SARS-CoV-2 crosses the blood–brain barrier in mice.”
Coronaviruses, including the closely related SARS virus that caused the 2003–2004 outbreak, have been reported to be able to cross the BBB. | <urn:uuid:01f6754f-2101-4bbc-8fba-d5ff4834dabb> | CC-MAIN-2022-40 | https://biopharmacurated.com/sars-cov-2s-spike-protein-can-enter-the-brain-in-mice/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00521.warc.gz | en | 0.947603 | 155 | 3.03125 | 3 |
Is the US electrical grid, a 70-year-old behemoth, equipped to handle the load of nearly 607,000 new electric vehicles (EVs) on the roads? As a security guy working in critical national infrastructure (CNI), I wonder. I know the threats facing the US power grid and see it struggling against an already strained capacity. So let's stress-test the grid against that load, see if it can handle all those EVs, and discuss what can be done if not. As the one CNI sector that singlehandedly underpins nearly all the others, it matters.
According to BloombergNEF, there will be nearly 1 million new EVs per month — or one about every three seconds. In the UK, 17% plan to buy an electric vehicle in the next year and nearly 70% would do so if money were no object. It's no surprise, therefore, that BloombergNEF predicts more than 26 million EVs on the road by the end of 2026. An insurance consultancy estimates there will be roughly 4 million in California alone by 2030, and a report by BloombergNEF predicts that by 2040, nearly 60% of global passenger vehicle sales will be electric. From a purely emissions-based standpoint, it's almost too good to be true. But what are the consequences?
Too Big for the Grid
The grid also faces the challenge of supporting a fast-growing fleet of EVs and plug-in hybrids (PHEVs) that may overextend its current capacity.
The current grid is something of a marvel, made up of 9,200 generating units, 600,000 miles of transmission lines, and more than 1 million megawatts of generating capacity. However, it was built back in the '60s, when a household's electrical needs amounted to a few lightbulbs and a toaster. Now, think about an average Thursday night: your kids are home, there are TVs going in every room, you're running a load of wash, nobody remembered to turn off the lights in the bathroom (of course), someone's gaming, someone's streaming, someone's microwaving something, your toddler is talking to Alexa, and you're charging your Tesla. Now add 26 million more Teslas and you see the problem.
Plus, the current grid was built to give energy, not receive it. That becomes an issue as new sustainable sources of energy, like wind turbines, solar panels, and (yes) electric vehicles, put energy back into the system. We're already forcing the grid far beyond its intended use; to go any further, some suggest switching to a smart grid, which unlike the current infrastructure can both give and receive power and would have much larger capacity than what we have now. Large loads, like EV charging stations, heating and cooling systems, and football stadiums, can crash the grid, bringing exactly the kind of instability we're trying to avoid and being generally bad for business.
Adopting a transactive approach, in which the grid and the devices connected to it coordinate when and how much power flows, can help offset the overall impact of electric vehicles on the power grid and keep things running smoothly. If done right, it will be more energy-efficient, able to load-balance, and more stable. If we're facing a future where nearly 60% of all cars will require a charging station, then a new grid, or focused improvements to the one we have, is not only nice but needed.
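To make the load-balancing idea concrete, here is a toy Python sketch with invented numbers. It only illustrates shifting EV charging away from the evening peak; it is not a model of any real feeder, tariff, or transactive protocol.

    # Toy load shifting: a neighborhood feeder with fixed capacity, a baseline
    # household load by hour, and a fleet of EVs that each need a few hours of
    # charge. Charging hours are assigned greedily to the hours with the most
    # headroom instead of "everyone plugs in at 6 pm". All numbers are invented.

    FEEDER_CAPACITY_KW = 500
    BASE_LOAD_KW = {h: 300 if 17 <= h <= 21 else 150 for h in range(24)}  # evening peak
    EV_CHARGE_KW = 7       # assumed per-vehicle charging rate
    HOURS_NEEDED = 3       # assumed hours of charge each EV needs

    def schedule(num_evs: int) -> dict[int, int]:
        load = dict(BASE_LOAD_KW)
        plan = {h: 0 for h in range(24)}       # EV charging slots used in each hour
        for _ in range(num_evs * HOURS_NEEDED):
            h = min(load, key=load.get)        # hour with the most headroom
            if load[h] + EV_CHARGE_KW > FEEDER_CAPACITY_KW:
                raise RuntimeError("feeder capacity exceeded even after shifting")
            load[h] += EV_CHARGE_KW
            plan[h] += 1
        return plan

    print(schedule(40))    # most charging lands overnight, not in the 6 pm peak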
Securing the Grid Against EVs
Besides overload, the biggest challenge EVs bring to the grid is security. They're huge Internet of Things devices on wheels, and the liability couldn't be higher. As of now, the IoT still reflects a not-so-distant past in which technology flew off the line with minimal (if any) security controls. Yes, there are laws now, but with cloud connectivity, remote access, and various app integrations that may or may not meet the same standards of security, the risks remain.
And according to Yury Dvorkin, an electrical and computer engineering expert at New York University, charging stations can be entry points for cyberattacks directed at the American energy grid. All it takes is one weak point in the giant, interconnected network of an electric vehicle and soon a hacker can have access to the US energy supply.
As Lear Corp.'s Andre Weimerskirch has pointed out, "An electric vehicle has far more hardware chips and software components than an internal combustion engine. More complexity means we need to be more careful around security in general."
My suggestion to energy providers would be to not wait — shore up your cybersecurity posture against a time when less-than-secure EVs hit the market. I imagine it will be like a second IoT wave (quite literally): hastily added devices released with only secondary thought to security and the onus falling primarily on the user. If you're going to allow EVs — and all the connectivity, technology, and vulnerabilities they bring — anywhere near your power utility, learn the risks and build your cybersecurity strategy around government standards for the energy industry. | <urn:uuid:904934d0-6bf7-48eb-9e46-be07ab6c31b7> | CC-MAIN-2022-40 | https://www.darkreading.com/attacks-breaches/how-to-keep-evs-from-taking-down-the-electrical-grid | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00521.warc.gz | en | 0.953951 | 1,038 | 2.578125 | 3 |
How can SD cards, USB sticks, and other mobile storage devices be exploited for hacking? Another interesting hack was presented at the Chaos Computer Congress (30C3) in Hamburg, Germany. Yesterday I published a post on an attack against ATMs via infected USB sticks; today I'm writing about the hacking of SD card flash storage.
The researchers demonstrated how to hack the microcontroller inside SD and microSD flash cards to mount a man-in-the-middle attack.
SD cards contain powerful microcontrollers that hackers can exploit, making the cards insecure.
The hacker Bunnie Huang described the procedure and also published a post on the topic. It seems that, to reduce the price of SD cards and increase their storage capacity, engineers have to contend with a form of internal entropy that can affect data integrity on every flash drive.
Almost every NAND flash memory is affected by defects and presents problems like electron leakage between adjacent cells.
“Flash memory is really cheap. So cheap, in fact, that it’s too good to be true. In reality, all flash memory is riddled with defects — without exception. The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale.” wrote Huang.
A hacker could exploit the firmware loading mechanism, usually used only at the factory, to load malicious code, a technique widely adopted by counterfeiters who create SD cards that report a larger capacity than they actually have.
The firmware on SD cards can be updated, but according to Huang's findings, most manufacturers leave this update functionality unsecured.
During the presentation at 30C3, the hacker showed how he reverse-engineered the instruction set of a particular microcontroller to inspect its firmware loading mechanism.
By suitably modifying the firmware, attackers could hack any device that uses the compromised SD card (e.g., a mobile device or a Wi-Fi-equipped camera); the flash memory will appear to be operating normally while attacking the host equipment.
The SD card could make a copy of its contents in a hidden memory area, or it could run malicious code while idle, evading detection mechanisms.
When we speak about USB or SD card hacking, we must consider that we are approaching hacking on a large scale because of the wide diffusion of these components. Microcontrollers cost as little as 15¢ each in quantity; they are everywhere, and every device that uses them could be hacked.
Another consideration is that governments and high-profile hackers could be very interested in this type of attack for both cyber espionage and sabotage purposes, and arranging countermeasures against these types of threat is very hard.
A curiosity for the “hackers inside”… these cards could be reprogrammed to become cheap, Arduino-like open source microcontroller and memory systems.
“An Arduino, with its 8-bit 16 MHz microcontroller, will set you back around $20. A microSD card with several gigabytes of memory and a microcontroller with several times the performance could be purchased for a fraction of the price,” he writes.
Look closely at the presentation… and be distrustful of SD cards from now on.
(Security Affairs – SD card, hacking) | <urn:uuid:c9cb1ffa-11c4-43c2-9d26-76d83dc3a774> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/20868/hacking/sd-card-ill-hack-system.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00521.warc.gz | en | 0.934809 | 718 | 2.84375 | 3 |
Anyone with hands-on experience setting up long-haul VPNs over the Internet knows it’s not a pleasant exercise. Even factoring out the complexity of appliances and the need to work with old relics like IPSEC, managing latency, packet loss and high availability remain huge problems. Service providers also know this — and make billions on MPLS.
The bad news is that it is not getting any better. It doesn’t matter that available capacity has increased dramatically. The problem is in the way providers are interconnected and with how global routes are mismanaged. It lies at the core of how the Internet was built, its protocols, and how service providers implemented their routing layer. The same architecture that allowed the Internet to cost-effectively scale to billions of devices also set its limits.
Addressing these challenges requires a deep restructuring in the fabric of the Internet and core routing – and should form the foundation for possible solutions. There isn’t going to be a shiny new router that would magically solve it all.
IP Routing’s Historical Baggage: Simplistic Data Plane
Whether the traffic is voice, video, HTTP, or email, the Internet is made of IP packets. If they are lost along the way, it is the responsibility of higher-level protocols such as TCP to recover them. Packets hop from router to router, only aware of their next hop and their ultimate destination.
Routers are the ones making the decisions about the packets, according to their routing tables. When a router receives a packet, it performs a calculation according to its routing table – identifying the best next hop to send the packet to.
From the early days of the Internet, routers were shaped by technical constraints. There was a shortage of processing power available to move packets along their path, or data plane. Access speeds and available memory were limited, so routers had to rely on custom hardware that performed minimal processing per packet and had no state management. Communicating with this restricted data plane was simple and infrequent.
Routing decisions were moved out to a separate process, the control plane, which pushed its decisions, finding the next router on the way to the destination, back into the data plane.
This separation of control and data planes allowed architects to build massively scalable routers, handling millions of packets per second. However, even as processing power increased on the data plane, it wasn’t really used. The control plane makes all the decisions, the data plane executes the routing table, and apart from routing table updates, they hardly communicate.
A modern router does not have any idea how long it actually took a packet to reach its next hop, or whether it reached it at all. The router doesn’t know if it’s congested. And to the extent it does have information to share, it will not be communicated back to the control plane, where routing decisions are actually made.
BGP – The Routing Decisions Protocol
BGP is the routing protocol that glues the Internet together. In very simple terms, its task is to communicate the knowledge of where an IP address (or a whole IP subnet) originates. BGP involves routers connecting with their peers, and exchanging information about which IP subnets they originate, and also “gossip” about IP subnets they learned about from other peers. As these rumors propagate between the peers and across the globe, they are appended with the accumulated rumor path from the originator (this is called the AS-Path). As more routers are added to the path, the “distance” grows.
Here is an example of what a router knows about a specific subnet, using Hurricane Electric’s excellent looking glass service. It learned about this subnet from multiple peers, and selected the shortest AS-Path. This subnet originates from autonomous system 13150, the rumor having reached the router across system 5580. Now the router can update its routing table accordingly.
If we want to see how traffic destined for this IP range is actually routed, we can use traceroute. Note that in this case, there was a correlation between the AS-Path and the path the actual packets traveled.
BGP is a very elegant protocol, and we can see why it was able to scale with the Internet: it requires very little coordination across network elements. Assuming the routers performing the protocols are the ones that are actually routing traffic, it has a built in resiliency. When a router fails, so will the routes it propagated, and other routers will be selected.
BGP has a straightforward way of assessing distance: it uses the AS-Path, so if it got the route first-hand it is assumed to be closest. Rumored routes are considered further away as the hearsay “distance” increases. The general assumption is that the router that reported the closest rumor is also the best choice for sending packets. BGP doesn’t know if a specific path has 0% or 20% packet loss. Also, using the AS-Path as a method to select the smallest latency is pretty limited: it’s like calculating the shortest path between two points on a map by counting traffic lights, instead of miles, along the way.
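To make that selection logic concrete, here is a toy Python sketch. It illustrates only the shortest-AS-path idea, not a real BGP implementation (real routers weigh many more attributes), and the example routes are invented.

    # Toy "BGP-style" best-path selection: for each advertised prefix, keep the
    # route with the shortest AS path. This mirrors the "count the traffic
    # lights" heuristic above; it knows nothing about latency or packet loss.

    routes = [
        # (prefix, AS path as heard from a peer) -- invented example data
        ("203.0.113.0/24", [5580, 13150]),
        ("203.0.113.0/24", [6939, 4134, 13150]),
        ("198.51.100.0/24", [3356, 2914, 7018]),
    ]

    best = {}
    for prefix, as_path in routes:
        if prefix not in best or len(as_path) < len(best[prefix]):
            best[prefix] = as_path

    for prefix, as_path in best.items():
        print(f"{prefix}: via AS path {as_path} (length {len(as_path)})")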
A straightforward route between Hurricane Electric (HE), a tier-1 service provider, as seen from Singapore, to an IP address in China, has a path length of 1.
But if we trace the path the packets actually take from Singapore to China, the story is really different: packets seem to make a “connection” in Los Angeles.
This packet traveled to the West coast of the U.S. to get from Singapore to China simply because HE peers with China Telecom in Los Angeles. Every packet from anywhere within the HE autonomous system will go through Los Angeles to reach China Telecom.
BGP Abused: BGP Meets the Commercial Internet
To work around BGP’s default behavior, the protocol has been extended to include a host of manual controls that allow manipulation of the “next best hop” decisions. Controls such as weight, local preference (prioritizing routes from specific peers), communities (which allow peers to add custom attributes that may then affect the decisions of other peers along the path), and AS path prepending (which manipulates the propagated AS path) let network engineers tweak and improve problematic routes and alleviate congestion issues.
The relationship between BGP peers on the Internet is a reflection of commercial contracts of ISPs. Customers pay for Internet traffic. Smaller service providers pay larger providers, and most pay tier-1 providers. Any non-commercial relationship has to be mutually beneficial, or very limited.
BGP gives service providers the tools to implement these financial agreements:
- Service providers usually prefer routing traffic for “paying” connections.
- Service providers want to quickly get rid of “unpaid” packets, rather than carrying them across their backbone (so called “hot potato” routing).
- Sometimes, service providers will carry the packets over long distances just to get the most financially beneficial path.
All this comes at the expense of best path selection.
The MPLS Racket
To address these problems, service providers came up with an alternative offering: private networks, built on their own backbones, using MPLS as the routing protocol.
MPLS is in many ways the opposite of BGP. Instead of an open architecture, MPLS uses policy-based, end-to-end routing. A packet’s path through the network is predetermined, which makes it suitable only for private networks. This is why MPLS is sold by a single provider, even if the provider patched together multiple networks behind the scenes to reach customer premises.
MPLS is a control plane protocol. It has many of the same limitations as BGP: routing is decided by policy, not real traffic conditions, such as latency or packet loss. Providers are careful about bandwidth management to maintain their SLAs.
The combination of single vendor lock-in and the need for planning and overprovisioning to maintain SLAs make these private networks a premium, expensive product. As the rest of the Internet, with its open architecture, became increasingly competitive and cost-efficient, MPLS faces pressure. As a backbone implementation, it is not likely to ever become affordable.
A Way Forward
The Internet just works. Not flawlessly, not optimally, but packets generally reach their destination. The basic structure of the Internet has not changed much over the past few decades, and has proven itself probably beyond the wildest expectations of its designers.
However, it has key limitations:
- The data plane is clueless. Routers, which form the data plane, are built for raw traffic load; they are therefore stateless and have no notion of individual packets or traffic flows.
- Control plane intelligence is limited. Because the control plane and the data plane are not communicating, the routing decisions are not aware of packet loss, latency, congestion, or actual best routes.
- Shortest path selection is abused: Service providers’ commercial relationships often work against the end user interest in best path selection.
The limited exchange between the control and data planes has been taken to the extreme in OpenFlow and Software-defined Networking (SDN): the separation of the control plane and data plane into two different machines. This might be a good solution for cutting costs in the data center, but to improve global routing, it makes more sense to substantially increase information sharing between the control plane and the data plane.
To solve the limitations of the Internet it’s time to converge the data and control planes to work closely together, so they are both aware of actual traffic metrics, and dynamically selecting the best path.
This article was first published on Tech Zone 360 | <urn:uuid:abde5d7d-8ae1-4bf7-891a-ad17bb4bf8f8> | CC-MAIN-2022-40 | https://www.catonetworks.com/blog/the-internet-is-broken/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00721.warc.gz | en | 0.946946 | 2,019 | 2.828125 | 3 |
Exelis has demonstrated six transmitter assemblies that are built to function as payload components for the U.S. Air Force's first set of GPS III satellites.
The transmitters were tested in a simulated space environment for random vibration, pyroshock and thermal vacuum to qualify for the GPS III's mission requirements, Exelis said Tuesday.
McLean, Va.-based Exelis built the transmitters to send GPS signals between space and Earth for military, commercial and civilian users.
Mark Pisani, vice president and general manager of positioning, navigation and timing business area for Exelis Geospatial Systems, said the navigation payload transmitters will be replicated for the next space vehicles.
The GPS III team led by the Air Force's GPS directorate is developing the navigation payload in Clifton, N.J. | <urn:uuid:ecc6ba72-f44a-40d2-92b5-0e26b5dff6c3> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2014/03/exelis-tests-air-force-gps-iii-transmitter-assemblies-mark-pisani-comments/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00721.warc.gz | en | 0.928791 | 168 | 2.578125 | 3 |
Do you really think you are safe from web vulnerabilities or that they are just minor problems?
A few days ago Sophos, one of the world’s most renowned security companies, found an SQL Injection in one of their products. What is worse, they found the vulnerability because malicious hackers had been using it to attack their customers.
What Happened to Sophos?
Sophos discovered that malicious hackers mounted attacks on their hardware product called Sophos XG Firewall. The vulnerability that allowed them to do so turned out to be an SQL Injection. This vulnerability, in turn, led to another very serious issue: remote code execution.
Attackers were able to use this SQL Injection to download the Asnarok trojan (read the whole technical description here). This trojan was then able to steal the login credentials of firewall users.
The vulnerability has been hotfixed and all users of the Sophos XG Firewall have been asked to download the firmware update.
What Does This Mean to You?
- If a security giant such as Sophos can fall victim to an SQL Injection and RCE, so can you. Not to mention other vulnerabilities.
- SQL Injections have been known for more than 20 years and most programming languages have countermeasures. And still, they happen (see the sketch after this list).
- An SQL Injection can lead to someone taking over your system and installing a trojan on it. But it can have even more fatal consequences.
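As a reminder of what the underlying mistake usually looks like, here is a minimal, generic sketch in Python with SQLite. It is unrelated to the Sophos firewall code, which has not been published; it simply contrasts string concatenation with a parameterized query.

    import sqlite3

    # Generic illustration of SQL Injection, not related to any Sophos product.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

    user_input = "' OR '1'='1"   # attacker-controlled value

    # VULNERABLE: user input is concatenated straight into the SQL statement.
    query = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(query).fetchall())          # returns every row

    # SAFER: the value is passed as a bound parameter, never parsed as SQL.
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (user_input,)).fetchall())  # returns nothing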
What Can You Do?
The only way to protect yourself against such attacks is to regularly check for vulnerabilities. Of course, you can do it manually, performing penetration testing, but it’s much more efficient to automate the process with a vulnerability scanner. And Acunetix does it best. So give us a try.
Having a clear understanding of where your data is being consumed is a critical first step toward being able to secure and ultimately protect it. Using data flow diagrams, it is possible to know the flow of data through each of the systems and processes being used within your organization.
Though often used during the development of a new software system to aid in analysis and planning, data flow diagrams give unparalleled insight into every instance where data is potentially vulnerable.
Anatomy of a Data Flow Diagram
Data flow diagrams visually detail data inputs, data outputs, storage points, and the routes between each destination.
Components of a Data Flow Diagram
- Entities – Show the source and destination for the data. They are generally represented by a rectangle.
- Process – A task performed on the data is referred to as a process. Circles in a data flow diagram indicate processes.
- Data Storage – Data is generally stored in databases, which appear in data flow diagrams as a rectangle with the shorter sides missing.
- Data Flow – Shows the movement of data between entities, processes, and stores with the help of lines and arrows (see the sketch after this list).
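To show how little machinery this takes, here is a minimal Python sketch of a data flow diagram captured as plain data that can then be queried or rendered. The entities, processes, and flows are hypothetical examples, not a recommended tool or schema.

    # A data flow diagram reduced to plain data: entities, processes, data
    # stores, and the flows between them. All names below are hypothetical.

    entities = {"Customer", "Payment Gateway"}       # external sources/destinations
    processes = {"Place Order", "Process Payment"}   # work performed on data
    stores = {"Orders DB"}                           # where data is kept

    flows = [
        # (source, destination, data description)
        ("Customer", "Place Order", "order details"),
        ("Place Order", "Orders DB", "new order record"),
        ("Place Order", "Process Payment", "payment request"),
        ("Process Payment", "Payment Gateway", "card transaction"),
    ]

    # Example query: every flow whose destination is an external entity,
    # i.e., data that leaves the organization's boundary.
    for src, dst, data in flows:
        if dst in entities:
            print(f"External flow: {data} from '{src}' to '{dst}'")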
Logical Vs. Physical Data Flow Diagrams
There are two primary types of data flow diagrams, each with a specific function and designed to inform a different target audience.
Logical data flow diagrams
Logical data flow diagrams illustrate how data flows in a system, with a focus on the business processes and workflows. With a focus on how the business operates at a high level, logical data flow diagrams are a great starting point, providing the outline needed to create more detailed physical data flow diagrams.
Benefits of logical data flow diagrams:
- Provide an overview of business information with a focus on business activities
- Less complex and faster to develop
- Less subject to change because business functions and workflows are normally stable processes
- Easier to understand for end-users and non-technical stakeholders
- Identify redundancies and bottlenecks
Physical data flow diagrams
Physical data flow diagrams provide detailed implementation information. They may reference current systems and how they operate, or may project the desired end-state of a proposed system to be implemented.
Physical data flow diagrams offer a number of benefits:
- Sequences of activities can be identified
- All steps for processing data can be described
- Show controls or validating input data
- Outline all points where data is accessed, updated, retrieved, and backed up
- Identify which processes are manual, and which are automated
- Provide detailed filenames, report names, and database field names
- Lists all software and hardware participating in the flow of data, including any security-related appliances
Strategies For Developing Data Flow Diagrams
Avoid feeling overwhelmed by the creation of a data flow diagram by following a few simple strategies.
- Begin with lists of all business activities, vendors, ancillary systems, and data stores that need to be included.
- Take each list and identify the data elements needed, received, or generated.
- Always include steps that initiate changes to data or require decisions be made, but avoid creating a flowchart (for example, identify that the user needs to accept or reject an incoming order or reservation, but don’t break it down by ‘if yes, then’ and ‘if no, then’).
- For complex systems, it may be helpful to start by adding data stores to the diagram and working outward to each of the processes involved – it is likely that single data inputs are used or accessed repeatedly.
- Ensure that there are no freestanding activities – only include processes that have at least one data flow in or out.
- Review labels to be sure they are concise but meaningful.
- Try to limit each data flow diagram to a maximum of 5-7 processes, creating child diagrams where appropriate or required.
- Consider numbering the processes to make the diagram easier to review and understand.
- A successful data flow diagram can be understood by anyone, without the need for prior knowledge of the included processes.
Using A Data Flow Diagram To Mitigate Security Threats
The best way to protect data from security threats is to be proactive instead of reactive.
Data flow diagrams can support cybersecurity initiatives in many ways:
- Identify when data is at rest and in transit.
- Visualize when data is shared with external vendor systems.
- Know which users and systems have access to which data, at which time.
- Enable the notification of affected users, systems, and vendors in the event of a security breach or threat.
- Understand the schedule of automated processes to know when data is being offloaded or consumed.
To best support the mitigation of security threats, data flow diagrams should include all risk assessments (corporate governance, external vendors and ancillary systems, and key business processes), complete inventory listings (hardware and software systems), and all user roles that have and require access to data at every point.
For targeted threat modeling, it may be helpful to create additional data flow diagrams to support a specific use case. One example would be a diagram that looks at authentication separate and apart from the workflows and processes that access will be granted to.
Comprehensive data flow diagrams ultimately show where the systems make data vulnerable. Threat modeling best practices generally consider data safest when at rest, so look to points in data flow diagrams where data is sent or received to ensure security and integrity are maintained.
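Continuing the idea of treating the diagram as data, here is a small, self-contained Python example of using flow annotations to surface those points; the systems and flows named are hypothetical.

    # Hypothetical example: annotate each flow with whether it is encrypted in
    # transit, then flag flows that leave the organization or travel unprotected.

    flows = [
        # (source, destination, data, encrypted_in_transit)
        ("Web App", "Orders DB", "order record", True),
        ("Orders DB", "Analytics Vendor", "customer emails", True),
        ("Batch Job", "Backup Server", "full database dump", False),
    ]

    external_parties = {"Analytics Vendor"}

    for src, dst, data, encrypted in flows:
        risks = []
        if dst in external_parties:
            risks.append("shared with an external vendor")
        if not encrypted:
            risks.append("unencrypted in transit")
        if risks:
            print(f"{data}: {src} -> {dst}: " + "; ".join(risks))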
A Living Part of System Documentation
Don’t forget that data may move through systems and processes in non-technical ways as well. Paper-based or non-technical business processes where information is gathered or stored should also be included in data flow diagrams.
Data flow diagrams should become a living part of system documentation and be thought of as a source of truth. As systems and processes are updated, it’s important that the consequences to data flow or data integrity are considered and reflected in any existing diagrams.
5G is often heralded as the future of communications technology. It’s actually the antithesis. It is the anti-Internet, clawing intelligence back into the network and limiting innovation. With 5G, consumers would once again be limited to a choice of offerings, and a new generation would rediscover the busy signal.
From Services to Opportunity
5G represents a threat to the level playing field and innovation of the Internet. It is the new face of the battle over network neutrality.
At its inception in the 19th century, the talking telegraph (AKA the telephone) was an amazing feat of engineering using analog technology. One could speak into a microphone in one city and be heard in another city. Accomplishing this required a very large investment in technology. The customers were consumers of the service.
Today, we have generic computing platforms and generic connectivity (The Internet). This has made it possible for anyone to write a telephony application and share it with others. The nature of consumer technology has changed.
We no longer need to depend on a provider for services such as telephony. We have many companies offering not only voice over IP (VoIP) but video too. Using open APIs, those with programming skills (or toolkits) can implement such applications themselves. Despite the term “Consumer Technology”, we can each be creators and contributors.
5G is at odds with this movement of functionality from the providers’ networks into our devices. We are moving from intelligence in the network to intelligent devices. As I see it, 5G seems more like the past of networking rather than the future.
My view is that the IEEE has a responsibility to take a policy-neutral stand and treat 5G as one of many approaches to connectivity.
From Consuming to Creating
In 1956 a federal appeals court ruled against ATT and in favor of Hush-a-Phone (nothing more than a box used to provide some privacy). ATT provided telephony as a service and argued that Hush-a-Phone degraded their service. Every element of a trillion-dollar worldwide phone system was built towards supporting that one service. They were the provider, and customers consumed their telephony service.
5G is an attempt to return to the days before Hush-a-Phone, using the 5G radio as a MacGuffin. The argument is that without 5G, we can’t have a world in which people can casually assume video conferencing and connected devices will just work, because only a phone company can do voice and video. This from the industry that couldn’t make a business of Picturephone after forty years of trying. Zoom and others succeeded because they changed the rules and didn’t try to capture value in the network. The consumers have won, and that’s an existential crisis for telecom.
Today a “telephone” is merely an app using generic connectivity (such as Wi-Fi). While we still use the word consumer, we are as much producers as consumers. We can harness the technology to create and, by sharing software, we can share our creativity with others. Many have the skills and potential to write a telephone app.
In that sense, everything has become consumer technology as we use software to reinvent the world. You can get a phone app from Skype rather than your service provider.
As I wrote in “From Broadband to Infrastructure” this shift from telephony being a network service to being implemented as an app requires
a shift from purpose-built infrastructure to generic connectivity and generic computing. From special purpose circuits to generic microprocessors.
The Internet has changed how we think about connectivity. With 5G, the traditional providers are trying to put the genie back in the bottle. The IEEE should play a leadership role in helping policymakers to understand this new landscape. This is why I’m concerned when I see 5G declared as the future of networks.
The Saga of Red/Green
Before we get to 5G, it’s worth looking back at why the red/green analog interface was so powerful. In the 1980s, the carriers introduced their digital service, ISDN (Integrated Services Digital Network). It extended the intelligent phone network all the way to the premises (home or office) and, at first glance, that seemed wonderful.
But users (such as myself) had taken advantage of the simplicity of the red/green analog wire interface to attach our own devices (even before the 1968 Carterfone decision made it legal). By the 1990s, modems ran at 56Kbps, the same speed as an ISDN B channel. The only reason they didn’t go faster was that the repurposed digital telephone network had implemented a hard limit of 56Kbps in its protocol.
Consumer innovation outpaced telecom. This is a recurring theme. I had proposed replacing the standard equipment in the central office (line cards) with digital versions that would automatically offer DSL capabilities at a very low cost. At $100/line, we could’ve had universal broadband in the 1990s!
From Telephony to The Internet. And Back Again.
Telephony was an amazing achievement of 19th-century engineering. Different telephone companies had competed for customers but getting them to cooperate and interoperate was difficult, so ATT was given stewardship of the phone network as a regulated utility.
Television, in the 1930s, was another amazing feat of engineering. It required every element of the system to be engineered to microsecond precision using analog electronics. Unlike the telephone system, you owned your television just like you owned your radio. Interoperability was achieved through standards and licensing technology.
The IEEE played a vital role in creating these standards. Today’s Consumer Technology Society is a direct descendent of the IEEE Broadcast Society. The IEEE itself was formed by the merger of the IRE (Institute of Radio Engineers) and the AIEE (American Institute of Electrical Engineers). Value was created using electronics (radios) and electricity (power engineering).
Getting all elements of a telephone network to work together and preserve the waveform over a long distance is very difficult and expensive. Note my careful use of words: “telephone network” rather than “communications network” and “waveform” rather than “signal”. The latter words were borrowed from day-to-day language because their technical and common meanings were in tight alignment. Words like “network” are so generic that they make it easy to talk past each other and confuse networking-as-a-service with social networks.
Continued use of those words blinds us to how today’s connected world is based on fundamentally different principles from traditional telecommunications networks. Rather than using purpose-built systems, we use generic connectivity and software to create disparate solutions. On the surface, nothing has changed, but behind the scenes, the story is very different.
The Discovery of “Best-Efforts”
Digital technology was developed, in part, to reduce the cost of telephony. A digital signal can be regenerated because it has discrete states. In the simple case, a signal is one or zero. Von Neumann’s architecture represented a big breakthrough since a string of bits could be interpreted as an instruction (an opcode or operation code), allowing the use of a generic interpreter instead of purpose-built logic.
In a period of about twenty years from the 1940s to the 1960s, we went from plug-board computing (wired logic) to today’s modern operating systems (Multics and Unix).
We also used modems (Modulator/Demodulator) to repurpose the telephone network as a data communications network.
At first, this was a poor fit since staying dialed up occupied dedicated resources. Dedicated circuits made sense in the days of analog technology and, initially, the digital technology was used to emulate the analog phone network, including preserving the characteristics of a dedicated circuit by dedicating resources along the path.
The idea that we need to dedicate a path to a particular connection is at the heart of 5G. In the 1990s, using those voice circuits for modem connections created a crisis. The resources were unavailable as long as the user was connected, even if the connection was idle most of the time. By sharing the connection without making latency promises, the scarcity disappeared.
In effect, 5G brings back the busy signal. If you can’t get the resources you need, you are out of luck. This is one of the ironies of 5G. By making promises, they guarantee failure. Dedicating resources to some leaves nothing available to others. At its heart, the Internet is a way to share a common infrastructure using protocols such as TCP (Transmission Control Protocol). 5G replaces this distributed control with central authority demanding payments.
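A rough illustration of that distributed sharing: each sender probes for capacity and backs off when it sees loss, and the flows converge on a fair share with no central coordinator. The toy Python simulation below sketches this additive-increase/multiplicative-decrease behavior; it is not an implementation of TCP.

    # Two flows share a link with no central controller: each grows its rate
    # slowly and halves it whenever the link is over capacity ("packet loss").
    # Starting from very unequal rates, they end up near an equal share.

    LINK_CAPACITY = 100.0
    rates = [80.0, 10.0]

    for step in range(200):
        if sum(rates) > LINK_CAPACITY:
            rates = [r / 2 for r in rates]   # multiplicative decrease on loss
        else:
            rates = [r + 1 for r in rates]   # additive increase while there is room

    print([round(r, 1) for r in rates])      # the two rates end up close together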
The idea of grouping bits together into packages and sending them over a computer network seems to have arisen independently in a number of places, including the work of Donald Davies in Britain. What is telling is that his goal was to interconnect computers rather than emulate traditional phone networks.
As I see it, the development of Packet Radio networks forced further innovation and brought software to bear on the problem of exchanging messages between computers using an unreliable medium — radios. The traditional approach was to build reliability into the network. But ALOHAnet was do-it-yourself and used software to program around the limitations of packet radios. I wrote about this in my first column.
At first, programming-around seemed like a clever one-off hack, but the idea turned out to be transformational: a new paradigm. Traditional engineering is layered in that you build elements and layers of abstraction for a purpose. Instead, we can use software to harness any available resources and discover what we can do with them.
One of telephony’s secrets is that composing the elements across disparate providers never really worked without analog shims. Complexity doesn’t scale very well.
One reason that IP (Internet Protocol) won is that it didn’t make any promises. It punted and put the burden on applications to do what they could with the available resources. The conventional wisdom was that this could not work for voice because the ear was so sensitive to glitches.
Or so it seemed. There was no need to challenge this assumption because there was already a perfectly fine and profitable voice network. I use the term complacent engineering for accepting the givens rather than challenging them.
The Discovery of Voice over IP as a Service
One VoIP origin story is that VocalTec, a small company in Israel, developed a simple software solution for handling jitter and packet loss on their local network, thus enabling voice to work. They were surprised to find that their customers were using the app across the wider Internet. It happened to work because, by that time, the capacity of much of the Internet had grown as a byproduct of the demand for broadband to support the web. (“Broadband” is another technical term that has been repurposed to mean “fat pipe”.)
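To give a sense of how little it takes to program around the network’s lack of guarantees, here is a hedged Python sketch of the basic trick: a small playout (jitter) buffer that trades a fixed delay for smooth audio and conceals the packets that never arrive. It illustrates the technique only; it is not VocalTec’s actual code.

    # Minimal playout-buffer idea behind VoIP on a best-efforts network:
    # packets arrive late, out of order, or not at all; the receiver delays
    # playback by a fixed amount so most packets are there when needed, and
    # conceals the rest. All timings are invented for the example.

    PACKET_INTERVAL = 20   # ms of audio per packet
    PLAYOUT_DELAY = 60     # ms of buffering accepted in order to absorb jitter

    # (sequence number, arrival time in ms); packet 3 is reordered, packet 5 is lost.
    arrivals = [(0, 5), (1, 28), (2, 51), (4, 95), (3, 97), (6, 131)]
    received = {seq: t for seq, t in arrivals}

    for seq in range(7):
        playout_time = PLAYOUT_DELAY + seq * PACKET_INTERVAL
        if seq in received and received[seq] <= playout_time:
            print(f"t={playout_time}ms: play packet {seq}")
        else:
            print(f"t={playout_time}ms: packet {seq} missing, conceal the gap")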
VoIP, in itself, was an invention. The discovery was that it could act like the traditional phone network without needing to build voice into the network. Today VoIP calls are bridged to the traditional network.
Broadband was attractive to providers because it allowed them to sell additional services without an additional cost of dedicated facilities.
I played a role by getting Microsoft to ship IP (Internet Protocol) support as a standard part of Windows along with support for Network Address Translators (NATs), which, today, we call routers (even though technically they are not routers). “Internet” was meant to be just one service in the mix along with web, interactive television, e-commerce, etc. The NAT changed this by enabling the user to do everything with a single connection. That came to include voice and video. The problem is that the provider didn’t get additional revenue from that value created. Oops.
In 2003 Skype was founded. Though they were not the first to do VoIP, their app provided phone calls as a global service despite the telecom providers’ efforts to limit competition. This gives the lie to the idea that VoIP needs a special network. Not only that, Skype could offer video! This is very counter-intuitive. The reason Skype could offer video at no additional cost is that they didn’t guarantee it would work. It just happened that as the generic capacity of the Internet grows, new services become possible. Netflix is another beneficiary of the new opportunities.
Video from the telcos failed because they had to charge a high price for dedicated facilities on a specially built network — just like 5G.
Today we casually expect video conferencing to just work everywhere! And, again, none of the added value goes to the traditional providers. Video is now consumer technology. https://jitsi.org/ is an open-source video conference capability that you can host yourself! Buying video as a service is an option but not a requirement.
All this value is created outside the network. Providing the infrastructure to enable all these new services has become an economic problem rather than a technical one. We need connectivity to be available as infrastructure, but we can’t finance it out of service revenue. In that sense, it is like roads and sidewalks.
Yet public policy is still centered on the notion that we can fund the infrastructure by selling services and that the Internet is just another television channel.
5G is the fifth generation of cellular telephony. The big surprise of the fourth generation (LTE or Long Term Evolution) is that we didn’t need a special voice network because we could use VoIP techniques over a data network.
The defining premise of 5G is that the networks should be aware of each application’s needs. This was indeed true in the early days when every element had to be tuned to each application. Hush-a-phone made sense in that context — it was simply an element of the telephone network. Another special network was needed for video. This doesn’t mean that you had a separate physical network. It is the job of the network control plane to resolve conflicts between the competing requirements for the shared facilities.
The fast lane is one way for applications to buy priority. Selling such lanes and hosting applications is the business model of 5G.
This is why the concept of best-efforts is so disruptive and why it is so hard to understand. It allows the applications to resolve the conflicts among themselves without the need for a control plane. In fact, it can work better than a control plane since, instead of a busy signal, the application can adapt, such as using text instead of voice. We’ve had situations where people have been unable to communicate because they can’t get a good cellular signal, but there was enough connectivity for the diagnostic messages. That capacity could’ve allowed texting for help.
Recently T-Mobile had a major outage. One of the reasons they cited was their dependency on IMS. Another reminder that traditional telecommunications architecture is not very resilient.
Embracing best efforts requires changing our metric of success. In the 1970s, we could measure call completion by determining whether the other phone rang. By that measure, there was no need for an answering machine to take a message. Once users could create their own solutions, answering machines became the norm.
Without the ability to take capacity from the commons and sell it to the highest bidder, how does a provider make money? There are many examples of how to fund public infrastructure such as roads and sewer systems. How should the IEEE navigate a transition which threatens the current shareholders?
5G – The Future that Was to be
In 2004 we saw IMS (IP Multimedia Subsystem), based on the premise that we need a special control plane in order to make multimedia work. This was despite the fact that multimedia was working quite well. At some point, people figured this out, and Lucent’s stock price plummeted. But the basic idea that you need to build applications into the network did not die. It simply hibernated and emerged as 5G even though its use cases are already working quite well.
The 5G radio
As Cisco’s John Apostolopoulos observed, Wi-Fi 6 and 5G radios are basically the same, the main technical differences being the frequency bands and the economic model. This is entirely a policy decision. If anything, Wi-Fi 6 has a compelling advantage because it can interoperate with the existing Wi-Fi infrastructure, whereas 5G requires spending billions of dollars to achieve the same thing without any of the synergies of Wi-Fi!
The primary difference is economic, and telecom providers must recoup billions of dollars invested in a brand-new infrastructure. App developers, on the other hand, can’t depend on 5G and thus do not drive demand for 5G radios in the interim.
Both radios offer very high performance within the same radio. The problem is extending such high-performance guarantees beyond the radio.
But, before we leave the subject of the radio, isn’t it strange that 5G is entirely about wireless? If the protocols are so important, why aren’t they available via fiber (or other wires) and Wi-Fi? T-Mobile further muddies the water by rebranding its service as 5G using existing radios but with 5G network protocols.
These are all tell-tale signs of what I call marketecture. It looks like system architecture but is designed by the marketing department. I wrote about a striking example in my previous column. The Android TV box that Verizon Wireless provides is branded as “5G” but contains no 5G technology. It can work with any Internet connection.
The 5G network
The story of 5G is reminiscent of the challenge of IMS in the assumption that there is a need to extend such promises. Creating the perception that there is a need is a marketing challenge. Part of this is getting researchers on board by labeling their work on radio technology as 5G, thus creating the appearance of a strong body of research supporting the business model. The 5G radio becomes a MacGuffin or element that serves the purposes of furthering the larger narrative.
This chart from Huawei is the best I’ve found for explaining the real reason for 5G — the ability to violate network neutrality and sell fast lanes to the highest bidder. The more they can get engineers to develop systems that are dependent upon brittle promises, the more money they make.
This works at cross-purposes to innovation and consumer technology and takes capacity off the table. The worse they make the open Internet, the more money they make.
Perhaps the bigger goal is to get into the information services business. When ATT developed Unix and Minitel started offering services in France, there was a concern that ATT’s control of the network would allow them to stifle competition. 5G is an attempt to get into the information services business. Hosting services in their facilities means their customers — Amazon Web Services, Microsoft Azure, and others — are their competitors. This represents an inherent conflict of interest.
I recently came across the term MEC or Mobile Edge Computing. In a previous column, I was skeptical and asked where the edge is. In reading about 6G, I see that the term is being used for provider-owned facilities on the customer’s premises. This is a commercial version of the failed residential gateway — another attempt to return to the days before Hush-a-Phone, when the carrier owned the customer premises (to use a telecom term) gear, not the customer!
There is another audience for 5G — those who want to control what people do with the network. 5G is very good for authoritarian governments. It makes worries about Facebook and Google surveillance seem mild by comparison. If you depend on the network for security, you are really choosing who can snoop, not whether.
We can take a quick look at some of the applications used to justify 5G
- Speed and Reach. Yes, 5G radios can be higher performance than LTE if you are close enough. But that’s also true of Wi-Fi 6. The longer range of 5G comes from getting first dibs on frequency bands and not because of better technology. To the extent we treat frequency bands as property (a problematic idea), it should be part of our commons and not sold off to the highest bidder. Wi-Fi has shown how very well the shared-medium approach can work.
- Remote Virtual Reality. We already do remote gaming with Steam and other services. Remote VR seems to be based on the idea that if 5G radios give you the very low latency you can’t get from older versions of Wi-Fi, we can extend such promises over a network. The very characteristics that make it difficult with old versions of Wi-Fi make it difficult to extend such promises over an arbitrary network. There’s also the business question of why the focus on a zero-billion-dollar industry. But it’s a nice story.
- Remote-Control. If these applications require such precise timing, then they fail if there is the slightest network glitch. Oops. Any good systems engineers should focus on resilience rather than burst performance.
- Remote Surgery. Really? Take remote control to the next level and let people die if there is a network glitch? ‘Nuff said.
- Connecting Vehicles. There are a few strains of this:
- Robot Driving. This is the idea that there would be drivers housed in a building remotely driving and operating vehicles. Whether this makes economic sense, I can’t say. But, again, the remote-control systems must be designed to be resilient. When something goes wrong at 200kph, you can’t rely on a remote driver. Or if the vehicle loses the signal in a tunnel?
- Highly connected autonomous vehicles. The goal is autonomous. It is indeed useful to assure connectivity, but the vehicles should use generic connectivity rather than having a brittle dependency on 5G protocols.
- Coordinating cars. Having a complex network with cars tracking all other cars is at odds with a complex network. What we need is the simplest connectivity without the gratuitous complexity of 5G. And without the failure point of a billing system in the path. Hasn’t anyone learned the lesson of the failure of ATM? It can be useful to have ad-hoc routing to interconnect cars. Such ad-hoc routing requires a best-efforts approach in order to scale
- Software-Defined Network. Wait, hasn’t all network software-defined for the last half-century? Oh, you mean a control plane. Didn’t we learn that that is a terrible idea? And unnecessary?
- Network Function Virtualization. This is, perhaps, the big motivation. It’s just another term for cloud services. Calling them network functions obscures the fact that they are competing with their customers.
The use cases are all about making applications brittlely dependent upon a network provider. What they have in common is that they build on the assumption that each application needs a purpose-built infrastructure or, at least, special accommodations. This is why understanding the discovery of VoIP is so important and forces us to rethink that assumption.
It’s tempting to want to buy a dedicated lane on the highway, but those of us sharing the common facilities shouldn’t subsidize those who want to buy an advantage.
We also need to heed to examples of large projects that spend hundreds of millions of dollars on a single project and then failing. Generic infrastructure is a far safer bet and is highly leveraged because they allow for rapid iteration at low risk.
Building smarts into networks or cities rewards deep pockets and prevent innovation. That is at odds with the new world of consumer technology in which we are creators and not just consumers.
Alternatives to 5G
Imagine if we had an infrastructure that rewards innovation (such as VoIP) and gives new ideas a chance by creating a level playing field. If I wrote this column even a few years ago, I would have the burden of explaining that the best-efforts Internet could scale. Today I don’t need to — we take video conference for granted. It isn’t even remarkable! And that is remarkable.
The major lesson of the Internet is that we can composite a path out of locally funded facilities that are open to all. Instead of a provider owning the entire path, we simply interconnect facilities and use software to program-around issues such as congestion. We already have the protocol in TCP and are improving upon it.
The term “5G” won’t go away — there has been too much of an investment selling something called “5G” as the future. But we must think critically about what it means. Despite my concerns, 5G will play a role in the future. But it is not the one future of connectivity. The IEEE has a role to play in presenting 5G in a larger context. At the very least, it should distinguish between the 5G radio and the network protocols. For the radio itself, Wi-Fi 6 must be presented as another option — one that puts users in control.
The Consumer Technology Society has a particular role in a world in which consumers create and don’t just consume. Let’s assure that people and companies, both large and small, have the tools to be contributors.
This article was originally published in the IEEE Journal. | <urn:uuid:8098416f-f1b3-413c-9456-bd9eaa9a364d> | CC-MAIN-2022-40 | https://resources.experfy.com/consumer-tech/consumer-technology-vs-5g/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00721.warc.gz | en | 0.958973 | 5,315 | 2.546875 | 3 |
Clustering is the immense pool of technologies to catch classes of observations (known as clusters) under a dataset provided, that contribute identical features.
Clustering is arranged in a way that each observation in the same class possesses similar characteristics and observation of separate groups shows dissimilarity in characteristics.
As a part of the unsupervised learning method, clustering attempts to identify a relationship between n-observations( data points) without being trained by the response variable.
With the intent of obtaining data points under the same class as identical as possible, and the data points in a separate class as dissimilar as possible.
Basically, in the process of clustering, one can identify which observations are alike and classify them significantly in that manner. Keeping this perspective in mind, k-means clustering is the most straightforward and frequently practised clustering method to categorize a dataset into a bunch of k classes (groups).
Table of Content
What is K-means clustering?
Features and Limitations
Expectation-Maximization: K-means Algorithm
Working of K-means clustering
Applications of K-means clustering
K-means vs Hierarchical clustering
Beginning with Unsupervised Learning, a part of machine learning where no response variable is present to provide guidelines in the learning process and data is analyzed by algorithms itself to identify the trends.
Opposite to that, supervised learning is where existing data is already labelled and you know which behaviour you want to recognize from new datasets, unsupervised learning doesn’t exhibit labelled dataset and algorithms are there to explore relationships and patterns in the data. You can learn more about these types of machine learning here.
It is a known fact that the data and information are usually obscured by noise and redundancy so making it into groups with similar features is the decisive action to bring some insights.
One of the excellent methods in unsupervised machine learning treated for data classification, k-means suits well for exploratory data analysis to understand data perfectly and get inferences from all data types despite the data in the form of images, text content or numeric, k-means works flexibly.
( Prefered blog: (GAN) in Unsupervised Machine Learning)
What is K-means Clustering?
K-means algorithm explores for a preplanned number of clusters in an unlabelled multidimensional dataset, it concludes this via an easy interpretation of how an optimized cluster can be expressed.
Primarily the concept would be in two steps;
- Firstly, the cluster centre is the arithmetic mean (AM) of all the data points associated with the cluster.
- Secondly, each point is adjoint to its cluster centre in comparison to other cluster centres. These two interpretations are the foundation of the k-means clustering model.
You can take the centre as a data point that outlines the means of the cluster, also it might not possibly be a member of the dataset.
In simple terms, k-means clustering enables us to cluster the data into several groups by detecting the distinct categories of groups in the unlabelled datasets by itself, even without the necessity of training of data.
This is the centroid-based algorithm such that each cluster is connected to a centroid while following the objective to minimize the sum of distances between the data points and their corresponding clusters.
As an input, the algorithm consumes an unlabelled dataset, splits the complete dataset into k-number of clusters, and iterates the process to meet the right clusters, and the value of k should be predetermined.
Specifically performing two tasks, the k-means algorithm
Calculates the correct value of K-centre points or centroids by an iterative method
Assigns every data point to its nearest k-centre, and the data points, closer to a particular k-centre, make a cluster. Therefore, data points, in each cluster, have some similarities and far apart from other clusters.
You can learn k-means clustering by the example given in the following video,
Key Features of K-means Clustering
Find below some key features of k-means clustering;
It is very smooth in terms of interpretation and resolution.
For a large number of variables present in the dataset, K-means operates quicker than Hierarchical clustering.
While redetermining the cluster centre, an instance can modify the cluster.
K-means reforms compact clusters.
It can work on unlabeled numerical data.
Moreover, it is fast, robust and uncomplicated to understand and yields the best outcomes when datasets are well distinctive (thoroughly separated) from each other.
Limitations of K-means Clustering
The following are a few limitations with K-Means clustering;
Sometimes, it is quite tough to forecast the number of clusters, or the value of k.
The output is highly influenced by original input, for example, the number of clusters.
An array of data substantially hits the concluding outcomes.
In some cases, clusters show complex spatial views, then executing clustering is not a good choice.
Also, rescaling is sometimes conscious, it can’t be done by normalization or standardization of data points, the output gets changed entirely.
(Recommended blog: Machine Learning tools)
Disadvantages of K-means Clustering
The algorithm demands for the inferred specification of the number of cluster/ centres.
An algorithm goes down for non-linear sets of data and unable to deal with noisy data and outliers.
It is not directly applicable to categorical data since only operatable when mean is provided.
Also, Euclidean distance can weight unequally the underlying factors.
The algorithm is not variant to non-linear transformation, i.e provides different results with different portrayals of data.
Expectation-Maximization: K-means Algorithm
K-Means is just the Expectation-Maximization (EM) algorithm, It is a persuasive algorithm that exhibits a variety of context in data science, the E-M approach incorporates two parts in its procedure;
- To assume some cluster centres,
- Re-run as far as transformed;
E-Step: To appoint data points to the closest cluster centre,
M-Step: To introduce the cluster centres to the mean.
Where the E-step is the Expectation step, it comprises upgrading forecasts of associating the data point with the respective cluster.
And, M-step is the Maximization step, it includes maximizing some features that specify the region of the cluster centres, for this maximization, is expressed by considering the mean of the data points of each cluster.
In account with some critical possibilities, each reiteration of E-step and M-step algorithm will always yield in terms of improved estimation of clusters’ characteristics.
K-means utilize an iterative procedure to yield its final clustering based on the number of predefined clusters, as per need according to the dataset and represented by the variable K.
For instance, if K is set to 3 (k3), then the dataset would be categorized in 3 clusters if k is equal to 4, then the number of clusters will be 4 and so on.
The fundamental aim is to define k centres, one for each cluster, these centres must be located in a sharp manner because of the various allocation causes different outcomes. So, it would be best to put them as far away as possible from each other.
Also, The maximum number of plausible clusters will be the same as the total number of observations/features present in the dataset.
Working of K-means Algorithm
Don’t you get excited !!! Yes, you must be, let’s move ahead with the notion of working algorithm.
By specifying the value of k, you are informing the algorithm of how many means or centres you are looking for. Again repeating, if k is equal to 3, the algorithm accounts it for 3 clusters.
Following are the steps for working of the k-means algorithm;
- K-centres are modelled randomly in accordance with the present value of K.
- K-means assigns each data point in the dataset to the adjacent centre and attempts to curtail Euclidean distance between data points. Data points are assumed to be present in the peculiar cluster as if it is nearby to centre to that cluster than any other cluster centre.
- After that, k-means determines the centre by accounting the mean of all data points referred to that cluster centre. It reduces the complete variance of the intra-clusters with respect to the prior step. Here, the “means” defines the average of data points and identifies a new centre in the method of k-means clustering.
Clustering of data points (objects in this case)
- The algorithm gets repeated among the steps 2 and 3 till some paradigm will be achieved such as the sum of distances in between data points and their respective centres are diminished, an appropriate number of iterations is attained, no variation in the value of cluster centre or no change in the cluster due to data points.
Stopping Criteria for K-Means Clustering
On a core note, three criteria are considered to stop the k-means clustering algorithm
If the centroids of the newly built clusters are not changing
An algorithm can be brought to an end if the centroids of the newly constructed clusters are not altering. Even after multiple iterations, if the obtained centroids are same for all the clusters, it can be concluded that the algorithm is not learning any new pattern and gives a sign to stop its execution/training to a dataset.
If data points remain in the same cluster
The training process can also be halt if the data points stay in the same cluster even after the training the algorithm for multiple iterations.
If the maximum number of iterations have achieved
At last, the training on a dataset can also be stopped if the maximum number of iterations is attained, for example, assume the number of iterations has set as 200, then the process will be repeated for 200 times (200 iterations) before coming to end.
Applications of K-means Clustering
The concern of the fact is that the data is always complicated, mismanaged, and noisy. The conditions in the real world cast hardly the clear picture to which these types of algorithms can be applied. Let’s learn where we can implement k-means clustering among various
K-means clustering is applied in the Call Detail Record (CDR) Analysis. It gives in-depth vision about customer requirements and satisfaction on the basis of call-traffic during the time of the day and demographic of a particular location.
It is used in the clustering of documents to identify the compatible documents in the same place.
It is deployed to classify the sounds on the basis of their identical patterns and segregate malformation in them.
It serves as the model of lossy images compression technique, in the confinement of images, K-means makes clusters pixels of an image in order to decrease the total size of it.
It is helpful in the business sector for recognizing the portions of purchases made by customers, also to cluster movements on apps and websites.
In the field of insurance and fraud detection on the basis of prior data, it is plausible to cluster fraudulent consumers to demand based on their proximity to clusters as the patterns indicate.
K-means vs Hierarchical Clustering
K-means clustering produces a specific number of clusters for the disarranged and flat dataset, where Hierarchical clustering builds a hierarchy of clusters, not for just a partition of objects under various clustering methods and applications.
K-means can be used for categorical data and first converted into numeric by assigning rank, where Hierarchical clustering was selected for categorical data but due to its complexity, a new technique is considered to assign rank value to categorical features.
K-means are highly sensitive to noise in the dataset and perform well than Hierarchical clustering where it is less sensitive to noise in a dataset.
Performance of the K-Means algorithm increases as the RMSE decreases and the RMSE decreases as the number of clusters increases so the time of execution increases, in contrast to this, the performance of Hierarchical clustering is less.
K-means are good for a large dataset and Hierarchical clustering is good for small datasets.
K-means clustering is the unsupervised machine learning algorithm that is part of a much deep pool of data techniques and operations in the realm of Data Science. It is the fastest and most efficient algorithm to categorize data points into groups even when very little information is available about data.
More on, similar to other unsupervised learning, it is necessary to understand the data before adopting which technique fits well on a given dataset to solve problems. Considering the correct algorithm, in return, can save time and efforts and assist in obtaining more accurate results. | <urn:uuid:a9552839-ee27-4642-8098-753ce936a9ac> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/what-k-means-clustering-machine-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00721.warc.gz | en | 0.906157 | 2,823 | 3.640625 | 4 |
Last winter, two power distribution companies in Ukraine were breached, resulting in a blackout that affected more than 200,000 people. The hackers responsible “likely used BlackEnergy3 to get into the utilities’ business networks,” according to Wired’s Kim Zetter. From there, it was just a matter of navigating to operator systems and turning the lights off.
“The operator grabbed his mouse and tried desperately to seize control of the cursor, but it was unresponsive,” Zetter wrote. “Then as the cursor moved in the direction of another breaker, the machine suddenly logged him out of the control panel.”
Fortunately, there were no reports of terrorist activity during the outage. Nevertheless, the events set a frightening precedent. According to U.S. officials, energy infrastructure in the U.S. is just as vulnerable to the tactics used to bring down parts of Ukraine’s power grid. The possibility that an attack on the power grid could be used for politically motivated reasons, or as a form of terrorism, is no longer outside the realm of possibility.
Understanding the Stakes
Nearly every component of our modern infrastructure is in some way driven by the power grid. Hospitals, water systems, public transportation, traffic lights, surveillance cameras, chemical manufacturing plants, data centers and government agencies are just some of the essential amenities that could be severely disrupted in the event of a premeditated attack against the power grid – to the extent that lives could be put in danger.
Offline traffic lights could cause gridlock, making it difficult for emergency responders to reach their destinations in a timely manner. Public transportation shutdowns could leave passengers stranded underground. Security systems could fail, resulting in an escalation in crime. Hospitals with backup generators will more or less be running against the clock. Perhaps most frightening of all, the attackers responsible for knocking the grid offline could use the outages to act on more sinister intentions.
Guarding Against the Worst
The fact that some lines of malicious code can precipitate apocalyptic conditions is a terrifying prospect, but it’s an evil that comes with the convenience of living in an internet-connected world.
That said, it’s important to understand that there are ways to keep systems clean on an ongoing basis. While you may not be able to prevent every instance of malware from occurring, there are ways to wipe any threats living on the system – and that’s a vital component of protecting the power grid. In fact, according to Wired, it’s still unclear as to when the hackers got into the system, but operators were fending off spear-phishing attacks starting as early as March 2015.
With a tool like Faronics Deep Freeze, it’s possible to sanitize critical computer systems, with a daily/ weekly/ custom automated maintenance schedule, using a simple restart. The ‘reboot to restore’ technology effectively eradicates any system configuration changes, thereby ensuring that malware cannot endure on a system long enough to cause problems.
With so much at stake, the time to start guarding against the worst is now. Contact Faronics to learn more. | <urn:uuid:8cfca17f-2232-44d5-8df8-cd31eae45ab2> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/protecting-mission-critical-systems-securing-energy-grid | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00721.warc.gz | en | 0.953794 | 645 | 2.875 | 3 |
The recent unveiling of Microsoft’s new Surface tablet highlights how quickly we’re moving into new, highly mobile era of computing. Marc F. Bernstein, former superintendent of Bellmore-Merrick Central High School District and Valley Stream Central High School District in New York, wrote in an opinion piece for Newsday that schools should be moving as quickly as possible into the new digital era.
“The traditional, antiquated model – 25 to 35 students sitting in tidy rows with a single teacher in the front of the classroom – now can and should be replaced by a blend of interactive online education and face-to-face teacher-student interactions focused upon a motivating curriculum,” Bernstein said in the piece. “You would’ve thought the technological revolution would have already had a greater impact on schools. But so far, its impact has largely been limited to providing Advanced Placement courses in smaller, rural communities, computer labs and the occasional incorporation of laptops into individual classrooms.”
Bernstein writes that the high cost of education has resulted in public education being threatened. He believes that to move forward and improve, it will take changes in the instructional method rather than organizational changes to the schools and districts.
Bernstein proposed high school courses with online lessons and teacher-directed lessons, with each occurring on alternate days. This is something like the flipped classroom method, in which students watch lectures via computer and then spend classroom time doing “homework” or collaborative projects.
“This technological revolution must come to public education if America’s children are to be competitive in the global economy,” Bernstein said. “Our students are ready to embrace interactive online learning, as long as they don’t lose the crucial human element provided by excellent teachers.”
One example of where technology could go a long way toward helping a school save time and hopefully improve educational outcomes is in Arizona. The Mesa Public School District is asking voters to approve a $230 million bond to fix aging schools, purchase buses and bring in new classroom software and other technology.
Schools could save time and space with the use of technology as a teaching tool, but the district said it can no longer rely on the state’s School Facilities Board to help with technology procurements or facility repairs, the news source said.
What kind of positive effect do you think technology could have on schools? Do you see classroom management running through technology as a viable approach? Tell us what you’re thinking! | <urn:uuid:cb4e77b3-fd54-44f7-9ae5-e79491c4804f> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/public-schools-could-be-bolder-in-using-technology | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00721.warc.gz | en | 0.950551 | 511 | 3.078125 | 3 |
When DataGrid, Inc. Announced it successfully developed an AI system capable of generating high-quality photorealistic Japanese faces, it was impressive. But now the company has gone even further. Its artificial intelligence (AI) system can now create not only faces and hair from a variety of ethnicities, but bodies that can move and wear any outfit. While these images are fictitious, they are incredibly photorealistic.
Using GAN to Help AI’s Creativity
The Kyoto-based startup focuses on creative applications of artificial intelligence. DataGrid uses generative adversarial networks (GANs), so the AI can “learn” from a database of existing images and then from there it can generate its own versions.
GANs are one of the latest developments in the ever-evolving field of artificial intelligence. GANs were first proposed in 2014 and essentially pair two AI systems together that battle it out where one system creates while the other critiques the outcome, learn from the experience and adjust to improve the quality of results. GANs can create “new” information from following the existing rules and are a very exciting development.
While DataGrid’s artificial faces weren’t the first to be generated, with the help of GANs the “humans” they created are the most believable, yet plus the bodies have the capability of movement. The company’s website explained that this involved two lines of research and development, including “whole body generation” and “motion generation.” In the past, GAN-generated images would often have tell-tale flaws that allowed you to see they are were artificial such as asymmetry in the eyes or ears and even backgrounds blending into the faces. DataGrid minimises these flaws because they use a non-descript white background and realistic light shining down on the computer-generated models.
How will this technology be used?
DataGrid plans to licence the technology to advertising agencies and clothing companies so they can use it to create artificial models that are photogenic and the right shape and size for any marketing campaign.
The Swedish fashion chain H&M admitted to using computer-generated models on its website after it was confronted and challenged about “uncanny similarities” with the models. In this case, the heads of real models were superimposed on the same body. While the company emphasised that their process simplified photo shoots and would allow customers to focus on the clothes rather than the models, there was a backlash by others saying these computer-generated images set an unrealistic body image.
There is also concern that this technology could be used for more nefarious reasons. Imagine what can happen when entire bodies of people can be generated, and it’s challenging if not impossible to determine what’s real and what’s fake.
DataGrid suggested they would offer this generative technology to other fields without going into any specifics. While it’s only speculation what those fields might be it’s plausible that instead of television news anchors or sports broadcasters, a cable or local news station might “create” its talent using this technology instead of hiring human journalists. Artificial humans could be used in any marketing endeavour to replace human actors or spokespeople. Speaking of actors, would the ability to create artificial humans reduce and eventually eliminate the need for human actors on television and films?
Now that the technology has advanced to the point, it’s challenging to determine real from fake, what expectations should we have about disclosures regarding what is an “artificial human” versus a real person? At this stage, the focus is on creating entirely unique “artificial humans” rather than using an existing person’s likeness, but the same technology could create artificial versions of very real humans—such as already been done through deepfakes. There could be serious ramifications if an artificial human was believed to be the real thing (i.e. A political leader) and those listening to the message believed the message was being delivered by the actual person. Rumours and conflicts could easily be started based on what a notable person’s AI likeness says.
As one of the latest developments in artificial intelligence, it’s understandable why there’s hype around GANs and the artificial human beings it can create. As the hype gives way to real applications of the technology to create images, video, artificial human beings and more, it will be intriguing to watch it unfold. | <urn:uuid:9ddfe545-7f14-47c1-a3e7-601b297c73a1> | CC-MAIN-2022-40 | https://bernardmarr.com/ai-can-now-create-artificial-people-what-does-that-mean-for-humans/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00721.warc.gz | en | 0.946534 | 920 | 2.734375 | 3 |
Communication Service Providers(CSPs) can now use social information to create meaningful value for their customers, better outcomes for debtors, and have a positive social impact.
In most developing markets, there are few ways to asses a person’s ability to pay future debts, or calculate what most would call ‘credit worthiness’.
A typical credit history is compiled from various financial sources – beginning with a bank account, and typically extending to credit cards, loans, etc. But 48% of the world’s adults are considered ‘un-banked’; those without a bank account. Businesses often feel it is risky to extend credit to this population, but that’s over 2 billion people who are being underserved - almost ½ of the world’s population. This group of people may not have a bank account, but they still have incomes and spend money. In fact, according to the World Bank, the unbanked population is running 200 million small businesses and has an overall buying power worth $5 trillion. That’s too much money for businesses to ignore.
CSPs rely upon credit information for supporting their postpaid customers, but in many developing countries they often have little to none of the data they might traditionally use to make sound credit scoring and credit control decisions (for example, official proof of income and a credit history).
One of the prime barriers is that financial institutions typically reside in cities that are closer to a higher concentration of people and capital found in urban centers. These institutions have shied away from engaging rural populations because of high transaction costs due to poor infrastructure, and a remote, widely-dispersed client base. This creates a dearth of financial as well as vital records, creating a significant impediment to assessing a person’s credit risk. In regions like Africa, which is home to the world’s fastest growing middle class, many products and services remain out of reach.
But there is an opportunity for lenders to chart another path. Instead of utilizing banking records to determine creditworthiness, businesses can now benefit from increased computing power and new sources of information and data, such as mobile-phone usage patterns, demographic data from social network profiles, geolocation data, social media relationships and others, to build better risk models. With these assets, and with scrupulous attention paid to privacy laws and customer consent and preferences, CSPs can make responsible credit control decisions in low-touch and low-cost ways.
New risk management approaches based on social media information can provide the data that service providers need to create credit profiles for subscribers with little or no formal credit history. CSPs have ignored this group of customers for years due to their lack of solid credit ratings, not wanting to extend access to products and services may never be repaid. Excluded from the mainstream– these customers are left with few options. This exponentially expands a service provider’s customer base, without significant additional risk.
New data, new insights
The problem with traditional credit scoring tools is that they overly dependent upon past financial data as a guide to the future, but for a huge percentage of subscribers this data simply doesn’t exist. We believe there is more to the picture – valuable pieces of information that are being overlooked by the business community. For many, a better, or enhanced profile can be achieved through other online sources. With Social Scoring tools, service providers can take advantage of the explosion in data being generated from social media and other online sources.
What can social relationships and connections prove in terms of creditworthiness? Depending on how frequently a person posts online, social media can give us powerful insight into how and where a subscriber spends a good portion of their time or income. For instance, a new subscriber that lives in São Paulo but spends a lot of time vacationing in Florida may be a low credit risk, because his location hints that he has the means to spend money traveling abroad. Another social scoring capability is the ability to automatically correlate and analyze comments a user posts to Facebook or Twitter. A person’s friends and associates can also provide clues for creditworthiness and be a good indicator for predicting potential credit problems in the future.
I believe that applying algorithms on top of an applicant’s behavior-based data provides a more complete picture than just marking off a checklist of credit scoring requirements. After all, spending patterns that might make sense for a “soccer mom” in the United States of America might be deemed suspicious by an unemployed housekeeper in Nigeria. When looking for correlations in a wealth of data there may not be a single “right answer”. | <urn:uuid:99c92d9c-a694-41cb-8daf-fb30fcf3e877> | CC-MAIN-2022-40 | https://blog.mobileum.com/how-mobile-phones-can-help-build-credit-profiles-for-the-unbanked | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00721.warc.gz | en | 0.946937 | 952 | 2.59375 | 3 |
In this cloud training tutorial, we’re going to cover the cloud service model, IaaS, Infrastructure as a Service. Scroll down for the video and text tutorial.
This is part of my ‘Practical Introduction to Cloud Computing’ course. Click here to enrol in the complete course for free!
Cloud IaaS Infrastructure as a Service Video Tutorial
NIST defines IaaS as, “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer can deploy and run arbitrary software, which can include operating systems and applications.
The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components, for example, host firewalls.”
IaaS is the service model that gives the customer the most control; they get access down at the operating system level.
Let’s have a look at that by looking at the data center stack. With IaaS, the provider will manage from the facility up to the hypervisor and the provider will also install the operating system and may patch it as well.
The customer gets access from the operating system level, so they can customize the operating system as they want. They also install the applications they want on there, and they look after their own data.
Let’s have a look and see how that works in AWS. I’ve logged into the Amazon Web Services console and clicked on the instances tab and you can see the virtual machine that I created earlier.
I’m going to click on the connect button and I’ll get a pop-up. The first thing that I need to do is to find out what the administrator password is because this is the first time I’m connecting to this virtual machine. I’ll click on the get password button and then I need to browse to the Key Pair that I downloaded earlier.
In my downloads folder, there's that Demo.PEM file. I'll double click on that, and now I can click on the decrypt password button and it shows me the password that was created for the administrator account.
I’m going to copy that into my clipboard and the next thing I’m going to do is download the remote desktop file, which is going to make it easy for me to connect with RDP.
I’ll click on OK to download that into the downloads folder. I will go to my downloads folder, there is the RDP file, I’ll double click on that and click on connect. Now I’m going to paste in the password that I copied and click on OK. Click YES, to the warning message and then this should log me into the desktop of the virtual machine that I created.
I can see my virtual machine is ready, I’m on the desktop now. I’m in Windows and what I would do now is I would install whichever applications I wanted to use this virtual machine for. Another thing that I would do at this point is to change the administrator password.
There’s a bit of a misconception where people think that if you’re on Infrastructure as a Service the cloud provider also has access to your machine so it’s not secure. That’s not that case at all, it’s a best practice that the first thing you do is change the administrator password and then it’s only you that has got access to the desktop of your virtual machines, the provider does not have any access at all. It is a secure solution.
You can see with Infrastructure as a Service that the provider is providing the underlying infrastructure and they installed the operating system for me, I get in at the operating system level at the desktop and I can do anything I want with the virtual machine from there.
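If you wanted to script those first-connection steps rather than clicking through the console, you could do it against the AWS API. Here's a rough Python sketch using boto3 and the cryptography package; the instance ID and the Demo.pem key pair file are placeholders for your own, and I'm assuming the Singapore region used later in this tutorial.

```python
import base64

import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Fetch the encrypted administrator password for the new Windows instance
response = ec2.get_password_data(InstanceId="i-0123456789abcdef0")
encrypted = base64.b64decode(response["PasswordData"])

# Decrypt it with the private key from the key pair downloaded at launch
with open("Demo.pem", "rb") as key_file:
    private_key = serialization.load_pem_private_key(key_file.read(), password=None)

admin_password = private_key.decrypt(encrypted, padding.PKCS1v15()).decode()
print(admin_password)  # use this password to RDP to the instance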
Gartner IaaS Magic Quadrant
Let’s have a look at the most well-known IaaS providers through the Gartner IaaS magic quadrant below. Gartner is a research company and they research who are the biggest players in cloud services amongst a whole heap of other things. We’ve got Amazon Web Services up here in the top right.
AWS is by far the biggest cloud provider. They’re currently bigger than all of their competitors combined, however, Microsoft Azure is gaining market share. They’ve been able to do that because it can be a very cost-effective option since pretty much all companies are Microsoft customers in some shape or form. They’re probably using Windows as their desktop operating system.
Because they’re already a Microsoft customer, they can get cheaper options for using Microsoft for cloud as well. AWS is still by far the biggest player with IaaS right now.
In the bottom section of the magic quadrant, you’ll see other well-known cloud services providers:
- IBM (SoftLayer)
Different flavors are available for IaaS. Cloud providers will often offer these three:
- Virtual machines on shared physical servers
- Virtual machines on dedicated physical servers
- Dedicated bare-metal physical servers
If you’re an IaaS customer, you don’t have to choose one of the three, you can mix and match between the three of them.
Virtual Machines on Shared Physical Servers
In virtual machines on shared physical servers, different customers can have their virtual machines on the same shared underlying physical servers.
Customer A could have a virtual machine on Physical Server 1 and Customer B could also have a virtual machine on that same shared underlying Physical Server 1. This is the least expensive option because you’re using shared resources. It’s cost-effective for the provider and they can pass those cost savings on to you as the customer as well.
Typically, it’s going to have the least amount of options in terms of how many vCPUs, RAM and storage settings available for the virtual machine out of the three possible flavors. The virtual machines can usually be provisioned more quickly than dedicated options. These can usually be provisioned very quickly, typically in less than 15 minutes. Since it’s the least expensive option, this is also the most commonly deployed option of the three as well.
Virtual Machines on Dedicated Physical Servers
In virtual machines on dedicated physical servers, the customer is guaranteed that the underlying physical server is dedicated to them.
If customer A has a virtual machine on Physical Host 1, no other customers are going to have any virtual machines on the Physical Host 1. Physical Host 1 is dedicated to Customer A.
This is a substantially more expensive option than virtual machines on shared physical servers because the provider has to dedicate physical hardware to the customer.

There are typically more choices here in terms of the vCPU, RAM and storage options available for the virtual machine. Since the customer has got dedicated hardware, they may be required to sign a minimum length contract for this, but not necessarily. It depends on the particular cloud provider.
Dedicated Bare-Metal Servers
The last of the three options is the dedicated bare-metal servers. With these, a customer is given access to their physical server down at the lower server level.
A hypervisor is not installed and managed by the cloud provider. The customer can either install an operating system directly on the server or they can install and manage their own hypervisor.
This is the most expensive option of the three, and it typically has the widest range of vCPU, RAM and storage options available. Again, the customer may be required to sign a minimum length contract. AWS, which is the biggest cloud IaaS provider, does not offer this option. With AWS, you currently only get the first two options; you can't get a dedicated bare-metal server with them.
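To make the difference between the first two flavors concrete, here's a hedged boto3 sketch of launching the same virtual machine with shared tenancy and then with dedicated tenancy on AWS. The AMI ID, key pair name and instance type are placeholders, and keep in mind that dedicated tenancy is billed at a premium.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Flavor 1: virtual machine on shared physical servers (default tenancy)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Windows or Linux image
    InstanceType="m5.xlarge",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
    KeyName="Demo",
    Placement={"Tenancy": "default"},
)

# Flavor 2: virtual machine on a physical server dedicated to this account
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="Demo",
    Placement={"Tenancy": "dedicated"},
)
```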
Looking at the data center stack where the provider is going to look after and where the customer gets in:
- Virtual machines on shared physical servers – Operating system level
- Virtual machines on dedicated physical servers – Operating system level
- Dedicated bare-metal servers – Compute level
With dedicated bare-metal servers, the operating system is not even installed yet, so the customer gets access down to the physical server using some kind of management application like IPMI or Lights Out, and they can install the operating system from there.
They can install any operating system they want. They could install Windows or Linux directly onto the hardware, or they could install a hypervisor like VMware or Citrix Xen Server. The choice of the operating system is up to them.
The hypervisor is optional: maybe they have a hypervisor, or maybe the OS is installed directly on the hardware. If the customer wanted to run just one workload on that particular physical server, let's say an Oracle database, and they want a high-performance server, then the Oracle database is going to be the only thing running on the server.

In that case, they would install the operating system directly onto the hardware. They wouldn't put a hypervisor in there because it's another layer: another thing that can go wrong, and it would add overhead as well. They want the best performance for that particular workload, so they install the OS directly on the hardware.
Virtual Machines on Dedicated Physical Servers vs Dedicated Bare-Metal Servers
The most common reason to choose virtual machines on dedicated physical servers is for compliance. The customer may have some kind of regulatory requirement that means that they can’t have virtual machines on shared physical servers.
Dedicated bare-metal servers will also fulfill the same compliance requirements. Both of these options require dedicated physical servers for the customer. The cost is typically similar, with bare-metal servers maybe being a little more expensive.
One reason a customer may prefer virtual machines on dedicated physical servers is if they do not have the expertise in-house to manage the hypervisor. With dedicated bare-metal servers, running a hypervisor requires them to install and manage it themselves, so they'll need staff with expertise in that area.
With virtual machines on dedicated servers, the provider is going to install and manage the hypervisor for them. They will just choose the operating system they want when they spin up the virtual machine and they get in at that level. That’s the better option if they don’t have IT staff who have got expertise in server virtualization.
Customers can also be offered options for shared or dedicated network infrastructure appliances like firewalls and load balancers.
Again, it depends on the particular cloud provider if they’re going to offer those options or not. Customers can typically connect into the cloud providers’ data center over the internet and/or via a direct network connection.
With the storage options, the customer will typically have the option of:

- local hard drives in the server
- an external SAN
The customer also has the option of managing their own storage operating system on a virtual machine or bare-metal server.
With IaaS, the customer gets in at the operating system level, so they could install some storage management software in the operating system and look after their own storage. The most common reason for doing this would be if they want to look after their own encryption.
The customer may also be able to install their own physical storage system in the cloud provider's data center. Maybe they've got a storage system from a company like NetApp or EMC; they can install that in the data center and connect their servers to it. Again, it depends on the individual cloud provider.
The customer can manage their servers to install applications, patches, etc., through standard remote management methods such as:
- Remote Desktop for Windows Servers
- Secure Shell for Linux
API is also typically available to allow for automation of common tasks, such as provisioning a new virtual machine.
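Provisioning was sketched earlier, so here's a hedged example of another common task you might automate through the API: finding development servers by tag and powering them off overnight, so the shared virtual machines stop accruing CPU and RAM charges. The Environment/dev tag is just an assumption for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Find all running instances tagged as development machines
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

# Power them off; a scheduled job could start them again in the morning
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```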
The customer may also have the option of applications such as Microsoft SQL or antivirus. For the installation of the application, they can either:
- Install the application and look after the licensing themselves – Capital Expenditure (CapEx)
- The cloud provider will do it for them – Operational Expenditure (OpEx)
What I’m talking about here is when the customer gets in at the operating system level and they want to run SQL Server on their virtual machine, they could install SQL Server themselves and they would have to provide the license.
If the provider provides this option, the provider can install SQL for them and then the customer just pays for the license as a monthly fee to the service provider rather than providing the license themselves.
The cloud provider may also offer to manage the application as well. They might have, in our example, SQL database administrators on the provider’s staff who can look after the database for the customer.
Let’s have a look at how the billing works with IaaS. For virtual machines on shared physical servers, the CPU and RAM will typically only be billed when the virtual machine is powered on.
The physical CPU and RAM in the underlying server hardware will be available for use by other customers when the virtual machine is powered off. The provider isn’t going to charge you as the customer for CPU and RAM usage when you’re not using it because you’ve powered the server off, so you can get some cost savings there.
Network bandwidth will be billed as it’s used, some usage will typically be bundled in if you’ve got a monthly plan. Data storage will typically be billed whether the virtual machine is powered on or off as the data is always going to be there and taking up physical storage space. Optional software extras, such as a Windows operating system or a SQL server will be billed as a flat monthly fee.
If you’ve got Linux in your virtual machine, Linux is a free operating system so there’s no additional charge for that. But if you want to be running Windows in your virtual machine, then, there’s a fee for that operating system, so the service provider will include it in the charge.
Let’s have a look at some examples of billing. In AWS, let’s use AWS simple monthly calculator which is a tool that you can use to estimate what your monthly charge is going to be every month.
The first thing we do, up at the top, is choose the region, because the charges differ slightly between regions. I'll choose Singapore here, and then under the Amazon EC2 Instances section you add your virtual machines that are on shared servers.

I'll click on the plus button and then select the type. Let's say that we have a virtual machine with four vCPU cores and 16GB of memory, which is the T2 extra-large type in Amazon. I'll select that, then close and save.

You can see that the monthly fee for this server, if I had it powered on 100% of the time, would be a little over $175. You can also select your other options here; for example, I also required 500 gigabytes of SSD storage. When I enter my additional storage, that cost is added as well, which was another $50 per month. Further down you can add all of the other options. So that's how you can figure out your bill on AWS.
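You can sanity-check that estimate with a few lines of arithmetic. The rates below are illustrative assumptions chosen to roughly match the calculator figures above; they are not current AWS prices, so always confirm against the pricing pages or the calculator itself.

```python
HOURS_PER_MONTH = 730          # average hours in a month

vm_hourly_rate = 0.24          # assumed $/hour for a 4 vCPU / 16 GB VM
ssd_rate_per_gb_month = 0.10   # assumed $/GB-month for SSD storage
ssd_gb = 500

compute_cost = vm_hourly_rate * HOURS_PER_MONTH    # billed while powered on
storage_cost = ssd_rate_per_gb_month * ssd_gb      # billed whether on or off

print(f"Compute: ${compute_cost:.2f} per month")
print(f"Storage: ${storage_cost:.2f} per month")
print(f"Total:   ${compute_cost + storage_cost:.2f} per month")
```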
Let’s look at another example, we’ll have a look at Telstra’s pricing structure. I’ve opened up their pricing guide, which comes as a PDF.
Telstra is the main Telco in Australia and they also offer IaaS services. Telstra offers both virtual servers and dedicated servers. Let’s have a look at how the billing works for the virtual servers on shared underlying infrastructure.
Telstra uses a monthly plan structure. If you spend $200 a month with them you get the extra small plan, with two vCPUs and 4 GB of RAM; if you go up to $4,000 per month you get 64 vCPUs and 256 GB of RAM.
These vCPUs and RAM can be divided up amongst multiple different virtual machines. For example, you could have eight virtual machines with eight cores each, or you could have 16 virtual machines with four cores each. You can mix and match.
They also have a pay-as-you-go plan, but you get a bit of a discount if you take one of the monthly plans.
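Here's a quick sketch of that mix-and-match idea, checking whether a given set of virtual machines fits inside the pooled resources of the largest plan mentioned above. The VM shapes are just examples.

```python
PLAN_VCPUS = 64     # pooled vCPUs in the largest plan
PLAN_RAM_GB = 256   # pooled RAM in the largest plan

def fits(vm_count, vcpus_each, ram_gb_each):
    """Check whether a set of identical VMs fits inside the plan's pool."""
    return (vm_count * vcpus_each <= PLAN_VCPUS
            and vm_count * ram_gb_each <= PLAN_RAM_GB)

print(fits(8, 8, 32))    # True: eight VMs with 8 vCPUs / 32 GB each
print(fits(16, 4, 16))   # True: sixteen VMs with 4 vCPUs / 16 GB each
print(fits(16, 8, 16))   # False: that would need 128 vCPUs
```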
Cloud IaaS Infrastructure as a Service Design Example
Now, I will show you the basics of how to do an IaaS design. The reason I’m going to do this is that if you are moving from purely On Premises to a Cloud solution, you can be tasked with doing the design.
Designing an IaaS solution is just like designing an On Premises solution which is accessed from a remote office. It uses the same data center design principles, it’s just that the data center hardware is in the Cloud Provider’s facility instead of in yours.
The hardware components that you’re going to use are the same, the way it’s all networked together is the same, the way it’s accessed is the same, and the way it’s secured is also the same.
Traditional On Premise Solution
What the network looks like in a traditional On Premise solution is shown below. We're accessing our servers in the company data center over on the left, from branch offices over on the right, and from teleworkers working from a hotel or at home, for example.

That's how we do the network design for a traditional On Premise solution, and the network for a Cloud IaaS solution looks exactly the same.
The only difference is that the servers are now in the Cloud providers data center rather than in our data center. For doing the design, we do the design just the same way as we’ve always done it traditionally.
IaaS Design Example
In the example, I’m going to use a pretty standard three-tier eCommerce application:
Compute and Storage
The first thing to consider is compute: of those three flavors of IaaS, which one are we going to use for each of the server types?

- Front end web servers
- Middleware application servers
- Database server at the back end

Once we've made those decisions for each tier, that's the compute taken care of. The next thing to consider is what we're going to do for the storage.
For the front end web server and the middleware application servers, we’re going to have multiple of those servers, but they’re all going to have the same content on there.
We’re going to put the contents into a Server Farm for both types of the two different servers. The easiest option we’re going to have for the storage there is to use SAN storage for them.
For the back end database server, let’s say that for this example we have got high-performance requirements for the storage as well, we need a certain amount of IOPS there, so in that case, we’re going to use local discs in that dedicated bare-metal server to get the highest possible storage performance.
The next thing we’re going to look at is networking. With this three-tier eCommerce application, traffic is going to come in from external customers over the internet.
It’s then going to hit our front end web servers where the customers will be able to browse our catalog and be able to put things into their shopping carts. From there the traffic then hits the application server middleware, and from there it goes to our database servers at the back end. So that’s the traffic flow.
I’m going to have a firewall in front of my web servers to make sure that traffic can only come in as web traffic on port 80. I’m going to have a local load balancer as well because I don’t just have one web server. I’m going to have more connections coming in than one server can handle.
Also, I don’t want to have a single point of failure. I’m going to have multiple web servers which are all identical copies of each other, they’ve got the same content, and I’m going to put them into our server pool.
The local load balancer in front of them is going to balance the incoming connections to the different servers that are on my Server Farm. I’ve got a global load balancer on the outside as well, I’ll talk about it in the disaster recovery topic below.
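To illustrate the local load balancer and server pool, here's a hedged boto3 sketch using an AWS Application Load Balancer target group. The VPC ID and instance IDs are placeholders, and I've left out the listener that ties the load balancer to the target group to keep it short.

```python
import boto3

elb = boto3.client("elbv2", region_name="ap-southeast-1")

# Create a pool (target group) for the identical front end web servers
target_group = elb.create_target_group(
    Name="web-server-farm",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC
    TargetType="instance",
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# Register the web servers; the load balancer spreads incoming
# connections across every healthy server in the pool
elb.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[
        {"Id": "i-0aaa111122223333a"},   # placeholder web server 1
        {"Id": "i-0bbb111122223333b"},   # placeholder web server 2
    ],
)
```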
So, I’ve got my firewall and my load balancer in front of my front end web servers. I’m going to put my application servers into a different subnetwork because the traffic should never hit the application servers directly from the internet.
I’m going to have a firewall in front of the application servers and they’re going to be in a different subnet. The traffic is only going to be allowed to get to the application server if it’s coming from the web servers and if it’s coming through the correct port number. I’m doing that to secure them.
Again, I don’t just have a single application server. I’m going to have multiple servers to handle the volume of traffic and because I don’t want a single point of failure. Therefore, I’m going to have an application load balancer in front of my application servers to load balance those incoming connections.
At the back end, traffic should never hit the database servers directly from the internet or from the web servers. I'm going to have a firewall in front of them, put them in a different subnet, and in my firewall rules allow traffic only from the application servers on the correct ports.
I don’t have a load balancer in front of my database servers, because for this example application, that is handled within the application itself. I’m going to have at least two database servers because I don’t want to have a single point of failure.
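To make those traffic rules concrete, here is a minimal sketch of the three-tier policy described above, written as plain Python data. The tier names, port numbers, and helper function are illustrative assumptions, not any particular cloud provider's firewall API.

# Illustrative three-tier firewall policy; tier names and ports are assumed examples.
ALLOWED_FLOWS = {
    # (source tier, destination tier): set of allowed destination ports
    ("internet", "web"): {80},
    ("web", "app"): {8080},
    ("app", "db"): {3306},
}

def is_allowed(source, destination, port):
    # A flow is permitted only if it appears in the policy table.
    return port in ALLOWED_FLOWS.get((source, destination), set())

print(is_allowed("internet", "web", 80))   # True: customers reach the web tier on port 80
print(is_allowed("internet", "db", 3306))  # False: the internet can never reach the database tier directly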
Another thing to mention here is that the server farms can be automatically scaled. The web servers and the application servers are identical within their farms, with the same content on them, so I can build an image of them ahead of time.
Then, I can configure a threshold where I say that if the load on my existing servers goes above a certain level, I’m going to automatically spin up an additional server and add it to the server pool.
The load balancer will add it to the servers that it’s going to be sending the incoming connections to. Therefore, I can automatically scale up and scale down the number of servers I have in line with the current demand.
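As a rough illustration of that threshold logic, the sketch below decides how many servers a farm should run. The threshold values and server limits are invented for the example.

def desired_server_count(current, avg_load_pct,
                         scale_up_at=70.0, scale_down_at=30.0,
                         minimum=2, maximum=10):
    # Spin up one more server above the upper threshold, remove one below the lower.
    if avg_load_pct > scale_up_at and current < maximum:
        return current + 1
    if avg_load_pct < scale_down_at and current > minimum:
        return current - 1
    return current

print(desired_server_count(3, 85.0))  # 4 - load is high, add a server to the pool
print(desired_server_count(3, 20.0))  # 2 - load is low, scale back down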
With the traffic flow we’ve discussed there, that was for traffic coming from external customers to do their shopping. We also need to consider management traffic as well, because our own IT engineers are sometimes going to need to get onto those servers to do maintenance.
For incoming management connections, our engineers can either use a virtual private network (VPN) over the internet, or we could set up a direct connection from our office into the cloud provider’s facility.
We need to consider backups in the same way as we would with an on-premises solution. The cloud provider will not automatically back up your data.
This is another bit of a misconception or misunderstanding some people have about Cloud. They think if they have their servers deployed as a Cloud solution, it’s in a hardened data center, there are no single points of failure, and backups will be automatically taken as well.
That is not the case. The service provider is not going to back up your data by default, you need to provide that.
The data center is a hardened facility with no single points of failure if you’ve designed your solution like that, but that doesn’t protect your data against regional disasters, the entire data center going down, or data corruption.
If we look back at the previous slide, you can see my database servers: I've put two of them in there for redundancy, but if my data gets corrupted, it's going to get replicated between both of them.
It’s going to be corrupted on both servers, so having two servers isn’t going to help me, I need to take backups in case I need to do a restore from a previous version.
You have network connectivity to the cloud facility, so one way to configure your backups is to back up to your own on-premises office and use your existing backup solution. You could back up to tape in your office, for example, if you wanted to.
You can also back up to the Cloud Provider’s storage. If you are going to back up your data to the Cloud Provider’s storage make sure you’re backing up to a different data center than where your servers are located.
Again, we might have that regional disaster, if we lose the entire data center, it’s not going to help us much if our backups are also in the same data center. Data should always be backed up to an offsite location.
The next thing to talk about is disaster recovery. If the data center is lost, you’ll be able to recover to a different location, to a different data center from those backups as long as they were stored offsite. In that case, you’ll lose all new data since the last backup was taken.
We’re talking about RPO here, the recovery point objective. What RPO means is in the worst-case scenario, how much data could you lose if you have to restore to a different location?
For example, if you’re taking backups every night, your RPO would be 24 hours, because the worst-case scenario would be you had the disaster just before you take the next backup. So all of the new data that was written today since the last backup is going to be lost.
The best-case scenario would be that the disaster occurs just after we’d taken the back up. When we talk about RPO, it’s the worst-case scenario we talk about. So if you’re recovering for backups and you take a back up every day, your RPO would be 24 hours.
It could take a significant amount of time to deploy the infrastructure in the new location and restore the data as well. Just like we've got the RPO, the recovery point objective, we need to consider the recovery time objective as well.
Using our same example again, let’s say we’re going to just restore from backups, so our RPO is 24 hours. When we do a failover to the new location, it’s not like, bang, we can just click our fingers and everything is going to be back up and running.
We’re going to have to do the restore which is going to take time. We’re also going to have to deploy our new servers. We’re also going to need to configure our firewall rules, configure our load balancing, et cetera, that’s all going to take time.
The RTO is how long it takes to get back up and running again. It's not as easy to calculate as the RPO; if you need to calculate it, you need to do a test recovery. Do a test failover at the new site and see how long it takes you to get back up and running again.
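As a rough sketch of how those two numbers relate to the plan, the example below treats the RPO as the backup interval and the RTO as the sum of the recovery steps. The step names and durations are invented for illustration; real values should come from an actual test failover.

def worst_case_rpo_hours(backup_interval_hours):
    # Worst case, everything written since the last backup is lost.
    return backup_interval_hours

def estimated_rto_hours(step_durations_hours):
    # The RTO is roughly the total time of all recovery steps.
    return sum(step_durations_hours.values())

recovery_steps = {
    "provision servers": 2.0,
    "restore database from backup": 6.0,
    "configure firewall and load balancing": 1.5,
    "repoint global load balancer / DNS": 0.5,
}

print(worst_case_rpo_hours(24))             # 24 - nightly backups mean up to a day of data loss
print(estimated_rto_hours(recovery_steps))  # 10.0 - hours before we're back up and running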
Okay, so if we are just restoring from backups, you can see there that the RPO and the RTO are going to be quite long, and that might not be acceptable. You may want to provision a disaster recovery solution to reduce the RPO and RTO.
On the left is the same Cloud solution that we deployed. This is the main site here, customers are going to be coming in over the internet and we’re going to be hitting our three-tier application in the main Cloud data center on the left. But we want to have a fast disaster recovery solution available as well.
What we’re going to do for that is in a different data center, we are going to provision a web server, application server, and database server, and configure our load balancer and Firewall rules as well.
We’re going to have the infrastructure already set up ahead of time so that if we do have to failover, this is going to give us a fast RTO because we’re ready to failover when we need to.
We are also going to need the data to be available in that disaster recovery site as well, so we’re going to need to replicate the data from the database servers on the left in the main site, to the database server in the DR site.
I don’t need to replicate my web servers and application servers in this example, because they are just using static content, so I can deploy these from images.
The last thing to mention here is my global load balancers. They’re there to direct incoming connections to the correct data center. In normal operations, incoming connections will get directed to the main site.
If the main site goes down, I will failover to the DR site and my global load balancer will direct new incoming connections there. You only need the global load balancer if you’ve got a disaster recovery solution. If we only had our servers running in one site, we wouldn’t need that component.
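The failover behaviour itself is simple to picture. The sketch below is an illustrative stand-in for the global load balancer's decision, with made-up site names and no real health-check mechanism.

def pick_site(main_site_healthy):
    # Send new connections to the main site while it passes health checks,
    # otherwise direct them to the disaster recovery site.
    return "main-data-center" if main_site_healthy else "dr-data-center"

print(pick_site(True))   # main-data-center - normal operations
print(pick_site(False))  # dr-data-center - after failover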
Now obviously if you’re going for this kind of disaster recovery solution rather than just backups, it’s going to be more expensive because you do need to deploy additional infrastructure in the disaster recovery site, but this is going to give you reduced RPO and RTO.
We’re typically not going to deploy exactly the same infrastructure in the disaster recovery site as in the main site, because this is just a backup site, we’ll just put a minimal infrastructure in there to give us the most cost-effective way of doing this.
What is IaaS? Your data center in the cloud: https://www.infoworld.com/article/3220669/what-is-iaas-your-data-center-in-the-cloud.html
Cloud Architecture Principles for IaaS: https://uit.stanford.edu/cloud-transformation/iaas-architecture-principles | <urn:uuid:06561133-8010-40bc-8763-9ed1dc6245f1> | CC-MAIN-2022-40 | https://www.flackbox.com/cloud-iaas-infrastructure-as-a-service-tutorial | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00721.warc.gz | en | 0.935667 | 6,664 | 2.625 | 3 |
In this Cisco CCNA training tutorial, we're going to take subnetting up a stage by working with larger networks. We're going to look at subnetting our Class A and our Class B networks. Scroll down for the video and also the text tutorial.
Subnetting Class A and Class B Networks Video Tutorial
Example 1 – Class B on 4th Octet
For our first example, we've been allocated a Class B network with an IP address of 172.17.0.0/16.
If we subnet that into /29 subnets, we’re going to have 3 bits for host addressing. We’ve got 32 bits in the address, 32 minus 29 gives us our 3 bits.
This will allow us 6 hosts per network because 2 to the power of 3 equals 8, minus 2 gives us 6 hosts. So, a /29 will give us 6 available hosts per network whether we're using a Class A, B, or C network; it will be the same.
In a Class B /16 range, we're going to have 13 bits available for subnetting. If it was a Class C, we would only have 5 bits, because 29 minus 24 gives a difference of 5.
It would give us 5 bits but since it is Class B, we’ve got those extra 8 bits. 5 plus 8 gives us 13 bits and 2 to the power of 13 is going to allow a total of 8,192 subnets.
For the IP address 172.17.10.138/29, what would be the network address, the broadcast address, and the range of valid IP addresses?
We've got the IP address and the subnet mask. For a /29, we draw the line in the last octet after the 8-value bit. In the network portion of the last octet, we've got ones under 128 and 8; adding those together gives 136.
So, the network address is 172.17.10.136. The line is after the 8, so the networks go up in blocks of 8: adding 8 to 136 gives 144, so the next network address is 172.17.10.144. If the next network address is .144, then the broadcast address is 172.17.10.143.
The valid addresses for our hosts fall between the network address and the broadcast address, so that's 172.17.10.137 up to 172.17.10.142.
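If you want to double-check this kind of working, Python's standard ipaddress module does the same arithmetic. The snippet below uses the example address from this section.

import ipaddress

net = ipaddress.ip_interface("172.17.10.138/29").network
hosts = list(net.hosts())

print(net.network_address)    # 172.17.10.136
print(net.broadcast_address)  # 172.17.10.143
print(hosts[0], hosts[-1])    # 172.17.10.137 172.17.10.142
print(2 ** (29 - 16))         # 8192 subnets from the /16 allocation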
The Magic Number Method – Example 1
Another popular way of calculating the network address, the broadcast address, and the host addresses is by using the magic number method.
You’ll see this being cited in quite a few places on the internet. This one is very handy if you’ve been given the subnet mask in dotted decimal notation rather than with a slash.
So a /29, if we wrote that out in dotted decimal notation it’s 255.255.255.248. What you do with the magic number is you take the value in the octet that is being subnetted. So 248 in this case, you take that away from 256. 256 minus 248 gives you 8, and now you know that the network addresses are going to go up in blocks of 8.
In that example, our address was 172.17.10.138. We find which block of 8 the last octet falls in, which starts at 136, so the network address must be 172.17.10.136. Then we add 8 to 136 to get 144, and we know that is where the next address block starts.
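A quick sketch of the magic number calculation itself; the mask values shown are the ones used in this tutorial.

def magic_number(mask_octet):
    # Subtract the interesting octet of the mask from 256 to get the block size.
    return 256 - mask_octet

print(magic_number(248))  # 8  - /29 networks go up in blocks of 8
print(magic_number(240))  # 16 - /28 networks go up in blocks of 16
print(magic_number(224))  # 32 - /27 (or a /19 on the third octet) goes up in blocks of 32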
Example 2A – Class A on 4th Octet
In this example, we'll do a Class A where we're going to subnet on the fourth octet. We've been allocated 60.0.0.0/8. If we apply the subnet mask 255.255.255.240, how many subnets do we have, and how many hosts per network?
We were given the dotted decimal 255.255.255.240, which is the same as a subnet mask of /28. We put the line in after the 16-value bit, and we can see that we've got 4 bits for our host addressing. 2 to the power of 4 equals 16, minus 2 gives us 14 hosts per network.
We have 20 bits for subnetting, which is the difference between the default /8 and the /28 that we're using. 2 to the power of 20 works out to a little over one million subnets.
Example 2B – Class A on 4th Octet
For the IP address 60.15.10.68/28, what are the network address, the broadcast address, and the range of valid IP addresses?
For this example, the line is after the 16-value bit when we draw it out, so the network addresses are going to go up in multiples of 16. The network address is going to be 60.15.10.64, and if I add 16 to that, the next network address will be 60.15.10.80.
Our broadcast address here must be 60.15.10.79, and the range of valid host addresses falls between the network address and the broadcast address: 60.15.10.65 up to 60.15.10.78.
The Magic Number Method – Example 2B
Another way you can do it is by using the magic number. In this is way, you can do it quite quickly in your head. Especially if you were given a subnet mask in dotted decimal notation rather than in slash notation.
Even if you have been given a slash notation, you can convert it to dotted decimal first. Our example was a /28 which is 255.255.255.240. A /28 is going to use the first 4 bits in the last octet. So 4 bits is going to be 128 plus 64 is 192, plus 32 is 224, plus 16 is going to be 240.
Then what we do with the magic number is take that number away from 256. If the number we're subnetting at is 240, then 240 taken away from 256 is going to be 16. Now we know that the address blocks go up in multiples of 16, and we count up until we reach the closest block below our address. The network address must be 60.15.10.64.
We know that the next block starts at 80, so the broadcast address must be 60.15.10.79, and the valid hosts would be 60.15.10.65 to 60.15.10.78.
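The same ipaddress check works for Example 2, using the example address above.

import ipaddress

net = ipaddress.ip_interface("60.15.10.68/28").network
hosts = list(net.hosts())

print(net.network_address, net.broadcast_address)  # 60.15.10.64 60.15.10.79
print(hosts[0], hosts[-1])                         # 60.15.10.65 60.15.10.78
print(2 ** (28 - 8))                               # 1048576 subnets from the /8 allocation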
Example 3A – Class A on 3rd Octet
This time we're going to do a Class A on the third octet. In our example, we've been allocated a Class A 60.0.0.0/8. If we subnet it into /19 networks, how many subnets do we have, and how many hosts per subnet?
In a /19, the line is going to be after 3 bits on the third octet. So that leaves us 13 bits for hosts. 8 in the last octet and then 5 on the right-hand side of the third octet.
To figure out how many hosts each network is going to support, it’s 2 to the power of 13 minus 2 so that’s 8,190 hosts per network.
Since we were allocated a Class A /8 range, the difference between a /8 and a /19 is going to be 11 bits. So to figure out how many networks we have, it’s 2 to the power of 11 that will give us 2048 subnets.
Example 3B – Class A on 3rd Octet
As usual, we've got the second part of the question. For the IP address 60.15.10.68/19, what are the network address, the broadcast address, and the range of valid IP addresses?
For this example, we are subnetting on the third octet; in the other examples, we were subnetting on the fourth octet. The line is after the 32-value bit in the third octet. The network addresses are still going to go up in multiples of 32, it's just that it's now on the third octet rather than the fourth octet.
Our network address is 60.15.0.0. You can see it if you write out the whole IP address. We're at 60.15.10 and we're going up in multiples of 32 in the third octet; 10 is less than 32, so the network address must be 60.15.0.0. The next network address would be 60.15.32.0.
The broadcast address is going to be 1 less than that on the third octet, and 255 on the fourth octet, so the broadcast is 60.15.31.255. The valid host addresses fall between the network address and the broadcast address: 60.15.0.1 up to 60.15.31.254.
In the fourth octet, the low end of the host range is 1 and the high end is 254. The subnetting is done on the third octet whenever the subnet mask is anything between a /16 and a /24.
The Magic Number Method – Example 3B
We can use the magic number method for that example again. It was a /19, so that's 3 bits in the third octet: 128, then 192, then 224. We subtract 224 from 256, which gives us 32, so we know that the address blocks go up in values of 32.
Again, it's on the third octet rather than the fourth octet here. It must be a network address of 60.15.0.0 because our value in the third octet is 10.
The broadcast address is 60.15.31.255, and our valid hosts are 60.15.0.1 up to 60.15.31.254. The address block is figured out exactly the same way; you just need to remember that in the fourth octet, your hosts go from 1 at the low end up to 254 at the high end.
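Again, the ipaddress module confirms the working when the subnetting happens on the third octet.

import ipaddress

net = ipaddress.ip_interface("60.15.10.68/19").network
hosts = list(net.hosts())

print(net.network_address, net.broadcast_address)  # 60.15.0.0 60.15.31.255
print(hosts[0], hosts[-1])                         # 60.15.0.1 60.15.31.254
print(net.num_addresses - 2)                       # 8190 usable hosts per subnet
print(2 ** (19 - 8))                               # 2048 subnets from the /8 allocation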
Subnetting Large Networks – Example 4
You've been asked to subnet the 172.17.0.0 network into 6 different networks. What subnet mask are you going to use?
172.17.0.0 is a Class B network, so the default subnet mask is a /16. We need to split it into 6 networks, so we're going to need 3 borrowed bits.
We need 6 networks, which requires 3 bits. We add 3 to the /16, and that gives us a /19.
Some extra information that we weren't actually asked for in the question:
- A /19 in dotted decimal is 255.255.224.0.
- The network addresses would be 172.17.0.0, 172.17.32.0, 172.17.64.0, and so on, going up in blocks of 32 in the third octet.
- We would have 8,190 hosts in each subnet.
- We’ve got 13 bits available for the host address.
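A short sketch of the Example 4 working, again with the standard library; the allocation used is the example network from this section.

import ipaddress
import math

subnets_needed = 6
borrowed_bits = math.ceil(math.log2(subnets_needed))  # 3 bits gives 8 subnets, enough for 6
new_prefix = 16 + borrowed_bits                        # /19

allocation = ipaddress.ip_network("172.17.0.0/16")
subnets = list(allocation.subnets(new_prefix=new_prefix))

print(new_prefix, subnets[0].netmask)      # 19 255.255.224.0
print(subnets[0], subnets[1], subnets[2])  # 172.17.0.0/19 172.17.32.0/19 172.17.64.0/19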
Subnetting Question Categories
When you're in the exam, there are lots of different ways they can ask you questions about subnetting, but it all boils down to just a few things.
It could be a variation of: given a network requirement of 'X' subnets and 'Y' hosts per subnet, what network addresses and subnet mask should you be using for each subnet?
The other basic question they can ask is if they give you a particular IP address and subnet mask, calculate that subnet’s network address, the broadcast address, and the range of valid host IP addresses.
So again, it could be any variation of those questions. They may be worded differently, but as long as you can answer them, and you can now because we've done plenty of practice examples, you're going to be fine with anything they throw at you in the exam.
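If you want to check your own practice answers, a small helper like the one below (standard library only) answers both question types for any address and mask.

import ipaddress

def summarize(address_with_prefix):
    # Returns the key facts asked for in exam-style subnetting questions.
    net = ipaddress.ip_interface(address_with_prefix).network
    hosts = list(net.hosts())
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "first_host": str(hosts[0]),
        "last_host": str(hosts[-1]),
        "usable_hosts": len(hosts),
    }

print(summarize("172.17.10.138/29"))
print(summarize("60.15.10.68/19"))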
Subnetting practice questions.
IP Addressing and Subnetting for New Users: https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-3.html
Host and Subnet Quantities: https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13790-8.html
Subnetting a Class C network address: https://www.techrepublic.com/article/subnetting-a-class-c-network-address/ | <urn:uuid:3a545198-38d0-4ad1-84ac-5aa334897e4b> | CC-MAIN-2022-40 | https://www.flackbox.com/subnetting-class-a-and-class-b-networks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00721.warc.gz | en | 0.932155 | 2,772 | 4.0625 | 4 |
There are multiple underwater oil plumes in the area surrounding BP’s broken wellhead off the coast of Louisiana, but so far their concentrations seem to be “very low,” according to a report released Tuesday by the National Oceanic and Atmospheric Administration.
NOAA analyzed water samples collected by the University of South Florida’s R/V Weatherbird II between May 22 and 28.
“We have always known there is oil under the surface; the questions we are exploring are where is it, in what concentrations, where is it going, and what are the consequences for the health of the marine environment?” said NOAA Administrator Jane Lubchenco.
“This research from the University of South Florida contributes to this larger, three-dimensional puzzle we are trying to solve, in partnership with academic and NOAA scientists,” she added.
‘Very Low in All Samples’
Samples collected by the R/V Weatherbird II came from three stations located 40, 45 and 142 nautical miles from the well head, respectively. Sampling depths ranged from 50 to 1,400 meters.
Of particular interest in NOAA’s analysis were the levels found of both hydrocarbons and polycyclic aromatic hydrocarbons, or PAHs, some of which are known to be carcinogenic, mutagenic and teratogenic.
NOAA’s analysis found that the concentration of hydrocarbons is in the range of less than 0.5 parts per million, while PAH levels are in the range of parts per trillion.
“PAH levels are very low in all samples, with only five of 25 having reportable concentrations of the priority pollutant PAHs,” noted Steven Murawski, chief scientist for NOAA Fisheries.
Only one of the three sampling stations showed hydrocarbons clearly consistent with the BP oil spill source, NOAA said.
BP CEO Tony Hayward has insisted that there are no such plumes, asserting instead that the oil has remained on the ocean’s surface.
NOAA’s confirmation backs up assertions made by other scientists that the oil plumes are real, though some other findings have produced much more alarming evidence.
A team of researchers led by Samantha Joye of the University of Georgia’s department of marine sciences, for example, recently uncovered an underwater oil plume 15 miles wide, 3 miles long and about 600 feet thick, according to a report in The New York Times. With a core between 1,100 and 1,300 meters below the surface, that plume is reportedly creating conditions close to what might be considered a dead zone.
BP officials did not respond by press time to TechNewsWorld’s requests for comment.
Given that the R/V Weatherbird II made its cruise several weeks ago, “it would be of interest to know the values now,” Kevin Trenberth, head of the climate analysis section at the National Center for Atmospheric Research, told TechNewsWorld.
One big question that remains is what effect BP’s heavy use of dispersants will have, Trenberth noted.
“The strategy of heavily using dispersant is one that is designed to foster the widespread distribution of the pollutant such that the concentrations at most places are not enough to cause problems,” he explained. “The analogy is using taller smoke stacks on chimneys for plumes of smoke from factories to reduce the concentrations at ground level for air pollution, but at the expense of raising background levels everywhere.”
In the atmosphere, one consequence of such practices is global warming, he pointed out, but it’s not known “what those pervasive background values do” in the sea.
“We are deeply concerned about what this oil spill means for the health of the Gulf of Mexico, and for the millions of people who depend on these waters for their livelihoods and enjoyment,” NOAA’s Lubchenco said. “NOAA is using all the scientific methods at our disposal to assess the damage, from satellites in space, planes in the air, boats on the water, gliders under the sea, scientists in the field, and information online.” | <urn:uuid:b392b26b-1569-4ed8-b506-7037e3408977> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/noaa-confirms-streamers-of-oil-deep-in-the-gulf-70165.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00721.warc.gz | en | 0.950211 | 889 | 2.796875 | 3 |
After a series of high-profile cyber security incidents on critical infrastructures, governments and enterprises of such facilities have taken malwares seriously into considerations. Apparently, the malwares or ransomware over the past couple years, such as Stuxnet, WannaCry and Crash Override, have publically exposed the vulnerability of SCADA Networks or Industrial Control Systems in today’s power grid automation, petroleum sites and other critical infrastructures.
The vulnerability can be seen as a loophole in the convergence between two technological paradigms: IT (information technology) and OT (operational technology). IT today is long-established, consisting of open-architecture computing hardware, efficient memory and storage, and network connections that allow traffic to be generated, stored, and exchanged. IT management is therefore familiar with mitigating cyber threats through years of experience. In fact, since IT architecture is commonly situated in headquarters and office settings, most security funding has been invested in this sector.
By contrast, OT systems are implemented at operating sites in the form of PLCs, ICS (industrial control systems), and SCADA. Traditionally, these proprietary assets were built to perform specific tasks in remote isolation, and thus were not designed with security functions. OT systems are also made to operate over a long life cycle, so a considerable number of systems deployed decades ago are still in service. The sunk investment in OT makes it harder to budget for security measures. Therefore, when OT is also connected to the network, the air gaps are closed and these systems instantly become targets for cyber malware.
To fully protect critical infrastructures from advanced cyber malware, it is necessary to establish multi-layer protections covering both the IT and OT segments. In a common digitalized setting for critical infrastructure, OT controls and manages the level 0 to level 2 networks, such as the instrumentation bus, controller LAN, and supervisory HMI LAN, whereas IT monitors and authenticates HQ and office-based assets like web servers, email servers, FTP servers, and the enterprise servers favored by management. In a more advanced model, a DMZ (demilitarized zone) is established as an additional layer of protection for externally facing services.
Protecting today's digitalized and connected critical infrastructures requires a well-converged architecture that can protect IT, OT, and the DMZ. An Israel-based ICS cybersecurity start-up therefore contacted Lanner to collaborate on a hardware-software integrated solution with real-time monitoring visibility and policy-enforced control to protect critical facilities against malicious cyber attacks. In this collaboration, Lanner provides firewall hardware platforms that can fulfill the following requirements in ICS SCADA settings:
Lanner’s Converged IT/OT Cyber Security Solutions
For an aggregated critical infrastructure cybersecurity solution, Lanner introduces an integrated and converged solution pack that includes the LEC-6032C as the rugged industrial UTM in OT, the FW-7525 as the industrial DMZ firewall between IT and OT, and the NCA-4010 as the enterprise firewall for the IT segment.
The LEC-6032 is selected to perform DPI, whitelisting, and virtual segmentation for the protection of assembly-line servers, PLCs, and SCADA systems. It is well suited as a next-generation firewall for the harsh OT environment thanks to physical qualities such as its fanless design, wide operating temperature range, and dual power paths. Processor-wise, the LEC-6032 is driven by a small-footprint Intel® Atom C3845 SoC for power efficiency at ICS and SCADA sites. In case of network disruption, the LEC-6032 offers a LAN bypass fault-tolerant design to provide an alternate traffic route.
The FW-7525 is deployed in the industrial DMZ between IT and OT. It combines a power-efficient multi-core Intel® Atom SoC, rich LAN I/O configurations, a built-in cryptographic accelerator, and hardware-assisted AES-NI instructions. With these qualities, the FW-7525 is an optimal UTM gateway for the DMZ firewall role, deeply inspecting packets, monitoring traffic, and enforcing security policies.
For the IT environment, Lanner selects the NCA-4010 to perform DPI, whitelisting, and firewall tasks. This 1U rackmount appliance is powered by a server-grade Intel® Xeon® D-1500 SoC and ECC-supported DDR4 memory. Bandwidth-wise, the NCA-4010 supports up to sixteen RJ-45 GbE ports and two 10G SFP+ ports, and the bandwidth can be expanded by adding an Ethernet NIC module. Meanwhile, its 19-inch form factor makes it a space-efficient appliance for today's data centers.
Deploying the next-generation firewalls NCA-4010, FW-7525, and LEC-6032 in the IT, DMZ, and OT segments respectively provides a well-converged critical infrastructure cybersecurity solution.
What is Python?
Python is an interpreted, object-oriented, high-level programming language. It uses an elegant syntax, making programs easier to read, and comes with an extensive standard library that makes it usable in a wide range of scenarios.
Why use Python?
Python is easy to use and is considered an industry-standard, general-purpose language for data analytics. Whether it's the best is a source of endless debate in the data science community. You can use Python as your data analytics language with our database, and we've designed the database to be open and flexible, so you can add new data analytics languages as they develop over time.
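As a small taste of that readability, here is a standard-library-only snippet; real analytics work would usually reach for libraries such as pandas, and the numbers are made up for illustration.

sales = [120, 95, 210, 180, 99]

average = sum(sales) / len(sales)
above_average = [s for s in sales if s > average]

print(average)         # 140.8
print(above_average)   # [210, 180]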
When was Python created?
Python was conceived in December 1989 by Guido van Rossum. The name comes from Monty Python's Flying Circus, a favourite of van Rossum's.
Latest Python Insights
Interested in technical topics such as automatic vs. manual software testing, Python, or would just like to get some compelling tech book recommendations? Look no further.
Python is tightening its grip on the world of machine learning, data-centricity has become pivotal for organizations, and new ways have emerged to merge BI and data science.
Data analysis is one of the major components of the various data tools that data-driven businesses use on a daily basis. The availability of languages like R and Python is crucial for data scientists who need to interact closely with data.
Interested in learning more?
Whether you're looking for more information about our fast, in-memory database or want to discover our latest insights, case studies, video content, and blogs, we're here to help guide you into the future of data.
November 30, 2020
What if some of your computers survive a cyberattack?
Imagine that your organization becomes the victim of a cyberattack that paralyzes your information systems. Your IT and security departments work hard to bring the damaged infrastructure back to life and discover that around 10% of your computers were unaffected. They might sigh in relief assuming there’s less work to do to mitigate the consequences of the attack. But this is where they can be totally wrong…
Targeted attacks usually consist of many stages, the last often being a clean-up stage. Adversaries want to delete traces of their presence and malicious activities, so they wipe out the computers they gained control of. This is what happened during the Not Petya cyberattack in 2017 – infected computers all over the world were encrypted with no possibility to restore the data.
In the event of such an attack, the first thing any company will do is try to restore whatever they can – data, software – to renew their business activity. A successful advanced persistent threat (APT) attack against any organization will always result in financial, operational, and reputational losses. Each working computer and server is an asset, and the more information systems are up and running, the better. Under these circumstances, seemingly unaffected computers are usually left unattended.
While investigating the Not Petya cyber invasion in 2017, experts from the ISSP Labs and Research Center discovered that in most companies whose information systems they analyzed, about 10% of computers survived the attack and seemed unaffected. The usual attitude of IT and security teams in respect to these machines was to ignore them and focus on recovering from the disaster. And after everything is fixed, this 10% is usually forgotten about and never analyzed.
It’s hard to blame anybody for thinking and acting this way. When your organization is on the brink of extinction, your first impulse is to bring it back to life, restoring one IT asset after another. It’s a natural and correct response. But the next step – the one that in most cases is ignored – should be taking a very close look at the “unusual 10%.” These computers and servers should be thoroughly analyzed.
Don’t allow adversaries to trick you twice
Obviously, attackers would like to retain control of a hacked organization once they’ve reached their goals. After all, they went through a long, hard, and costly process to penetrate the organization’s infrastructure and take control of its assets. So if some day they need to use its information systems for a different purpose, they won’t want to conquer the now better fortified castle again. They’ll want to use a backdoor they previously left behind. And this backdoor can be left in a certain number of computers that survive the first attack and will later be ignored by the IT and infosec departments.
This is why in the case of a targeted cyberattack, it’s essential to not only analyze and fix the affected IT infrastructure but also to conduct a wholescale investigation and assessment of every IT asset, looking for indicators of compromise and leaving no backdoors for future invasions. | <urn:uuid:c5613f8c-10f2-43ea-953a-c468711fd3a0> | CC-MAIN-2022-40 | https://www.issp.com/what-if-some-computers-survive | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00121.warc.gz | en | 0.960731 | 670 | 2.578125 | 3 |
When I teach Learning Tree’s Cyber Security introduction, participants are often amazed at the ways bad actors can eavesdrop on computers. Most of them are aware of software that can be planted by bad actors that can capture keystrokes, for instance, but few are aware that noises from keystrokes, the vibration of a notebook, and even power fluctuations can be used to capture keystrokes.
A little history
We’ve known for a long time that computers leaked information through the radio waves the electronics put out. The waves are very low power but can be detected. Wired ran an article about this over a decade ago.
In his book Spycatcher, former spy Peter Wright explains how a telephone near a classified teleprinter had been modified so its microphone was always on. The mic listened to the sounds the printer made when a message arrived. Because each letter made a unique sound, the audio could be decoded showing the secret messages!
Powerlines and lasers
In How to use electrical outlets and cheap lasers to steal data Tim Greene of Network World reports on how the attacks work. In the first attack, researchers watched a signal leaked to the ground line of a power outlet when the keys of a keyboard were pressed. The researchers pressed the keys on a keyboard and watched the small signals generated on the ground wire. Each generated a unique signal pattern. They then typed a password on the keyboard and noted which signal patterns appeared. From that, they were able to discover the password.
In the laser attack, the researchers shone a small laser onto a laptop. Each keypress vibrated the laptop differently and caused the reflection of the laser to change with the vibration. From that, they could discern which key was pressed and discover what was being typed.
Using the microphone
There are two interesting attacks using a device’s microphone. The first is quite complex. In it, researchers used the microphone to listen to the noises produced by a monitor’s power supply. The virtually inaudible sounds changed based on what was being displayed! With some AI software, the researchers could decode the sounds with surprising accuracy. In addition, the attack could be carried out from over thirty feet away with the proper type of microphone. An article in ArsTechnica has more details.
Another interesting acoustic attack impacts mobile devices. For this attack, the researchers listened to the sound of a finger typing different virtual keys on the mobile device’s on-screen keyboard. They found that the sounds – particularly on devices with stereo microphones – could be used to identify the location of the finger press and hence the virtual key being “pressed”.
All of these are what are called "side-channel attacks". That is, they attack the device or the system implementation rather than a weakness in the algorithm or software itself. There are many more side-channel attacks than the ones I've mentioned here, of course. I wanted to illustrate that an attacker may not need to plant software on a device to compromise it to some extent. While some of these attacks may be difficult to detect (and difficult to implement, to be sure), others may be doable in a crowded area. High-security organizations have defenses for these, although the details may be classified. For the rest of us, awareness and diligence are the best tools.
To your safe computing, | <urn:uuid:3bea29ce-6230-4b46-b3b1-78144b786a7f> | CC-MAIN-2022-40 | https://www.learningtree.ca/blog/eavesdropping-computers-afar/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00121.warc.gz | en | 0.956609 | 700 | 3.34375 | 3 |
What is Scareware?
A common scareware definition is a cyberattack tactic that scares people into visiting spoofed or infected websites or downloading malicious software (malware). Scareware can come in the form of pop-up ads that appear on a user’s computer or spread through spam email attacks.
A scareware attack is often launched through pop-ups that appear on a user’s screen, warning them that their computer or files have been infected and then offering a solution. This social engineering tactic aims to scare people into paying for software that purportedly provides a quick fix to the "problem." However, rather than fix an issue, scareware actually contains malware programmed to steal the user’s personal data from their device.
Scareware can also be distributed by spam email, through messages that trick people into buying worthless items or services. Hackers then use the details they successfully steal to widen their criminal enterprise that is mostly based on identity theft.
Scareware Ads and Pop-ups
So how is scareware used? Typically through pop-up ads from rogue security providers that may sound legitimate but are fake. For example, rogue scareware or fake software names to watch out for include Advanced Cleaner, System Defender, and Ultimate Cleaner.
Scareware ads, which pop up in front of open applications and browsers, aim to scare computer users into thinking they have a major problem with their device. The hacker uses pop-up warnings to tell them their computer has been infected with dangerous viruses that could cause it to malfunction or crash. Some scareware ads also purport to be scanning the user’s device, then showing them hundreds of viruses that are supposedly present but are actually fake results. Typically, the more menacing or shocking an ad pop-up sounds, the more likely the claims being made are scareware.
Another key feature of scareware is urgency. Hackers attempt to convince users that the supposed device problem requires immediate action and then prompt them to install the program as quickly as possible. Therefore, always be careful with any ad that demands the user to act immediately. It is most likely scareware.
Even more concerningly, scareware ad pop-ups can be particularly difficult for users to remove from their device. Hackers want the fake software to linger on a user’s screen, so they make the close button difficult to find and show even more fake warnings when the user manages to locate and click on it.
How to Protect Yourself from Scareware?
The most effective way for users to protect themselves from scareware is to only use software from legitimate, respected, well-known providers. It is also important to avoid what is known as “the click reflex.” In other words, ignore all unexpected pop-up ads, warnings about new viruses, or invites to download free software that is not from a trusted organization.
If scareware appears on your device, never click the "download" button, and always close the ad carefully. A better option is to simply close the web browser rather than attempt to click on the pop-up ad. This can be achieved with the Control-Alt-Delete command on a Windows device and Command-Option-Escape to open the Force Quit window on a Mac device. If that does not work, perform a hard shutdown of the device.
Another option is to use tools like pop-up blockers and Uniform Resource Locator (URL) filters that prevent users from receiving messages about fake or malicious software. Furthermore, legitimate antivirus software, network firewalls, and web security tools will protect users from the spread of scareware. These tools must be kept updated at all times to provide effective protection from scareware and other types of malware.
Organizations can help employees protect themselves against scareware by providing regular training on how to spot suspicious activity or software. Users must remain vigilant and recognize the telltale signs of a cyberattack, such as suspicious pop-up ads and email messages.
Scareware alerts and pop-up ads signal that a user’s computer has been infected with some form of malware. Removing scareware and any other form of malware involves using a third-party removal tool that can eliminate all signs of the virus infection and then re-enabling the antivirus software the scareware bypassed or disabled to carry out its purpose.
The computer and all software on the device must have the latest patches and security measures from the software provider.
Examples of Scareware
In 2010, the website of the Minneapolis Star Tribune newspaper began serving Best Western ads, which redirected users to fake websites that infected their devices with malware. The attack launched pop-up ads that told users their device had been infected and that the only way to remove it was to download software that cost $49.95. The attackers managed to make $250,000 before being arrested.
Other examples of scareware are targeted at specific devices. For example, Mac Defender is an early form of malware targeting Mac devices, and Android Defender is scareware or fake antivirus software that targets Android phones.
How Fortinet Can Help?
The Fortinet range of next-generation firewalls (NGFWs) helps protect organizations and their users from all forms of malware, including known and emerging security threats. The Fortinet firewalls filter network traffic and use features like Internet Protocol security (IPsec) and secure sockets layer virtual private network (SSL VPN) to keep users secure.
The Fortinet firewall technology enables Internet Protocol (IP) mapping and network monitoring, which provide deeper inspection of content to identify and block cyberattacks, malware infections, and other security threats. It also offers protection at scale and enables future updates to ensure organizations are protected against the latest threats.
What are Scareware and Ransomware?
Scareware and ransomware are both forms of malicious software or malware. Scareware is malware that attempts to scare users into thinking their device has been infected with a virus and then encourages them to quickly download a program to fix it. It usually warns users that their device has a dangerous file or risky content and then offers a solution that will remove the threat. It aims to convince users to download software from a provider they have never heard of.
Ransomware is a type of malware that, when downloaded, encrypts files on a device or locks a device completely. The attacker will then demand payment or a ransom from the victim, promising to unlock the data or device once the transaction has been completed.
How do I get rid of Scareware?
Scareware can be removed using a software tool that removes malware and all signs of a virus infection. The original antivirus software that was bypassed or disabled by the scareware also needs to be re-installed and patched.
How do I know if I have a fake virus?
Scareware is typically used to download malicious software onto a computer. Telltale signs that a virus is present on a device include receiving lots of unwanted pop-up ads or error messages, unexpected freezes, crashes, or restarts, icons unexpectedly appearing on the desktop, sudden device or file lockouts, a computer suddenly running slowly, and web browsers being set to a new homepage or having new toolbars.
Reputable software providers and antivirus vendors do not use scare tactics to force users into downloading their programs. So a good rule of thumb is that any software ad that sounds malicious or threatening and attempts to scare the user into downloading it should be avoided. | <urn:uuid:e306eab8-ff1a-4c67-84f0-d941ae8e9509> | CC-MAIN-2022-40 | https://www.fortinet.com/lat/resources/cyberglossary/scareware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00121.warc.gz | en | 0.937423 | 1,542 | 3.25 | 3 |
Inserts and deletes characters.
sChanged = Stuff (sOrig, iPos, iLen, sReplace)
sChanged: The returned string. STRING.
sOrig: The string in which characters are inserted or deleted. STRING.
iPos: The character position to start inserting or deleting. INTEGER.
iLen: The number of characters to delete. INTEGER.
sReplace: The string to insert. STRING.
Stuff deletes iLen characters from sOrig, and then writes the string sReplace into sOrig beginning at the iPos position.
If iLen is zero, Stuff inserts sReplace.
If iLen is equal to the number of characters in sReplace, Stuff replaces characters starting at iPos. The length of sOrig is unchanged.
If sReplace is an empty string (""), Stuff deletes iLen characters starting at iPos.
If the number of characters in sReplace is less than iLen, the resulting string is shorter than sOrig.
If iLen is greater than or equal to the number of characters in sOrig after iPos, all subsequent characters in sOrig are deleted before Stuff inserts sReplace.
STRING s = "hello there"
Print (Stuff (s,7,0,"OUT "))   // prints: hello OUT there
Print (Stuff (s,7,5,"WORLD"))  // prints: hello WORLD
Print (Stuff (s,6,6,""))       // prints: hello
Print (Stuff (s,1,5,"HERE"))   // prints: HERE there
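For readers who want to experiment with the same behaviour outside 4Test, a rough Python equivalent might look like the following; this is an illustrative sketch, not part of the product.

def stuff(orig, pos, length, replace):
    # Delete `length` characters starting at 1-based position `pos`, then insert `replace`.
    return orig[:pos - 1] + replace + orig[pos - 1 + length:]

s = "hello there"
print(stuff(s, 7, 0, "OUT "))   # hello OUT there
print(stuff(s, 7, 5, "WORLD"))  # hello WORLD
print(stuff(s, 6, 6, ""))       # hello
print(stuff(s, 1, 5, "HERE"))   # HERE there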
With Hurricane Rita promising a reprise of the communications collapse during Hurricane Katrina earlier this month, Federal Communications Commission (FCC) Chairman Kevin Martin told lawmakers Thursday the Internet must become a vital part of the nation’s emergency response system.
According to the FCC, Katrina knocked down more than 3 million customer telephone lines in Louisiana, Mississippi and Alabama. More than 20 million telephone calls did not go through the day after Katrina. Local wireless networks fared no better with more than 1,000 cell sites out of service.
Even if calls had been able to get through, first responders were hamstrung by the fact that thirty-eight 911 call centers went down.
”We should take full advantage of IP-based technologies to enhance the resiliency of a traditional communications network,” Martin told a Senate panel. ”IP technology provides the dynamic capability to change and reroute telecommunications traffic within the traditional network.”
Martin added that when traditional systems fail, IP-based technologies will enable service providers to more quickly restore service and provide the flexibility to initiate service at new locations.
”If we learned anything from Hurricane Katrina, it is that we cannot rely solely on terrestrial communications,” Martin said. ”We should use new technologies so that first responders can take advantage of whatever terrestrial network is available.”
Martin said smart radios would allow first responders to find any available towers or infrastructure on multiple frequencies. He added that Wi-Fi, spread spectrum and other frequency-hopping techniques would allow emergency workers to use limited spectrum quickly and efficiently.
Most of all, Martin urged, any emergency alert system should ”incorporate the Internet, which was designed by the military for its robust network redundancy functionalities.” | <urn:uuid:f639a121-6b05-484e-8b55-da1224cd9a73> | CC-MAIN-2022-40 | https://www.datamation.com/networks/fcc-ip-vital-for-emergency-communications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00121.warc.gz | en | 0.937454 | 357 | 2.53125 | 3 |
The phrase “In the beginning was the Word” may soon take on a whole new meaning for cellphone users, if work being done by a researcher at South Korea’s Sungkyunkwan University pans out.
The scientist, Sang-Woo Kim, has been working on converting sound into electricity.
Among other things, the technology might harness cellphone users’ voices to charge their devices’ batteries while they’re talking, Woo has speculated.
A prototype device he used generated 50 millivolts of electricity from about 100 decibels of sound, The Telegraph reported.
About Woo’s Device
Woo’s device reportedly uses tiny strands of zinc oxide placed between two electrodes, capped off with a sound-absorbing pad on top. The pad vibrates when it’s hit by sound waves, compressing the zinc oxide strands. The strands then decompress, much like springs will when you press down on them.
It’s that movement that generates an electrical current.
Apparently, 100 decibels of sound generated 50 millivolts of electricity. One millivolt is a thousandth of a volt.
Loud passages in classical symphonies exceed 100 decibels, while rock concerts “easily exceed” 130 decibels, Orest Symko, a physics professor at the University of Utah, told TechNewsWorld.
Why zinc oxide? It’s a wide-band gap semiconductor.
Semiconductors have smaller band gaps than insulators. They conduct electricity at room temperature.
A band gap, also called an “energy gap,” is an energy range in a solid where no electron states can exist. It generally refers to the energy difference between the top of the valence band and the bottom of the conduction band in insulators and semiconductors.
Powering Up With Sound
“Turning sound into electricity isn’t anything new,” Carl Howe, director of anywhere consumer research at the Yankee Group, told TechNewsWorld. “Piezoelectric devices have existed for decades. It is the principle used for the microphone built into most cellphones.”
The University of Utah’s Symko has been working on a variant on the conversion of sound to power since 2005. That year, he kicked off a five-year project to convert heat to sound and then to electricity, with collaborators at Washington State University and the University of Mississippi.
The project, named “Thermal Acoustic Piezo Energy Conversion” (TAPEC), uses heat engines developed by the researchers to convert heat into sound. The sound is beamed at piezoelectric devices to convert it into electricity.
Piezoelectric devices compress in reaction to pressure, including sound waves, and change that pressure into electrical current.
“We’re interested in using waste heat from power plants, nuclear power plants, computers and automobiles and converting that into electricity,” Symko said.
TAPEC was initially funded by the U.S. Army. The project is continuing, and Symko is looking to source more funds to keep it going.
Let’s Make Some Noise!
“While the energy currently generated by Dr. Woo’s experiment isn’t enough to charge a battery today, it sounds like he’s optimistic enough about the approach to think it might do so in the future,” the Yankee Group’s Howe said.
This technology falls into the broad category of energy harvesting, or getting power from everyday activities, Howe pointed out. The self-winding watch is an early example of an energy harvesting device, collecting and storing kinetic energy generated by its wearer’s moment, he said.
“What’s changed now is that we can use nanotechnology — very small mechanical structures — to power nanoelectronics, which require less and less power to operate,” Howe stated.
“The hope is that, just like self-winding watches, at some point we won’t have to consciously spend our time recharging our devices, they’ll just be powered by our routine activities,” he added.
Harvesting energy from sound could be combined with other solutions such as kinetic energy harvesting and solar panels, suggested Jim McGregor, chief technology strategist at In-Stat.
“That might be enough to recharge a handset or power a plethora of other low-power devices,” McGregor told TechNewsWorld.
“I think mobile devices or low-power devices would be the obvious focus,” McGregor added. | <urn:uuid:19a934fc-752b-495b-bc43-db5a70cf770d> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/energy-harvesting-tech-could-boost-power-for-cell-yellers-72419.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00121.warc.gz | en | 0.919243 | 971 | 3.484375 | 3 |
Approximately 19 million Americans (6% of the population) do not have access to broadband, according to an FCC Broadband Report. In rural areas, the number jumps to a whopping 25%. There are several reasons Americans live without broadband, but a key issue is the lack of access to an Internet Service Provider (ISP).
In rural areas of the U.S., ISPs struggle to provide broadband. Potential customers often live in sparsely developed locations where it's not economical for ISPs to trench or hang fiber for miles to reach a few customers.
In recent years, there have been advancements in fixed wireless that make it possible to service customers in some of these rural areas with no access to broadband. Fixed wireless is more economical in these scenarios. However, servicing rural customers with fixed wireless still has challenges for Wireless Internet Service Providers (WISPs).
TVWS Cuts Through The Forest
Most of the spectrum available for WISPs is for LOS (Line-of-Sight) applications. But what does this mean for non-Line-of-Sight (NLOS) situations where customers are blocked by foliage or hilly terrain?
Although wireless signals cannot penetrate through rocks, TV White Space (TVWS) wireless technology gives WISPs a way to reach customers through heavy foliage. TVWS operates on the lowest frequency available for fixed wireless (470-790 MHz). This low frequency has excellent penetration power, and allows WISPs to service customers in dense foliage.
Microsoft's Airband Lends a Hand
Microsoft thinks TVWS is the key to providing broadband to rural customers. In 2017, Microsoft started the Airband ISP Initiative to provide resources for ISPs to service subscribers in rural areas. ISPs that are part of the Airband initiative receive preferential pricing for TVWS equipment and billing software. Microsoft also provides training and other benefits.
Corey is a Wireless Engineer for a California WISP that's an Airband member. Regarding the benefits of the program, he explains that:
“Being in the Microsoft Airband ISP program provides us a major discount on TVWS equipment, and also gives us a direct line to an engineer. They’ve been super-responsive over email, as they really want to make TVWS work.”
Interested in learning more? In chapter 3 of our WISP Guide 2019, we provide several examples of WISPs successfully partnering with Microsoft’s Airband initiative to provide customers with broadband behind foliage.
Download our WISP Guide today for more information on TVWS, CBRS and much more! | <urn:uuid:b8c51a62-efa9-4c37-a7e0-2e59ef929dc4> | CC-MAIN-2022-40 | https://blog.doubleradius.com/tvws-and-airband-helping-wisps-reach-more-customers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00322.warc.gz | en | 0.931536 | 524 | 2.65625 | 3 |
Technology is a part of every facet of our lives – from our technologically savvy homes to hybrid careers and everything in between. Over the years the line between our online and offline lives has become indistinguishable, making cybersecurity one of the top concerns for societal well-being, economic prosperity, and national security.
This month is the 18th annual Cybersecurity Awareness Month, which is designed to educate and remind us to be cyber smart all year long. Cybersecurity Awareness Month is brought to you by the U.S. Cybersecurity & Infrastructure Security Agency (CISA) and the National Cyber Security Alliance (NCSA).
Cybercriminals have created an influx of new threats as technologies and businesses have evolved to accommodate hybrid and remote work models in response to the pandemic. Ransomware is now one of the most common threats to any organization and in 2020, ransomware attacks cost $4.4 million on average. Cybersecurity Ventures, an organization that provides research and reports on cybercrimes, predicts that cybercrime, including ransomware, will cost the global economy $6 trillion dollars this year, which represents the greatest transfer of wealth in the history of mankind. By 2025, cybercrime will cost the world’s economy $10.5 trillion dollars annually.
With trillions of dollars at stake and security at risk, cybersecurity has become the number one issue for businesses, governments, and individuals. This month is a collaborative effort between government and industry to ensure everyone has the resources to be safer and more secure online. This year’s theme, Do Your Part, Be Cyber Smart, reinforces the idea that cybersecurity is everyone’s responsibility. There are currently an estimated 5.2 billion internet users, which accounts for over 65% of the world’s population. This number will only grow, making the need for cybersecurity more important than ever.
For over 50 years, cybersecurity has remained a top concern for Compunetix. During that period of time, Compunetix has become the global leader in secure conferencing and collaboration solutions, including VoIP HD and encrypted video, all powered by superior technical design and managed with unsurpassed customer service. Dedicated to customer-focused and innovative technology, Compunetix engineers and manufactures all aspects of its conferencing equipment in-house in the United States.
Compunetix solutions are designed and manufactured at a military-grade security level based on the needs of the U.S. Federal Government. For additional reassurance, Compunetix utilizes an external cybersecurity firm to assess our products for vulnerabilities that could lead to a data security breach. For organizations concerned with public cloud security, our solutions can be deployed on-premises or on a private cloud. We provide this level of security across the board to all our customers to guarantee high-quality, reliable, and secure solutions.
Interested in learning more? Contact us today to speak to a Compunetix expert! | <urn:uuid:f1b3aabb-7491-4c74-8ffd-eebf3df7a2e6> | CC-MAIN-2022-40 | https://www.compunetix.com/october-is-cybersecurity-awareness-month/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00322.warc.gz | en | 0.942859 | 597 | 2.6875 | 3 |
Load balancing has been integral to successfully providing high availability and nearly transparent scalability to web-based applications for almost two decades. Used at first to simply scale web-oriented protocols such as HTTP, once the integration of web-based applications with other key data center protocols became more prevalent, load balancing became more of an imperative for those protocols as well.
Unlike HTTP, which is synchronous and stateless, some protocols—such as Diameter, RADIUS, and Session Initiation Protocol (SIP)—are not only asynchronous but also do not adhere to a single request-reply communication sequence. This makes it more difficult to distribute those protocols because most load balancing systems are designed to operate best in a synchronous messaging environment in which a single request is made and responded to before another is processed.
Some of the most demanding environments require scalability solutions for such protocols. Service providers and financial institutions routinely process multi-gigabits of data in intervals that are unprecedented even on the most highly trafficked websites. And, they demand high levels of availability while simultaneously maintaining strenuous performance criteria. For example, SIP is one of the most commonly used signaling protocols in the world of service providers, but its inherent nature and its reliance on authentication protocols such as Diameter, RADIUS, and LDAP make it very difficult to scale efficiently.
There are several challenges associated with scaling the protocols common to service-provider offerings, and all of them must be addressed to support the scalability and stringent performance requirements of this demanding market.
Traditional web-based protocols—HTTP, FTP, and SMTP—are all synchronous: A client sends a message and expects a response. (See Figure 1.) In most cases, no other requests can be made until a response has been received, which enforces a strict synchronous communication paradigm. This adherence to a request-reply message pattern, although unique for each protocol, makes load balancing such traffic fairly easy and technically similar in implementation.
Protocols such as Diameter and SIP also maintain the one-to-one (1:1) relationship in which there is always a matching reply for every request, but unlike traditional web-based protocols, they do not need to maintain a strict synchronous exchange. Multiple requests may be sent before receiving a reply. This makes intermediaries based on traditional protocols like HTTP unable to handle load balancing responsibilities because they cannot process more than one request at a time and are limited to load balancing on a per-connection basis.
The result is that a Diameter client, for example, may send one or more requests over the same connection without waiting for a response to the first request. It is also possible that responses will be sent to the client in a different order than they were received. (See Figure 2.) This requires that Diameter servers—as well as any intermediary managing Diameter traffic—must be able to handle this process. However, most load balancing systems cannot because they assume a traditional request-reply protocol processing paradigm in which every request is followed by a reply that adheres to the order in which requests were received.
Further complicating support for these protocols is the lack of distinction between client and server roles. In a traditional communication exchange, the client sends requests and the server sends responses. SIP, Diameter, and related protocols erase those lines and allow the server to act essentially as a client, sending requests and waiting for responses. This bidirectional communication exchange is not present in any traditional request-reply protocols and thus is not a behavior typically supported by load balancing solutions that focus on handling HTTP and similar protocols.
In traditional load balancing solutions, load balancing is accomplished at layer 4 (TCP/UDP) on a per-session or connection basis. All requests received over the same session are load balanced to the same server. When communications are complete, the session is terminated and is available for use by another client.
This behavior is not acceptable for some protocols, particularly those associated with service provider and telecommunications implementations. Protocols such as SIP, Diameter, and RADIUS establish longer-lived sessions over which associated protocols and messages will be sent, each potentially requiring processing by a different server. For example, authentication and authorization services are often handled by Diameter but are tunneled through SIP. When the default load balancing behavior assumes all requests sent over the same TCP/UDP connection should be sent to the same server, it does not allow for the processing and subsequent load balancing of individual messages sent over the same session. This means traditional load balancing mechanisms are incapable of supporting the scalability and availability requirements of such protocols.
The alternative to long-lived sessions is to require that each request be made over a new and separate session. Yet, this is impractical because the overhead associated with establishing connections on a per-protocol basis would degrade performance and negatively impact availability. Any load balancing solution supporting protocols such as SIP, Diameter, and RADIUS must be capable of applying load balancing decisions on a per-message basis over the same session.
In a traditional request-reply protocol load balancing scenario, each request can be directed to a specific server based on a variety of parameters, such as the content or request type. This behavior is also desirable in message-oriented communication, but it is typically more difficult to support because of the need to scale intermediaries to open and maintain multiple connections to different servers. Traditionally load balancing maintains a 1:1 ratio between requests and server-side connections, but in a message-oriented protocol such as SIP or Diameter, there is a need to maintain a one-to-many (1:N) ratio instead.
Once the problem of maintaining 1:N ratios is addressed, it is then necessary to attend to the lack of clearly defined message boundaries inherent in message-oriented communications. HTTP, for example, clearly differentiates between messages using specific headers and syntactical control. Diameter, by contrast, is less specific, and it requires an intermediary to distinguish between messages based on recognition of the specific request type and length. This means intermediaries must be able to parse and interpret the data associated with message-oriented protocols before load balancing decisions can be made.
Once a message is extracted from the session, it is then necessary to examine some part of the content to determine the type of the request. In the case of Diameter, this is usually achieved by examining an attribute–value pair (AVP) code representing different requests. AVP codes indicate a variety of specific services related to authentication and authorization of services. Diameter is load balanced on AVP codes even though the requests are sent over the same long-lived session. Because the sessions are long-lived, the load balancing solution must be able to maintain connections to multiple servers (hence the 1:N ratio) and correctly route requests based on the AVP codes contained within the Diameter messages.
The primary requirement to solve the challenges associated with scaling message-oriented protocols such as SIP and Diameter is the ability to extract individual messages out of a single, shared TCP connection. This means that a high-availability solution must be able to inspect application layer data in order to split out individual messages from the TCP connection and distribute them appropriately—including maintaining persistence.
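As a concrete illustration of that requirement (this is a simplified sketch, not BIG-IP or iRules code), the following Python fragment shows how an intermediary could split Diameter messages out of a buffered TCP stream using the length field in the 20-byte message header, read an AVP code, and keep a per-code persistence table. Real Diameter processing must also walk every AVP, honor vendor flags, and handle padding, all of which are omitted here.

```python
def split_messages(buffer: bytes):
    """Split concatenated Diameter messages out of one TCP byte stream.

    Bytes 1-3 of each 20-byte header carry the total message length,
    which is what lets an intermediary find message boundaries."""
    messages, offset = [], 0
    while offset + 20 <= len(buffer):
        length = int.from_bytes(buffer[offset + 1:offset + 4], "big")
        if length < 20 or offset + length > len(buffer):
            break  # malformed or partial message: keep the remainder buffered
        messages.append(buffer[offset:offset + length])
        offset += length
    return messages, buffer[offset:]

def first_avp_code(message: bytes) -> int:
    """Read the code of the first AVP (its first 4 bytes) after the header."""
    return int.from_bytes(message[20:24], "big")

def pick_server(message: bytes, servers: list, persistence: dict):
    """Route each message by AVP code; the same code always maps to the same
    back-end server, giving the 1:N server-side connection model."""
    code = first_avp_code(message)
    if code not in persistence:
        persistence[code] = servers[len(persistence) % len(servers)]
    return persistence[code]
```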
Because so many protocols in the telecommunications arena are message-oriented, F5 has architected a foundation of message-based load balancing (MBLB) that is easily extended via new profiles and iRules. MBLB capabilities provide the core support for scaling protocols such as SIP, Diameter, LDAP, and RADIUS. These capabilities enable BIG-IP Local Traffic Manager (LTM) to perform the disaggregation of messages from a single TCP connection and handle asynchronous message types.
MBLB makes it possible for BIG-IP LTM to implement specific protocol support in a turnkey manner while still enabling customer- and environment-specific customization through iRules. F5 already supports several message-oriented protocols such as SIP, and it now also supports scaling of Diameter.
Base Diameter (RFC 3588) protocol support has been provided on BIG-IP LTM in the past strictly via iRules, but there is now more turnkey support via a new profile option. The Diameter profile enables customers to specify load balancing of Diameter messages based on a single AVP code.
The Diameter profile configuration on BIG-IP LTM allows for messages with specific AVP codes to be directed to a specific server (or pool of servers) that is responsible for handling the requests. (See Figure 3.) The specified AVP code is used to keep Diameter messages persistent on the same server so that clients can maintain state across a long-lived session. At the same time, BIG-IP LTM maintains availability by ensuring no individual server is ever overwhelmed with requests and sessions. Further customization or the use of additional AVP codes to determine routing and persistence can be accomplished using iRules.
BIG-IP LTM is configured with a virtual IP address (proxy) to which the Diameter profile is applied. When a client initiates a session (A) (see Figure 4), the AVP code specified in the Diameter profile configuration is extracted from the message and mapped to a server using the virtual server’s configured load balancing algorithm.
Subsequent messages received over that connection with the same AVP code will be load balanced to the same server. This is the same behavior as is expected with traditional application protocols and load balancing mechanisms involving persistence. Where Diameter and message-based load balancing diverge from traditional load balancing solutions is that messages received over the same connection but containing a different AVP code (B) will be processed in the same way as the original. BIG-IP LTM extracts the code and stores it to map all subsequent messages over that session with that AVP code to the same server.
Traditional load balancing solutions generally expect only one value per client on which to execute persistence-based routing. Because BIG-IP LTM leverages TMOS, the extensible foundation of the F5 BIG-IP platform, it has the fundamental understanding of how to extract and act upon message-level information rather than just connection information. It recognizes that each message may have its own persistence key and may require routing over different server-side connections based on that key.
The choice of server during the initial connection can be determined through the use of standard load balancing algorithms or can be customized using iRules. If more than one AVP code is necessary for purposes of routing requests, then iRules can be used to provide the flexibility necessary.
The ability to load balance some protocols requires a deep understanding of the way in which the applications that use those protocols behave. Protocols that are both asynchronous and communicate bidirectionally are challenging to scale for most load balancing solutions because they do not have the ability to extract and route requests at a message level and are inherently tied by their architecture to load balancing based on connections.
F5 TMOS architecture provides the means by which F5 is able to quickly implement support for message-based protocols such as Diameter, RADIUS, and LDAP. TMOS enables BIG-IP Local Traffic Manager to extract and act upon message-level information, providing turnkey scalability and high availability while maintaining the ability to extend and adapt that functionality through iRules. | <urn:uuid:344d5794-d254-4767-9cc3-5bc195b755b1> | CC-MAIN-2022-40 | https://www.f5.com/ja_jp/services/resources/white-papers/message-based-load-balancing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00322.warc.gz | en | 0.918478 | 2,322 | 2.953125 | 3 |
Cryptography — the practice of taking a message or data and camouflaging its appearance in order to share it securely
What is cryptography used for?
It’s the stuff of spy stories, secret decoder rings and everyday business — taking data, converting it to another form for security, sharing it, and then converting it back so it’s usable. Infosec Skills author Mike Meyers provides an easy-to-understand walkthrough of cryptography.
Watch the complete walkthrough below.
Cryptography types and examples
Cryptography is the science of taking data and making it hidden in some way so that other people can’t see it and then bringing the data back. It’s not confined to espionage but is really a part of everyday life in digital filesharing, business transactions, texts and phone calls.
What is cryptography?
Simply put, cryptography is taking some kind of information and concealing it to provide confidentiality (encrypting) so it can be shared with intended partners, and then returning it to its original form (decrypting) so that the intended audience can use that information. Cryptography is the process that makes this possible.
What are obfuscation, diffusion and confusion?
(00:40) Obfuscation is taking something that looks like it makes sense and hiding it so that it does not make sense to the casual outside observer.
(00:56) One of the things we can do to obfuscate a message or image is diffusion, where we take an image and make it fuzzier, so the details are lost or blurred. Diffusion only allows us to make it less visible, less obvious.
(01:26) We can also use confusion, where we take that image, stir it up and make a mess out of it like a Picasso painting so that it would be difficult for somebody to simply observe the image and understand what it represents.
How a Caesar cipher works
(02:10) Cryptography has been around for a long, long time. In fact, probably one of the oldest types of cryptography that has ever been around is something called the Caesar cipher. If you’ve ever had or seen a “secret decoder ring” when you were young, you know how a Caesar cipher works.
Encrypting using a Caesar cipher
(02:40) I’ve made my own decoder ring right here. It’s basically a wheel with all the letters of the alphabet, A through Z and on the inside, and all of the letters of the alphabet A through Z on the outside, and to start, you line them up from A to A, B to B, C to C.
(02:59) To make a secret code, you can rotate the inside wheel to change the letters from our original, plain text on the outside wheel. We call this substitution. We’re taking one value and substituting it for another. (03:20) Rotating the wheel two times is called ROT two; turning it three times would be ROT three. (03:37) So we can take, like the word ACE, A-C-E, and I can change ACE to CEG. Get the idea? So that’s the cornerstone of the Caesar cipher.
(04:00) As an example, our piece of plain text that we want to encrypt is, “We attack at dawn.” The first thing we’re going to do is get rid of all the spaces, so now it just says “weattackatdawn.” We’ll rotate our wheel five times — it’s ROT five. And now the encrypted “weattackatdawn” is “bjfyyfhpfyifbs.” (04:44) So we now have generated a classic Caesar cipher.
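For readers who want to try this themselves, here is a minimal Python sketch of the same ROT-style substitution; the function name and the ROT value are only illustrative.

```python
def caesar(text: str, rot: int) -> str:
    """Shift every letter forward by `rot` positions, dropping non-letters."""
    out = []
    for ch in text.lower():
        if ch.isalpha():
            # Wrap around the 26-letter alphabet, e.g. 'w' + 5 -> 'b'.
            out.append(chr((ord(ch) - ord('a') + rot) % 26 + ord('a')))
    return ''.join(out)

print(caesar("we attack at dawn", 5))    # bjfyyfhpfyifbs
print(caesar("bjfyyfhpfyifbs", 26 - 5))  # weattackatdawn (rotate back to decrypt)
```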
(04:49) Now there’s a problem with Caesar ciphers. Even though it is a substitution cipher, it’s too easy to predict what the text is because we’re used to looking at words.
How a Vigenere cipher works
(05:32) To make things more challenging, we can use a Vigenere cipher, which is really just a Caesar cipher with a little bit of extra confusion involved. For illustrative purposes, the Vigenere cipher is a table that shows all the possible Caesar ciphers there are. At the top, on Line 0 is the alphabet — from A to Z. On the far left-hand side, it says zero through 25. So these are all the possible ROT values you can have, from ROT zero, which means A equals A, B equals B, all the way down to ROT 25.
Encrypting using a Vigenere cipher and key
(6:17) Let’s start with a piece of plain text. Let’s use “we attack at dawn” one more time. This time, we’re going to apply a key. The key is simply a word that’s going to help us do this encryption. In this particular case, I’m going to use the word face, F-A-C-E.
(06:34) I’m going to put F-A-C-E above the first four letters of “we attack at dawn,” and then I’m going to just keep repeating that. And what we’ve done is we have applied a key to our plain text.
(06:58) Now we’re going to use the key to change the Caesar cipher ROT value for every single letter. So the first letter of the plain text is the W in “wheat” up at the top, and the key value is F, so let’s go down on the Y-axis until we get to an F. Now you see that F, you’ll see the number five right next to it. So this is ROT five.
(07:31) So all I need to do is find the intersection of these, and we get the letter B.
(07:39) The second letter in our plain text is the letter E from “we,” and in this particular case, the key value is A, which is kind of interesting, because that’s ROT zero, but that still works. So we start up at the top, find the letter E, then we find the A, and in this case, because it’s ROT zero, E is going to stay as E.
(08:00) Now, this time, it’s the A in attack. So we go up to the top. There’s the letter A, and the key value is C, as in Charlie. So we go down to the C that’s ROT two, and we then see that the letter A is now going to be C.
(08:19) Now, do the first T in attack. We come over to the Ts, and now the key value is E, as in FACE. So we go down here, that’s ROT four, we do the intersection, and now we’ve got an X. So the first four letters of our encrypted code are B, E, C, X.
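The same table lookup can be written in a few lines of Python. This sketch reproduces the walkthrough above: each key letter is converted to a ROT value (F is 5, A is 0, C is 2, E is 4) and applied to the matching plaintext letter.

```python
def vigenere(plain: str, key: str) -> str:
    """Repeating-key Caesar: the i-th key letter picks the ROT for the i-th letter."""
    letters = [c for c in plain.lower() if c.isalpha()]
    out = []
    for i, ch in enumerate(letters):
        rot = ord(key[i % len(key)].lower()) - ord('a')   # 'f' -> 5, 'a' -> 0, ...
        out.append(chr((ord(ch) - ord('a') + rot) % 26 + ord('a')))
    return ''.join(out)

print(vigenere("we attack at dawn", "face"))  # starts with "becx", as in the example
```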
Understanding algorithms and keys
(08:52) The beauty of the Vigenere is that it actually gives us all the pieces we need to create a classic piece of cryptography. We have an algorithm. The algorithm is the different types of Caesar ciphers and the rotations. And second, we have a key that allows us to make any type of changes we want within ROT zero to ROT 25 to be able to encrypt our values.
Any algorithm out there will use a key in today’s world. So when we’re talking about cryptography today, we’re always going to be talking about algorithms and keys.
(09:31) The problem with the Vigenere is that it's surprisingly crackable. It works great for letters of the alphabet, but it's terrible for encrypting pictures or SQL databases or your credit card information.
(09:53) In the computer world, everything is binary. Everything is ones and zeros. We need to come up with algorithms that encrypt and decrypt long strings of just ones and zeros.
(10:11) While long strings of ones and zeros may look like nothing to a human being, computers recognize them. They could be a Microsoft Word document, or it could be a voiceover IP conversation, or it could be a database stored on a hard drive.
How to encrypt binary data
(10:37) We need to come up with algorithms which, unlike Caesars or Vigeneres, will work with binary data.
(10:45) There are a lot of different ways to do this. We can do this using a very interesting type of binary calculation called “exclusive OR.”
(11:08) For our first encryption, I’m going to encrypt my name, and we have to convert this to the binary equivalents of the text values that a computer would use. Anybody who’s ever looked at ASCII code or Unicode should be aware that we can convert these into binary.
Exclusive OR (XOR) encryption example
(11:38) So here’s M-I-K-E converted into binary. Now notice that each character takes eight binary digits. So we got 32 bits of data that we need to encrypt. So that’s our clear text. Now, in order to do this, we’re going to need two things.
(11:58) First, we need an algorithm and then we’re going to need a key.
(12:09) Now our algorithm is extremely simple, using what we call an exclusive OR and what we call a truth table. This toy "Mike" algorithm uses a five-bit key for this illustration. In the real world, keys can be thousands of bytes long.
(12:41) So, to make this work, let’s start placing the key. I’m going to put the key over the first five bits, here at the letter M for Mike, and now we can look at this table, and we can start doing the conversion. So let’s convert those first two values, then the next, then the next, then the next.
(12:58) Now, we’ve converted a whole key’s worth, but in order to keep going, all we have to do is schlep that key right back up there and extend the key all the way out and just keep repeating it to the end. It doesn’t quite line up, so we add whatever amount of key is needed to fill up the rest of this line.
(13:28) Using the Exclusive OR algorithm, we then create our cipher text.
(13:44) Notice that we have an algorithm that is extremely simplistic. We have a key, which is very, very simple and short, but we now have an absolutely perfect example of binary encryption.
(13:58) To decrypt this, we’d simply reverse the process. We would take the cipher text, place the key up to it, and then basically run the algorithm backward. And then we would have the decrypted data.
What is Kerckhoffs’s principle?
Having the algorithm and a key makes cryptography successful. But which is more important, the algorithm or the key?
(14:30) In the 19th century, Dutch-born cryptographer Auguste Kerckhoffs said a system should be secure, even if everything about the system, except the key, is public knowledge. This is really important. Today's super-encryption tools that we use to protect you on the internet are all open standards. Everybody knows how the algorithms work.
By Stephanie Nebehay and Ludwig Burger
GENEVA (Reuters) – The Omicron coronavirus variant, reported in more than 60 countries, poses a “very high” global risk, with some evidence that it evades vaccine protection but clinical data on its severity is limited, the World Health Organization says.
Considerable uncertainties surround Omicron, first detected last month in southern Africa and Hong Kong, whose mutations may lead to higher transmissibility and more cases of COVID-19 disease, the WHO said in a technical brief issued on Sunday.
“The overall risk related to the new variant of concern Omicron remains very high for a number of reasons,” it said, reiterating its first assessment of Nov. 29.
At least one patient has died in the United Kingdom after contracting the Omicron variant, British Prime Minister Boris Johnson said on Monday.
The WHO said there were early signs that vaccinated and previously infected people would not build enough antibodies to ward off an infection from Omicron, resulting in high transmission rates and “severe consequences”.
It is unclear whether Omicron is inherently more contagious than the globally dominant Delta variant, the WHO said.
Corroborating the WHO’s assessment, University of Oxford researchers published a lab analysis on Monday that registered a substantial fall in neutralising antibodies against Omicron in people who had had two doses of COVID-19 vaccine.
While the antibody defences from courses of AstraZeneca vaccine and BioNTech/Pfizer have been undermined, there is hope that T-cells, the second pillar of an immune response, can prevent severe disease by attacking infected human cells.
THRESHOLD OF PROTECTION?
A number of vaccine recipients did not produce any measurable neutralising antibodies against Omicron, the Oxford researchers said. One of them, Matthew Snape, said it was not yet clear how pronounced the real-world decline in vaccine efficacy will be.
“We don’t know how much neutralising antibody is enough. We still haven’t really pinned down what is the threshold of protection,” Snape said, adding the best advice for the not-yet-vaccinated is to seek an initial course and for those vaccinated to get booster shots.
The Oxford researchers said there was no evidence yet Omicron caused more severe disease.
Their findings were broadly in line with another lab analysis last week on the blood of twice-vaccinated individuals conducted by researchers at the Medical University of Innsbruck, Austria.
The analysis also registered a significant drop in antibodies reacting to Omicron, with many blood samples showing no response at all.
Both the Innsbruck and the Oxford teams said they would widen their research to those who had three vaccine shots.
Pfizer and BioNTech said last week that two shots of their vaccine may still protect against severe disease, because its mutations were unlikely to evade the T-cells’ response.
They also said a third booster shot restored a level of antibody protection against Omicron comparable to that conferred by a two-shot regimen against the original virus identified in China.
The WHO cited preliminary evidence that the number of people getting reinfected with the virus has increased in South Africa.
While early findings suggest that Omicron may be less severe than the Delta variant, more data is needed to determine whether Omicron is inherently less dangerous, it said.
“Even if the severity is potentially lower than for the Delta variant, it is expected that hospitalisations will increase as a result of increasing transmission. More hospitalisations can put a burden on health systems and lead to more deaths,” it said.
Further information was expected in coming weeks, it added, noting the time lag between infections and outcomes.
(Reporting by Stephanie Nebehay in Geneva, Ludwig Burger in Frankfurt, Editing by William Maclean, Robert Birsel and Barbara Lewis) | <urn:uuid:2f61d5d4-2f1a-41e1-85bb-2dd43e0aa054> | CC-MAIN-2022-40 | https://bizdispatch.com/omicron-poses-very-high-risk-but-data-on-severity-limited/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00322.warc.gz | en | 0.958839 | 816 | 2.578125 | 3 |
Earlier this month, a software engineer at Google made headlines after sharing transcripts of a conversation with one of the company's AIs. The incident raises interesting questions: will we ever create a sentient AI – and when we have, how will we be able to tell?
Blake Lemoine had been working with an AI called Language Model for Dialogue Applications (LaMDA), designed to predict and generate natural-sounding language for chatbots based on large quantities of text scraped from the internet.
But he was suspended from his job for publishing conversations with the AI, which he claimed were evidence that it was actually sentient.
Patently, LaMDA is nothing of the sort.
"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent," writes Gary Marcus, a psychology professor at New York University and founder and former CEO of machine learning firm Geometric Intelligence.
"All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient."
What is sentience?
The debate over artificial intelligence goes back to 1950. The British computer scientist Alan Turing proposed that a computer could be said to be intelligent if a human conversing with it could not detect that it was a computer at least half the time.
The definition of sentience, however, is somewhat different, with the Collins dictionary defining 'sentient' as 'having the power of sense perception or sensation; conscious'.
When it comes to attributing sentience to animals, there's some disagreement. In the UK, new legislation will soon come into force attributing sentience to all vertebrate animals and some invertebrates, such as octopuses and lobsters (problematic for those who like to boil them alive).
It goes slightly further than the EU and way further than the US, where there's no federal recognition that animals are sentient at all.
And if we can't agree on whether, say, a dog is sentient, it's hard to see how a consensus will be reached if an AI does ever start to show what might be genuine signs of consciousness.
This hasn't stopped pundits from making predictions about when artificial general intelligence (AGI) might be achieved. Forecasting body Metaculus, which aggregates expert opinion, makes a prediction of 2038.
However, there's huge variation between opinions, with half of AI researchers saying there's a 50 percent chance of high-level machine intelligence by 2040 - but one in five saying that 50 percent probability won't be reached until 2100 or later.
Elon Musk, meanwhile, recently suggested that 2029 might be the year – a claim rubbished by Marcus.
"Current AI is great at some aspects of perception, but let’s be realistic, still struggling with the rest. Even within perception 3D perception remains a challenge, and scene understanding is by no means solved," he wrote.
"We still don’t have anything like stable or trustworthy solutions for common sense, reasoning, language, or analogy."
How can we weed out the zombies?
In philosophy, there's a concept of a zombie – a being that, like the perfect chatbot, can simulate human behavior – but without any consciousness.
In one conversation with LaMDA, head of Google’s AI group Blaise Aguera y Arcas asked LaMDA how it could prove it wasn't a zombie: "You’ll just have to take my word for it. You can’t 'prove' you’re not a philosophical zombie either," was the reply.
In fact, we may be approaching the question from the wrong direction. A conscious AI might, in fact, try rather hard to prove it wasn't – wondering just what we might do to it if we knew…
Business Process Management (BPM)
Business process management (BPM) is a regulation that uses different tools and methods to design, model, execute, monitor, and enhance business processes. A business process correlates the behavior of people, systems, information, and things to bring out the business outcomes in support of a business strategy.
It focuses on placing a congruous, automated process in place for routine transactions and human interactions. BPM helps to bring down the business’s operational costs by reducing wastes and by improving the overall efficiency of the team.
Types of Business Process Management Systems
BPM system can be divided into 3 categories based on the purpose they serve. They are as follows:
- System-Centric BPM or Integration-Centric BPM: This type of BPM handles processes that depend mostly on existing business systems such as HRMS, CRM, and ERP, with little human involvement. Integration-centric business process management software relies on large-scale integrations and API access to create fast and efficient business processes. Online banking is an example of an integration-centric process, since it brings several different software systems together.
- Human-Centric BPM: Human-centric BPM covers processes that are mostly executed by humans and cannot easily be replaced by automation. These typically involve many approvals and tasks performed by individuals. Providing customer service, onboarding employees, conducting e-commerce activities, and filing expense reports are examples of human-centric processes.
- Document-Centric BPM: The document-centric BPM model emphasizes the flow of documents from one team to another. This type of model is useful when documentation is of high importance within the organization. It helps documents flow seamlessly and in an organized way, thereby enhancing the overall processes of the company.
There are five steps in Business Process Management. They are:
1. Design: Business analysts evaluate current business rules, interview numerous stakeholders, and discuss desired outcomes with management. The main goal of the process design stage is to acquire an understanding of the business rules and to make sure the outcomes are in alignment with the organizational goals.
2. Model: Modeling involves identifying, defining, and making a representation of new processes to support the current business rules for the numerous stakeholders.
3. Execute: The business process is executed by testing it with a small group of users first and then opening it up to all users. In the case of automated workflows, artificially throttle the process to minimize errors.
4. Monitor: Key Performance Indicators (KPIs) are established and metrics are tracked against them using reports or dashboards. It's important to track both macro and micro indicators.
5. Optimize: Business Process Optimization (BPO) is the redesign of business processes to rationalize and improve process efficiency and to align individual business processes with a comprehensive strategy.
Benefits of Implementing Business Process Management
Business Process Management helps organizations to advance towards total digital transformation and assist them to conceive bigger organizational goals. Some of the major benefits of using BPM in business are:
- Improved Business Agility: Optimizing and altering an organization's business processes is mandatory to meet changing market conditions. BPM permits organizations to pause their business processes, apply the changes, and re-execute them. Modifying workflows, as well as reusing and customizing them, makes business processes more responsive and gives the organization a deeper understanding of the effects that process modifications have.
- Reduced Costs and Higher Revenues: A business process management tool eliminates bottlenecks, which remarkably reduces costs over time. One effect of this is a reduction in lead time for product sales, giving customers quick access to services and products, which leads to higher sales and better revenue. BPM can also allocate and track resources to lower waste, which likewise reduces costs and leads to higher profits.
- Higher Efficiency: The integration of business processes brings the potential for end-to-end improvement in process efficiency. If the right information is provided, process owners can closely observe delays and allocate extra resources if needed. Removing repetitive tasks and adding automation bring more efficiency to the business process.
- Better Visibility: Business Process Management software enables automation while assuring real-time monitoring of key performance metrics. This improved transparency leads to better management and the ability to change structures and processes efficiently while tracking outcomes.
- Compliance, Safety, and Security: A comprehensive Business Process Management practice ensures that organizations conform to standards and stay up to date with the law. It also supports safety and security measures by documenting procedures appropriately and facilitating compliance. As a result, organizations can direct their staff to protect organizational assets, such as private information and physical resources, from misuse, loss, or theft.
Example Scenarios of BPM
Following are some of the examples of business processes where implementing BPM will emerge in a high return on investment.
- Dynamic processes that require regulatory compliance changes, such as a change in customer information management following changes in finance or privacy laws.
- Complex business processes that require coordination and systematization across multiple business units, divisions, functional departments, or workgroups.
- Measurable mission-critical processes that directly improve a crucial performance metric.
- Business processes that require more than one legacy application for their completion.
- Business processes with exceptions that are managed manually and require a quick turnaround.
We hope this article on Business Process Management (BPM) was helpful in explaining the topic and its nuances, as well as its benefits to customers.
‘Quantum Repeaters’ Needed for Successful Quantum Internet
(Physics.aps.org) As researchers worldwide work toward a potential quantum internet, a major roadblock remains: How to build a device called a quantum repeater.
In the last decade or so, researchers around the world have taken big steps toward building quantum networks. While many groups have started testing small networks tens of miles in size, major obstacles, including the need to develop a key piece of hardware, lie in the way of larger quantum networks.
The same properties that make quantum networks useful present significant challenges. Ground-based networks, whether classical or quantum, often use optical fibers to direct information from place to place in the form of photons. As photons travel through a network, some will be lost over time as a result of impurities in the fibers, weakening the signal. In classical networks, devices called "repeaters" intermittently detect the signal, amplify it, and send it off again. But for information carried by photons in superpositions of states, or qubits, "it's not possible to read the signal without perturbing it."
The key to long-distance quantum communication, researchers say, is to figure out how to build a “quantum repeater” equivalent to the existing classical one. Without a quantum repeater, a qubit would typically only be able to travel through a few miles or up to about 100 miles of fiber—far too little range for widespread networks. | <urn:uuid:526d2738-5e69-4a67-a3fe-e5fb5dd49803> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-repeaters-needed-for-successful-quantum-internet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00322.warc.gz | en | 0.919056 | 311 | 3.765625 | 4 |
A feature that Intel introduced in some of its server processors several years ago to help improve performance in some use cases brought with it a serious security weakness that researchers have discovered can be used to monitor keystrokes across a network and steal sensitive information, without the use of any malicious software.
The weakness is in the Data-Direct I/O (DDIO) feature in some Intel Xeon processors and the attack that researchers from Vrije University in Amsterdam developed allows them to leak information from the cache of a vulnerable processor. The NetCAT attack, as it’s known, can be run remotely across a network and the researchers said it could be used to steal information such as keystrokes in an SSH session as they occur.
“We show that NetCAT can break confidentiality of a SSH session from a third machine without any malicious software running on the remote server or client. The attacker machine does this by solely sending network packets to the remote server,” the researchers from VUSec wrote in their explanation of the attack.
“More precisely, with NetCAT, we can leak the arrival time of the individual network packets from a SSH session using a remote cache side channel. Why is this useful? In an interactive SSH session, every time you press a key, network packets are being directly transmitted. As a result, every time a victim types a character inside an encrypted SSH session on the console, NetCAT can leak the timing of the event by leaking the arrival time of the corresponding network packet.”
The vulnerability that the VUSec team discovered affects Intel Xeon E5, E7, and SP processors that support DDIO and Remote Direct Memory Access (RDMA). Intel has published an advisory on the vulnerability and recommends that customers limit direct access from untrusted networks in an environments where DDIO and RDMA are enabled. DDIO is a feature Intel introduced in 2011 and it’s designed to improve server performance by allowing peripherals to write to and read from the processor’s low-level cache rather than slower traditional memory. The VUSec researchers discovered that they could exploit the way DDIO works to leak sensitive data over the network. Their attack is particularly problematic for cloud providers and data center operators, which rely on shared resources.
"In our example we launch a cache attack over the network to a target server to leak secret information."
“In our attack, we exploit the fact that the DDIO-enabled application server has a shared resource (the last-level cache) between the CPU cores and the network card. We reverse engineered important properties of DDIO to understand how the cache is shared with DDIO. We then use this knowledge to leak sensitive information from the cache of the application server using a cache side-channel attack over the network. To simplify the attack, similar in spirit to Throwhammer, we rely on Remote Direct Memory Access (RDMA) technology. RDMA allows our exploit to surgically control the relative memory location of network packets on the target server,” the researchers said.
“The attacker controls a machine which communicates over RDMA to an application server that supports DDIO and also services network requests from a victim client. NetCAT shows that attackers can successfully spy on remote server-side peripherals such as network cards to leak victim data over the network.”
In a statement, Intel officials said the risk of compromise for most customers is low.
“Intel received notice of this research and determined it to be low severity (CVSS score of 2.6) primarily due to complexity, user interaction, and the uncommon level of access that would be required in scenarios where DDIO and RDMA are typically used. Additional mitigations include the use of software modules resistant to timing attacks, using constant-time style code. We thank the academic community for their ongoing research," the statement says.
The NetCAT attack is somewhat similar to other side-channel attacks that have emerged in recent years, but it does not rely on any user interaction or require the attacker to have compromised the target machine. Rather, the attacker just needs to be able to send packets to the target machine in order to execute the NetCAT attack.
“We assume the attacker can interact with a target PCIe device on the server, such as a NIC. For the purpose of instantiating our attack in a practical scenario, we specifically assume the attacker is on the same network as the victim server and can send packets to the victim server’s NIC, thereby interacting with the remote server’s DDIO feature,” the research paper says.
“In particular, in our example we launch a cache attack over the network to a target server to leak secret information (such as keystrokes) from the connection between the server and a different client.”
What is 5G
First of all, this new global standard – which comes after 1G, 2G, 3G and 4G networks – is going to deliver unprecedented performance and efficiency in supporting not only cell phone services, but also the Internet of Things (IoT) applications. This means that everything and everyone will be more virtually connected, opening the doors to services that were just recently seen as ahead of our time.
The 5G technology is capable of delivering much faster connectivity speeds, enhanced capacity, very low latency, lower power consumption, and increased network reliability. All of these improvements lead to connected industries and better user experience. Furthermore, 5G will be the most secure network, creating a safer space for data transmission, which is especially important for uses in areas such as healthcare and finance.
5G mobile network: applications in different industries
Currently, 5G is already the fastest growing segment in the market of wireless network infrastructure and is expected to drive global growth in the next 10 to 15 years, fostering a wide range of industries.
As a first example, 5G will be able to deliver stable and reliable connectivity for end users even in crowded spaces. More interactive and immersive live and remote events, besides gaming and entertainment experiences with virtual, augmented and extended reality, are part of the 5G revolution.
Precision agriculture will take a step ahead with a wider sensor network and reliable connection that reaches even rural and remote locations. With real-time connectivity, farmers can have more data that result in more productivity and efficiency when growing and harvesting crops, and even have autonomous farming equipment.
In the automotive industry, connected vehicles will be able to reach another level, preventing road collisions, for instance. Some driving processes can also be automated to improve fuel efficiency and to increase the safety of the driver, the passengers and the people in their surroundings.
For the logistics sector, 5G can empower IoT connectivity, providing benefits such as real-time inventory and asset tracking, ultra-high-definition surveillance, and controllable robots for moving, stacking, and organising merchandise.
Manufacturing will also move closer to the future by harnessing 5G. Industrial automation applications that rely on wired networks will migrate to 5G wireless connectivity, resulting in more flexible and mobile robots, more efficient production lines, predictive maintenance, more security for data transmission and safer human-machine collaboration.
Health emergency services, remote medical surgeries, smarter electricity grids, good-delivery drones, immersive remote classes, and smarter workplaces and cities are some of the other examples of uses that the 5G mobile network will enable and improve. The list goes on and on. So, it is undeniable that this technology brings many advantages to businesses and society.
5G mobile network: the next steps for multinational corporations
Companies in different fields are already partnering with mobile operators and technology providers to accelerate 5G implementation and testing for innovative projects. Because it will be the most efficient network in terms of resources, 5G will contribute to reducing energy use, consequently decreasing costs for businesses. Moreover, market predictions indicate that the new network standard will have a huge impact on the global economy in the next decade.
To be a part of this revolution and to make the most out of 5G, multinational corporations should put a strategy in place, considering their business processes, goals, culture and their role in the communities they participate and influence. If these organisations do not plan and act quickly, they risk losing business opportunities, market share to competitors, and the chance of leading their industries in driving innovation. | <urn:uuid:651c0778-16ac-466a-a23e-44b2c75a1bc2> | CC-MAIN-2022-40 | https://www.freemove.com/magazine/5g-mobile-network-how-multinational-corporations-can-make-the-most-out-of-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00522.warc.gz | en | 0.94061 | 721 | 2.890625 | 3 |
A Test Oracle is a mechanism for determining whether the program has passed or failed a test. The use of test oracles involves comparing the output of the system under test, for a given test-case input, to the output that the oracle determines that product should have. A test oracle can be any of the following:
- a program (separate from the system under test) which takes the same input and produces the same output
- documentation that gives specific correct outputs for specific given inputs
- a documented algorithm that a human could use to calculate correct outputs for given inputs
- a human domain expert who can somehow look at the output and tell whether it is correct
- or any other way of telling that output is correct.
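As a simple illustration of the first kind of oracle (a separate reference program compared against the system under test), here is a minimal Python sketch; the sorting example and the names in it are arbitrary.

```python
def system_under_test(values):
    # The implementation whose correctness we want to check.
    return sorted(values)

def reference_oracle(values):
    # An independent, trusted (if slower) implementation acting as the oracle.
    out = list(values)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

def run_test(values):
    # The test passes only if both produce the same output for the same input.
    assert system_under_test(values) == reference_oracle(values), f"failed for {values}"

run_test([3, 1, 2])
run_test([])
```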
Additional Reading: Pen Testing is Not A Luxury And Why You Can’t Afford to Ignore It
What does this mean for an SMB?
If you aren’t proactively reinforcing your cybersecurity measures, there’s a good chance you’ll fall prey to the attacks of cybercriminals. Running a penetration test, can help you address this threat. Pen testing can strengthen your cybersecurity and help you get rid of most of your network’s security gaps. | <urn:uuid:21752c1f-b8a9-4f89-8813-e1bddf5f7b7d> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/test-oracle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00522.warc.gz | en | 0.850043 | 258 | 3.296875 | 3 |
The purpose of security checkpoints is to provide extra security in a specific function, method or business flow. An active security checkpoint on a function means that the user will have to authenticate again, in order to fulfill the function.
Please read more about the concept Security Checkpoints.
There are two development steps to create a security checkpoint. The first step is to create the registration part that defines the security checkpoint and the second step is to add the logic that invokes the security checkpoint from the correct place in the business logic. The implementation is done in PL/SQL in the Server business logic.
How does Security Checkpoint work
Below is a description of what happens when a Security Checkpoint occurs in the code.
- The client sends the business transaction to the server.
- When the server code hits a Security Checkpoint, the server raises a special exception.
- Upon this exception, the server implicitly rolls back the last changes made in the database.
- The client framework handles the exception and brings up the Security Checkpoint authorization page.
- If the Security Checkpoint is passed the client resends the business transaction to the server.
Security Checkpoint Attributes
A security checkpoint has three attributes: a Security Checkpoint ID, a description, and a message.
Security Checkpoint ID
Security Checkpoint ID is also referred to as Gate ID. This is a string value that identifies the security checkpoint. The ID should be unique in the system. The ID should be prefixed with the IFS component where it is defined, for example SINWOF or DOCMAN. If the checkpoint is used in several components it is also possible to use a more general prefix that describes the area.
The description is displayed to the system administrators that activates the checkpoint and monitors the checkpoint logs. It should describe the process that is protected from a business perspective and not describing the actual implementation details.
The purpose of the message is to provide useful information to system administrators who are monitoring the Security Checkpoint log. It is possible to include parameters from the PL/SQL application logic into the message. The "&" character in the message is used to indicate parameters that should be replaced by the actual value in runtime.
Example: Person &PERSON_ID approved Document Revision &DOC_CLASS - &DOC_NO - &DOC_SHEET - &DOC_REV.
The message explains what has occurred and what business object was affected (in this example, a document revision identified by &DOC_CLASS-&DOC_NO-&DOC_SHEET-&DOC_REV was approved). This information will be written to the Security Checkpoint Log each time a user runs a business flow and enters an active checkpoint and the user is authenticated successfully. Note: Avoid including text such as ”checkpoint”, ”gate”, ”authentication”, ”password” etc in the message, as it is redundant text which is common for every single security checkpoint. Also avoid including the fnduser or the transaction date; this information is always added to each log entry.
Register a Security Checkpoint
A security checkpoint is defined in a ".ins" file in the database source folder. The registration is done by calling the method Sec_Checkpoint_Gate_API.Register with a unique ID and a IFS Message that contains the definition of the security checkpoint.
PROCEDURE Register ( gate_id_ IN VARCHAR2, info_msg_ IN VARCHAR2)
The info_msg_ is an IFS Message that includes the description of the checkpoint, the message and a flag indicating whether the checkpoint should be activated by default. Parameters used in the message must also be added; these are added as an IFS Message named PARAMETERS within the main message. The example below illustrates the registration and the structure of the IFS Message.
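The sketch below is illustrative rather than taken from the product: it assumes the standard IFS Message_SYS API for building IFS Messages, and the gate ID and the main attribute names (DESCRIPTION, MESSAGE, ACTIVE) are placeholders; only the nested PARAMETERS message is named in the text above. Consult the product documentation for the exact attribute names.

    -- Illustrative checkpoint registration in a ".ins" file.
    -- Message_SYS and the attribute names below are assumptions.
    -- (In SQL*Plus, SET DEFINE OFF so the "&" parameters are kept literally.)
    DECLARE
       params_  VARCHAR2(32000);
       msg_     VARCHAR2(32000);
    BEGIN
       -- Parameters referenced with "&" in the message text
       params_ := Message_SYS.Construct('PARAMETERS');
       Message_SYS.Add_Attribute(params_, 'PERSON_ID', '');
       Message_SYS.Add_Attribute(params_, 'DOC_CLASS', '');
       Message_SYS.Add_Attribute(params_, 'DOC_NO', '');
       Message_SYS.Add_Attribute(params_, 'DOC_SHEET', '');
       Message_SYS.Add_Attribute(params_, 'DOC_REV', '');
       -- Main message: description, log message and default activation flag
       msg_ := Message_SYS.Construct('SECURITY_CHECKPOINT');
       Message_SYS.Add_Attribute(msg_, 'DESCRIPTION', 'Approval of document revisions');
       Message_SYS.Add_Attribute(msg_, 'MESSAGE',
          'Person &PERSON_ID approved Document Revision &DOC_CLASS - &DOC_NO - &DOC_SHEET - &DOC_REV.');
       Message_SYS.Add_Attribute(msg_, 'ACTIVE', 'FALSE');
       Message_SYS.Add_Attribute(msg_, 'PARAMETERS', params_);
       Sec_Checkpoint_Gate_API.Register('DOCMAN_APPROVE_DOC_REV', msg_);
       COMMIT;
    END;
    /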
Inserting Security Checkpoint into a business flow¶
A BSA should determine whether a security checkpoint is required from a business perspective. From a technical point of view it is important that the security checkpoint comes as early in the business process as possible, since the code before the checkpoint is executed, rolled back when the checkpoint exception is raised, and then executed again once the checkpoint is passed.
The security checkpoint is invoked by calling the method Security_SYS.Security_Checkpoint.
PROCEDURE Security_Checkpoint ( gate_id_ IN VARCHAR2, msg_ IN VARCHAR2)
msg_ is an IFS Message named SECURITY_CHECKPOINT. This message should contain the same parameters as registered, with a value set for each of them. See the example below:
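The following sketch (again assuming the Message_SYS API; the gate ID and parameter names match the illustrative document-approval example above) sets a value for each registered parameter and then invokes the checkpoint early in the business flow:

    -- Illustrative invocation inside the PL/SQL business logic
    msg_ := Message_SYS.Construct('SECURITY_CHECKPOINT');
    -- Set a value for every parameter that was registered for the checkpoint
    Message_SYS.Add_Attribute(msg_, 'PERSON_ID', person_id_);
    Message_SYS.Add_Attribute(msg_, 'DOC_CLASS', doc_class_);
    Message_SYS.Add_Attribute(msg_, 'DOC_NO', doc_no_);
    Message_SYS.Add_Attribute(msg_, 'DOC_SHEET', doc_sheet_);
    Message_SYS.Add_Attribute(msg_, 'DOC_REV', doc_rev_);
    -- Raises the special checkpoint exception if the gate is active and not yet passed
    Security_SYS.Security_Checkpoint('DOCMAN_APPROVE_DOC_REV', msg_);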
Roaming around a playground at an American school, many children can be found sporting the latest TOMS® Footwear. In styles ranging from vibrant pink flip flops to simply patterned navy sneakers, kids pound the soles of their shoes against the dirt and wood chips; they race each other from the swing sets to the slides without any awareness of how truly luxurious it can be to own a pair of shoes. Alternatively, for a child living in a developing country, shoes are not readily available.
Although shoes are a necessity for all children, not all can afford a pair. Less concerned with the latest gadgets and technologies, those in developing countries simply request a pair of shoes to protect their feet, as in Argentina, where children lack access to proper footwear. This means they cannot enjoy experiences like running around on uneven terrain while playing. Instead, their bare feet are left unprotected, prone to injuries, and likely to become infected.
Blake Mycoskie, founder of TOMS Shoes, has chosen to directly address the needs of these children. Practicing corporate social responsibility, Mycoskie developed a business model that extends beyond mere profitability and into the realm of philanthropy. With a goal to better the communities in which these children live, his company provides a pair of shoes to a child in need whenever someone purchases a pair of TOMS Shoes.
In fact, the company does far more than deliver pairs of shoes to countries in need, aiding over seventy countries in increasing health, education, and economic opportunities for children and their surrounding communities. Beginning in 2011, it has supplied thirteen countries with prescription glasses, medical aid, and surgeries, successfully giving the gift of sight to over 275,000 people. Additionally, in 2014, TOMS Roasting Co. was created as an expansion of the company. For each purchase of coffee, TOMS now provides a one-week supply of clean drinking water to someone who needs it. Finally, and most recently, TOMS has tackled the issue of unsafe maternal births by training skilled birth attendants and providing birth kits to mothers.
The multitude of contributions by TOMS to the developing world illustrates that a business can operate efficiently, as well as help those in need. By practicing responsibility, the company has gladly made it a priority to better the lives of others. As TOMS continues to grow, it clearly depicts the benefits of instilling corporate social responsibility in the foundation of a company. | <urn:uuid:d1109178-5c58-4243-8c68-ebd40b35dadc> | CC-MAIN-2022-40 | https://www.givainc.com/blog/index.cfm/2015/8/12/case-studies-in-csr-toms-helping-those-in-need-one-for-one | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00522.warc.gz | en | 0.963657 | 493 | 2.671875 | 3 |
Data is no longer merely a tool used to improve business strategy. Increasingly, data is an asset that drives the growth of organizations, especially in businesses that handle large amounts of personally identifiable information (PII) such as banks and financing companies. Malicious actors target this data because of its value in committing identity theft, defrauding credit card companies, and stealing directly from bank accounts.
Data breaches occur frequently and come with a high price tag. The shift to a digital-first economy that was hastened by the pandemic has increased the average cost of a data breach, which, according to IBM Security’s Cost of a Data Breach Report 2021, went up 10% from 2020 to 2021. The average total cost of a data breach is $4.24 million, which includes the cost of restoring data and services after a breach as well as the cost of lost business.
There’s a striking difference, however, in the average cost for businesses that fully implemented security AI and automation and those that didn’t. The costs for a data breach in organizations that failed to deploy AI and automated security measures were 80% higher than the costs for those that did. These numbers highlight the fact that data security is vital for all businesses, especially those that deal with PII.
Data States and Security Vulnerabilities
Protecting data in all of its states is fundamental to effective data security. Data can change states frequently or remain in one state for its entire lifecycle. Each state presents a different attack surface, where data is vulnerable to attack, clandestine capture, and corruption. A fully-formed data security plan needs to include automated protocols to protect data confidentiality and integrity while it’s in use, in motion, and at rest.
Data at rest
Data that isn’t currently being accessed or transferred is termed “at rest.” Data at rest is stored on a hard drive, a storage area network (SAN), or on off-site backup servers. Because the data has reached its destination, it’s considered stable compared to other data states. Methods for protecting data at rest include encryption, hierarchal password protection, secure servers, and outside data protection services.
Whatever encryption method is used should support AES-256. Some cyberattacks rely on brute force or stealth and can remain undetected for long periods of time. Even attacks that exfiltrate data at thresholds below the alert level can’t crack AES-256 encryption, at least not currently. However, cybercriminals are nothing if not inventive, so data security has to be baked into the entire development process at every step along the way.
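As a rough illustration only, the sketch below encrypts a record with AES-256 in GCM mode using the widely used Python cryptography package; key management and storage, which are the hard parts in practice, are deliberately omitted:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # 256-bit key; in practice this would come from a key management service
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    record = b'{"account": "12345678", "ssn": "000-00-0000"}'   # example PII record
    nonce = os.urandom(12)                                      # unique per encryption
    ciphertext = aesgcm.encrypt(nonce, record, None)

    # Store nonce + ciphertext at rest; decrypt only when the data is needed
    assert aesgcm.decrypt(nonce, ciphertext, None) == record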
Data in motion
When data is being transferred between locations or within computer systems, it’s vulnerable to attack. Data in motion — which also includes data stored in a computer’s RAM that’s ready to be accessed or data moving between cloud storage and local storage — can be protected by encrypting the data before transfer or by encrypting the passage tunnel. Again, the encryption should use AES-256 encryption for the best protection.
Data in use
The most vulnerable data state is data in use. It’s directly accessible to one or more users, so encryption at this point is essential.
Additional methods of data protection for data in use include:
-Implementing a zero trust model including authentication of all users at all phases of access
-Strong identity management protocols
-Well-maintained permission profiles
Financial Industry Data Protection Regulations
In the U.S., the Federal Trade Commission has broad powers of enforcement to protect consumers against unfair or deceptive practices and to enforce federal privacy and data protection regulations. However, there isn’t one overarching data protection legislation. Rather, there are hundreds of various state and federal laws that apply to different industries or situations that are designed to protect PII.
The shift to a digital-first business landscape and the rise in remote working have increased data security challenges. Because of this, regulatory compliance in banking and finance security has become even more vital in recent years.
A few of the most relevant data protection regulations that apply to the financial services and banking industries include:
The Gramm-Leach-Bliley Act (GLBA)
Also called the Financial Services Modernization Act, this legislation was introduced in 1999. Compliance with GLBA is mandatory for financial institutions and requires the following:
-Financial privacy statement issued to consumers that explains what information is collected, how it’s used, and where it’s shared
-A security plan that describes how the institution will protect the client’s nonpublic information
-Safeguards against outside actors manipulating clients into giving up their personal information
The Sarbanes-Oxley Act (SOX)
Passed in 2002, SOX was designed to protect investors from fraudulent reporting after several cases of widespread fraud, including the Enron scandal. SOX compliance involves establishing internal controls to prevent fraud and abuse, protect data privacy, and hold senior managers accountable for the accuracy of financial reporting.
New York Department of Financial Services regulation (NYDFS)
The NYDFS was introduced in 2017 to address the growing threat of cybercrime to financial institutions. It applies to all organizations regulated by the Department of Financial Services and their out-of-state and overseas branches. It requires firms to assess their cybersecurity risk and implement a comprehensive plan to mitigate their risks.
The complexity of data protection and the broad attack surface created by myriad points of access call for software security to be an integral part of all phases of the software supply chain. The widespread adoption of cloud computing, remote access, and open-source code has rendered the perimeter model of software security obsolete.
Instead, organizations need to approach security from a DevSecOps perspective, where security is a primary consideration at every location along the continuous integration and continuous deployment (CI/CD) pipeline. Testing code for security flaws early in the process allows for quicker detection and remediation.
There are two primary methods for testing code early in the CI/CD pipeline that examine uncompiled code for security vulnerabilities. Software composition analysis (SCA) and static application security testing (SAST) can eliminate many code flaws that result in deploying applications that are vulnerable to cyberattacks such as injection, leaks, overflows, and cross-site scripting.
Early detection and remediation help alleviate an array of security vulnerabilities including:
-Broken access control
-Execution with unnecessary privileges
-Vulnerable and outdated components
Kiuwan’s Insights Open Source (SCA) testing scans code for occurrences of open source software (OSS) which might include vulnerabilities. Open-source code is widely used because it’s efficient and allows developers to deploy applications quickly. However, unpatched security vulnerabilities in open-source code are a significant source of risk.
Additionally, without a clear understanding of the often-conflicting licensing terms of various open-source software programs, organizations can put their intellectual property rights at risk. SCA testing can resolve any licensing issues when using code snippets from OSS applications.
Code Security (SAST) combined with SCA allows developers to test and fix code earlier in the process when it can be done with less effort and expense. SAST scans for vulnerabilities before the code is executed to prevent it from deploying and becoming an exploitable weakness. SAST is compatible with all major languages and compliant with all of the most stringent security standards in the financial industry.
SAST integrates with leading DevOps tools across the software development lifecycle, so it’s easy to incorporate security into the DevOps process early on. Automated code scanning allows developers to create an action plan to reduce their security risks. Kiuwan lets teams shift left and bake security into application design. | <urn:uuid:0ab264a6-dd3e-49b0-a276-a030d3220c26> | CC-MAIN-2022-40 | https://www.kiuwan.com/developing-data-security-for-finance-banking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00522.warc.gz | en | 0.92941 | 1,644 | 2.6875 | 3 |
The information technology and information security industries are in a state of constant high alert. A series of new and advanced cybersecurity threats are being constantly unleashed by rogue elements. Hi-tech cyberattacks involve machine learning, AI, malware, phishing, and crypto technology. They place the critical data and assets of organizations and governments at high risk.
According to experts, the acute scarcity of cybersecurity professionals and experts has further aggravated the situation. They warn that the stakes are higher than ever.
Recent studies reveal an increased possibility of data disruption, distortion, and deterioration.
There is an over-dependence on fragile connectivity, which creates the grounds for intentional internet outages and the disruption they cause. Such developments can mean that hackers will use sophisticated tools such as ransomware to hijack the Internet of Things.
Distortion is achieved by the deliberate and planned spread of misinformation. Hackers often use bots and automated sources to achieve their nefarious goals. This results in a lack of trust in the integrity of information.
Some organizations are unable to control and secure their own information because of rapid advances in intelligent technologies. They also have to deal with conflicting demands posed by advancing national security and privacy regulations.
Let’s have a closer look at the most significant cybersecurity risks for data in 2022.
Phishing Becomes More Sophisticated
Phishing attacks are carried out through carefully targeted digital messages that lead people into clicking on a link. The click can introduce malware or provide easy access to sensitive data. In recent years, phishing has also become markedly more sophisticated.
Not many employees are aware of email phishing or the high risk involved in clicking on links that appear suspicious. Meanwhile, hackers are raising the stakes by using machine learning and artificial intelligence to create and distribute convincing fake messages. They use this modus operandi to gain unlawful entry into databases.
Such attacks enable hackers to steal logins, credit card details, and other types of personal financial information. They can also gain access to private databases loaded with critical information.
Ransomware Strategies are Constantly Evolving
Businesses around the world lose billions of dollars to ransomware attacks every year. Hackers are constantly sharpening their cyber weapons to deploy technologies that literally lock up data. Plus, the affected organization loses all access to its database as hackers hold all of the information for ransom. Additionally, the rise of cryptocurrencies like Bitcoin allows ransom demands in anonymous payments, making the job of hackers easier.
Companies are continuing their efforts to build more robust defenses to protect their data from ransomware breaches. That has not deterred hackers as they keep finding newer, more sophisticated ways of targeting businesses and high-net-worth individuals.
Cryptojacking – The Newest Threat
The cryptocurrency movement has affected cybersecurity in multiple ways. A major cybersecurity risk for data is cryptojacking, a fast-growing cybercrime in which criminals hijack third-party home or work computers for the purpose of “mining.” Mining requires enormous amounts of computer processing power, which is immensely expensive. By surreptitiously piggybacking on other systems, criminals can mine Bitcoin and other currencies at zero cost. Businesses with cryptojacked systems can suffer severe performance issues and costly downtime.
Cyber-physical attacks are powered by the very technology used to streamline and automate critical infrastructure; this is the flip side of that technology. This type of threat can result in attacks targeting electrical grids, water treatment facilities, transportation systems, and more.
The Internet of Things is becoming more pervasive with each passing day. The devices connected to the IoT are expected to reach a whopping 75 billion by 2025. Apart from laptops, tablets, and routers, these connected devices include other things. These are webcams, smart watches, medical equipment, automobiles, household appliances, and even home security systems.
Connected devices are helpful not only for consumers but for companies as well. They now use them to cut costs by collecting enormous volumes of insightful data. They also help streamline business processes. However, more connected devices indicate heightened cybersecurity risk. When hackers get control of IoT devices, they can run amok. The whole system can be held to ransom by overloading networks or locking down essential equipment.
Risk from Third Parties
Third parties associated with a business, such as contractors and vendors, pose a considerable risk. Additionally, most companies do not have a secure system or dedicated team to manage third-party employees. Nearly 60 percent of data breaches are believed to originate from a third party. Certainly, with cybercriminals using advanced features and tools, third-party vulnerability can become a massive threat to organizations.
Hackers are upgrading their technology continuously by exploring more sophisticated methods. Of late, they are also leveraging the power of psychology to dupe unsuspecting victims. They look closely for the weak link found in nearly all organizations: human psychology. They use various media, including phone calls and social media, to trick people into providing access to sensitive information. These social engineers know how to exploit such weaknesses to get their hands on valuable data.
It is apparent from various research projects that cybercrime has become an epidemic of sorts. The frequency of cybercrime has escalated rapidly in recent years. Indeed, companies are struggling to hire qualified cybersecurity professionals to safeguard against the growing threat. That’s why it is hard to see this problem getting controlled in the near future.
The huge scarcity of trained and competent cybersecurity specialists is a cause for alarm. With a robust, well-trained digital workforce, companies are far better placed to combat the sophisticated cybersecurity risks for data emanating from multiple sources.
Practicing Good Cyber Hygiene to Avoid Holiday Cyber Attacks
By Dr. Bob Duhainy, Walden University Doctor of Information Technology core faculty member
While millions of holiday shoppers will be spending money on the best gifts for their loved ones, cybercriminals will be highly active due to the huge increase of online financial transactions, increasing the chances of stealing confidential information.
Security experts at Carbon Black caution that individuals can expect to see more attempted cyberattacks starting with Black Friday and continuing through the holiday shopping season. Experian also reported that 43% of consumers who had their identity stolen say it happened while shopping online during the holidays. To stay safe online during the holiday shopping season, take the following cybersecurity steps:
Understand your threats. It is important to realize the various vectors malicious actors are utilizing so you can properly defend yourself. Threat intelligence is an important service that information security professionals and everyday users should leverage for better protection. Before stepping out the door, examine current activity and take precautions to protect yourself. For example, the Cybersecurity and Infrastructure Security Agency (CISA), an entity within the Department of Homeland Security, provides free and up-to-date current activity and alerts, as well as a weekly vulnerability summary. CISA will also provide updates to state-run activities, which security professionals can digest and make useful to their organization, friends, and family.
One currently active alert from April 2018, coded under GRIZZLY STEPPE for malicious Russian cyber activity, denotes a vulnerability with Simple Network Management Protocol (SNMP) enabled network devices. Russian actors can extract device configurations, collect login credentials and impersonate privileged users, among other actions. By understanding threats at this level of detail, you can implement the appropriate mitigations. Make threat intelligence actionable intelligence.
Zombie devices to botnets. While your local coffee shop may seem like a great place to get some online shopping done, remember that free, public wireless networks make it easier for cyber hackers to obtain your information. Make sure you use WPA-2 for authentication and 802.1x for remote access over an IPsec/VPN tunnel before entering your personal information for an online order. Be aware of the VPN you use, because not all VPNs are 100% secure. As of October 2019, an advanced persistent threat (APT) was discovered exploiting vulnerabilities in Palo Alto, Fortinet and Pulse Secure products that allow actors to collect credentials. These stolen credentials can later be used to access a root shell for increased privileged activity.
Metadata spoofing attacks. Cybercriminals exploit vulnerabilities in web-based applications, especially when the applications are outdated. Keep up with the latest security updates on your computers, browsers and mobile devices. Setting your antivirus software to auto-update will also help safeguard your computer from the latest viruses.
In addition, ensure that you remain abreast of threat intelligence and immediately apply required patches to your systems. Many examples have been encountered in which patches were available but had not been installed as directed, resulting in an information assurance event or compromise. Think back to the Marriott, Target, and Office of Personnel Management hacks – each could have been prevented by appropriate patching.
Authentication attacks. Once malware has been unleashed onto a device, cybercriminals use brute force attacks to break passwords saved in encrypted form. Never save your personal information – including your name, passwords, address and credit card information – using the remember me feature on shopping websites. Logging out of your account after each purchase ensures that your personal information won’t be compromised if your online retailers have a data breach. People often make the mistake of using the same password for multiple applications, or of merely adding a number or symbol to the original password after it expires. With an inverse correlation between security and convenience, users must carefully assess their circumstances. A trusted password manager can assist in producing strong and random passwords.
Masquerading emails. During this time of the year, email marketing campaigns are prevalent, and phishing is an area of major concern. The Anti-Phishing Working Group (APWG) reported that phishing attacks have increased to their highest levels since 2016. The Simple Mail Transfer Protocol does not possess the necessary mechanisms to verify legitimate email addresses, and users cannot rely on Domain-based Message Authentication, Reporting and Conformance (DMARC) alone to protect them from spoofed emails. As you’re sifting through your inbox for the best holiday sales, verify the sender’s credentials before clicking any links. Even e-mail attachments and links forwarded from trusted entities may contain malicious code. Get in the habit of manually typing websites into your internet browser to avoid any unforeseen cyberattacks.
In addition to phishing, malicious actors are also exploiting vishing and smishing approaches. Always be suspicious of unsolicited information requests and make sure you know where your information is going.
Use multi-factor authentication. Whenever possible, use multi-factor authentication (MFA) to secure your devices and data. Use at least two of the three possible authentication methods: something you know, something you have and something you are. Going one step further, adaptive MFA applications can compare locational data, travel patterns, device context, and network context. By combining these results with a developed baseline, adaptive MFA can further secure your systems. Make use of MFA to make it more difficult for attackers to compromise your data and systems.
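As a small illustration of the "something you have" factor, the sketch below generates and verifies time-based one-time passwords with the pyotp library; any standard TOTP implementation behaves the same way:

    import pyotp

    # Shared secret provisioned once, typically via a QR code in an authenticator app
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()              # 6-digit code that changes every 30 seconds
    print("Current code:", code)

    # Server side: check the code the user typed in
    print("Valid:", totp.verify(code))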
In addition to these tips, be discreet about any upcoming holiday travel plans. Do not share your location or go live on social media while you’re on vacation. Using the check-in feature makes you more vulnerable to digital and physical consequences, as this lets hackers know where you are and where you aren’t. Hold off on sharing your vacation details online until after your trip is over.
Good cyber hygiene, the practice of proactive cyber safety habits, is the best way to protect your information from online criminals. Adopting these cyber safety tips during and beyond the holiday season will greatly reduce your probability of becoming a victim of cybercrime.
About the Author
Dr. Bob Duhainy, a core faculty member with Walden University’s Doctor of Information Technology program, has nearly 30 years of experience in technology and computer security. He teaches a variety of courses in data communications and computer security online. He is involved in various security-related research projects, including advanced authentication techniques interoperability, nefarious code detection, and system vulnerability assessments. Dr. Duhainy received training from the National Security Agency (NSA), Federal Bureau of Investigation (FBI), United States Secret Service (USSS), Central Intelligence Agency (CIA), Director of National Intelligence (DNI) and Department of Homeland Security (DHS) on various topics. He is also an active member of IEEE, ACM, AFCEA, Cisco Networking Academy, ISC2 and the FBI-InfraGard. | <urn:uuid:bf6956d7-d2a3-41f6-90ae-ce48114241f2> | CC-MAIN-2022-40 | https://www.cyberdefensemagazine.com/cyber-safety-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00522.warc.gz | en | 0.920408 | 1,413 | 2.625 | 3 |
A container is a piece of software used to virtually package and isolate applications to allow greater scalability, availability, and portability across diverse computing environments, including bare-metal systems, cloud instances, virtual machines (VMs), Linux, and select Windows and macOS operating systems.
Unlike a virtual machine, which includes both a runtime system and a guest operating system for its application, a container includes only a runtime system, and instead relies on the operating system of the host. This reduces the memory, CPU, and storage required by the container, making it possible to support many more containers on the same infrastructure. While a virtual machine might be several gigabytes in size, a container can be as small as a few dozen megabytes.
The efficiency of containerization, also known as OS-level virtualization, makes it a popular method for ensuring the portability of applications across environments, such as from development to test, from staging to production, or from a physical server to a virtual machine in the cloud. The speed and simplicity allowed by containers are an ideal fit for DevOps.
Managing an IT portfolio, or even one application, across a multi-cloud environment requires centralized tools for global monitoring, traffic analysis, and load balancing to ensure security and optimal user experiences around the world. Container architectures along with Kubernetes container orchestration provide an additional layer of complexity.
With no need to boot up its own operating system, a containerized application can be started almost instantly—much faster than a virtual machine—and disappear just as quickly when it is no longer needed to free up host resources.
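As a small illustration of that speed, the sketch below starts and removes a containerized web server using the Docker SDK for Python (one of several container runtimes and client libraries); the image name and port mapping are arbitrary examples:

    import docker

    client = docker.from_env()

    # Starting a containerized web server: no guest operating system has to boot
    web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
    print(web.short_id, web.status)

    # Tear it down just as quickly to free host resources
    web.stop()
    web.remove()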
Products such as the Docker container platform and the Kubernetes container orchestration system have simplified the adoption of containerization and fueled its rapid growth. At the same time, containerization security issues have come to light, including the fact that application containers are not abstracted from the host OS on a VM, which can make it easier for security threats to access the entire system.
As containers are deployed across a cluster, organizations need to be able to ensure that the applications running within them are always secure, available, and running well. Thunder® Application Delivery Controller (ADC) optimizes the delivery and security of container-based applications and services running over public clouds or private clouds by load-balancing containers, securing communication with containers, monitoring containers and the cluster as a whole, and enabling continuous upgrades for microservices inside containers without bringing down the service.
As you may know already, there are some security concerns when using public Wi-Fi networks such as hotspots at cafes, hotels, restaurants, and other public places.
Wi-Fi wasn’t specifically developed for public access. Other than on certain national hotspot providers such as T-Mobile, wireless encryption (WPA/WPA2) isn’t used on hotspots. Wi-Fi encryption isn’t as practical for hotspots as it is for private networks in homes and businesses. Plus, the sharing aspect of Wi-Fi works against us on public networks; you don’t want to share files with strangers.
In this article, I'll discuss exactly how to secure your computer and communications while using Wi-Fi hotspots. Though wireless networking technology isn't designed for public use, it can still be safe and secure if hotspot providers and users follow a few precautions:
Use Secure Browsing and emailing practices
Just like when on the web at home or work, you should follow basic Internet security practices while using Wi-Fi hotspots. Many of the Internet protocols and services we use day-to-day are inherently insecure by default.
The login and communications for services such as HTTP web browsing, POP3/SMTP email, IMAP email, Telnet command-line access, and FTP file transferring are not encrypted and are sent and received in clear-text.
At home and work, the communications of these clear-text services can be encrypted and secured from local Wi-Fi eavesdroppers by using WPA or WPA2 encryption.
However, most Wi-Fi hotspots don't use encryption. For this reason, you should follow the practices described in the following sections.
Use HTTPS/SSL for Sensitive Logins and Sites
Make sure that any website you log in to is using Secure Socket Layer (SSL) encryption. The URL should begin with https instead of just http, and the browser should display a padlock, green address bar, or other notification.
Secure POP3/SMTP/IMAP Email Connections with SSL
If you use an email client such as Outlook or Thunderbird with the POP3, IMAP, or SMTP protocol, you should try to use it with SSL encryption.
Whether or not you can use encryption depends upon your email server or service. If it's supported, you can set it up on your email client. If the server doesn't support it, see if you can access your mail via the web (using HTTPS/SSL), at least when using public networks.
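As a rough sketch of what SSL/TLS-protected mail connections look like programmatically, the snippet below uses Python's standard imaplib and smtplib; the host names, ports, and credentials are placeholders, and your provider's settings will differ:

    import imaplib
    import smtplib
    import ssl

    context = ssl.create_default_context()

    # Read mail over IMAPS (port 993) instead of clear-text IMAP
    imap = imaplib.IMAP4_SSL("imap.example.com", 993, ssl_context=context)
    imap.login("user@example.com", "app-password")
    imap.select("INBOX")
    imap.logout()

    # Send mail over SMTPS (port 465) instead of clear-text SMTP
    with smtplib.SMTP_SSL("smtp.example.com", 465, context=context) as smtp:
        smtp.login("user@example.com", "app-password")
        smtp.sendmail("user@example.com", "friend@example.com",
                      "Subject: hello\r\n\r\nSent over an encrypted connection.")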
Use SSH Instead of Telnet
If you must remotely connect to a computer or server while on a public network, use a secure remote access protocol such as SSH.
Use SFTP/SCP Instead of FTP
Though it’s usually easier to use plain FTP when downloading or uploading files from servers, it’s not secure. As with the other clear-text protocols, Wi-Fi eavesdroppers can capture the login credentials and the transferred data of FTP connections.
You should use SSL encryption with FTP connections, which must be supported by the server and the client. You might also look into using the SCP protocol. | <urn:uuid:9b069f5c-70e8-44b7-8304-62ba5572073f> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=1571984 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00522.warc.gz | en | 0.91897 | 647 | 2.859375 | 3 |
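If you script your own transfers, a minimal SFTP sketch with the paramiko library looks like the following (host, credentials, and paths are placeholders); the login and the file contents all travel inside the encrypted SSH channel:

    import paramiko

    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()          # verify the server's host key
    ssh.connect("files.example.com", username="user", password="secret")

    sftp = ssh.open_sftp()
    sftp.put("report.pdf", "/upload/report.pdf")     # upload
    sftp.get("/download/notes.txt", "notes.txt")     # download
    sftp.close()
    ssh.close()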
Nvidia’s Digital Twin Project to Simulate Earth to Predict Climate Change
Nvidia plans to build a digital twin of planet Earth using its Omniverse platform to forecast climate change and its worst impacts decades in advance. The company made the announcement at last week’s GTC conference.
Nvidia’s Earth-2 supercomputer would simulate Earth at meter-scale resolutions. Existing climate simulators capture imagery of every 10 to 100 square kilometers of Earth’s surface, Nvidia said, meaning attributes within smaller areas are lost. Nvidia’s vision requires processing chips far more powerful than the products available today.
The company plans to underwrite its climate change digital twins project without outside investment. It sees its Earth-2 initiative as roughly akin to its Cambridge-1 supercomputer, an artificial intelligence virtuoso dedicated to health care research.
Nvidia’s climate simulator would be underpinned by its Omniverse platform, which contains artificial intelligence-led tools downloaded by 70,000 users.
“Omniverse is a real-time simulation and collaboration platform for creating physically accurate virtual worlds and digital twins to help solve some of the world’s hardest problems,” said Richard Kerris, Nvidia’s vice president of Omniverse development.
“A digital twin is true to reality simulation of a physical world in the digital one,” Kerris told IoT World Today “We showed enterprise examples with our customers of digital twin factories with BMW, city blocks with Ericsson and forest fire environments with Lockheed Martin.”
Nvidia’s Earth-2 would use a combination of accelerated graphical processing units, deep learning models and physics-inspired neural networks to faithfully mimic various physical environments.
Nvidia already offers virtual and augmented reality rendering for its digital worlds through Omniverse, and unveiled a Pixar-like facsimile of its CEO Jensen Huang, portrayed in a digital twin of his kitchen at GTC last week. Digital twins represent a market opportunity that’s already here.
Nvidia launched the enterprise edition of Omniverse in April and announced its general release last week.
Others are busy carving themselves a foothold. In July, digital twin software firm Matterport went public at a $2.9 billion valuation. It launched an AI platform for creating digital twins on smartphones, a step toward democratizing the technology.
For Internet of Things practitioners, the upshot should be far better capabilities when building out IoT systems that require advanced or critical autonomy.
Digital twins that relay the net impact of sensor data in the physical world are nothing new. However, Nvidia believes turbocharged simulators are on track to support digital twin-led development of software for robots and self-driving cars. | <urn:uuid:a15fa3ab-6e29-4abb-930d-90fa977bc9dc> | CC-MAIN-2022-40 | https://www.iotworldtoday.com/2021/11/15/nvidias-digital-twin-project-to-simulate-earth-to-predict-climate-change/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00722.warc.gz | en | 0.914519 | 568 | 3.015625 | 3 |
Password Best Practices
Passwords are used to protect access to your account from unauthorized users. When coming up with passwords to various accounts, there are standards and best practices to follow so that your accounts are best protected.
- String together 4 random words, for example: copper-violin-sunset-harbor (a short generator sketch follows this list).
- Use a minimum of 12 characters in your passphrase. The longer your password, the better.
- Use a different password for each site you log into. This ensures that if another site is breached or your password is leaked somewhere, it can’t be used to log into another site.
- Avoid these mistakes:
- Using single dictionary words, spatial patterns (for example: qwerty, asdf), repeating letters, or sequences (for example: abcd, 1234).
- Capitalizing only the first letter.
- Substituting letters with common numbers and symbols.
- Using years, dates, zip codes.
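A short sketch of generating such a passphrase with Python's standard secrets module is shown below; the wordlist here is a tiny stand-in, and a real list such as a published diceware list has thousands of entries:

    import secrets

    # Tiny stand-in wordlist; use a large published wordlist in practice
    words = ["copper", "violin", "sunset", "harbor", "meadow", "lantern", "pebble", "orchid"]

    passphrase = "-".join(secrets.choice(words) for _ in range(4))
    print(passphrase)    # e.g. "violin-harbor-copper-lantern"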
Password management tools are helpful in storing and organizing your passwords so that you don’t have to memorize all of your unique passwords. Many enable you to sync your passphrases across multiple devices and can help you log in automatically. These password managers encrypt your password library with a master password that becomes the only thing you just need to remember.
Enabling two-factor or multi-factor authentication provides an additional layer of security to ensure that you’re the authorized user logging into your account. Not all applications provide two-factor authentication, but when it’s available, it’s in your best interest to set it up. You can enable two-factor authentication on HackerOne under your profile’s Settings > Authentication. | <urn:uuid:b62e5616-133d-49bd-b65a-0105d1e251e6> | CC-MAIN-2022-40 | https://docs.hackerone.com/hackers/passwords.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00722.warc.gz | en | 0.865949 | 351 | 3.25 | 3 |
Hollywood scriptwriters and political leaders paint vivid pictures showing the dangers of cyber-war, with degraded communications networks, equipment sabotage, and malfunctioning infrastructure. But the latest nation-state attacks appear to be aiming for the intangibles—with economic, political, and social impact. So long as the default assumption focuses on things exploding, we remain unprepared to deal with the fallout from information warfare.
The latest revelations—and indictments—about the Russian involvement with the 2016 United States presidential election highlights how digital attacks can be used against intangible targets, such as faith in social and traditional media, and confidence in the integrity of elections.
Disinformation is the latest weapon in the arsenal. The Soviets were really good at information warfare during the Cold War and the Russians today are proving to be quite adept at using the Internet to speed up the spread of false information without being obvious about their activities.
Former Defense Secretary Leon E. Panetta raised the spectre of a “cyber-Pearl Harbor,” an audacious digital attack with far-ranging consequences in the physical world, in a 2012 speech at New York’s Intrepid Sea, Air and Space Museum. “They could, for example, derail passenger trains, or even more dangerous, derail passenger trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country,” Panetta said at the time.
Hybrid warfare blends conventional weapons, economic coercion, information operations, and cyber attacks.
Cyber-attacks can have physical impact, but many of them tend to be more disruptive than destructive. Malware can take production plants offline, shut down the electric grid and cause power outages, and disrupt operations by compromising safety controllers in industrial control systems. The list of recent nation-state attacks—which includes the Russians going after Yahoo and the Chinese in the breach of Office of Personnel Management—show that actors are targeting commercial networks and public infrastructure using a variety of different methods.
Or perhaps not. Organizations may still be thinking about physical damage or impact, but attackers appear to have moved on to less overt activities that can still sow confusion and cause chaos. The combination of traditional methods and digital attacks lets the attackers still maintain a degree of deniability.
“Thus far, however, cyber weapons seem to be oversold, more useful for signaling or sowing confusion than for physical destruction,” Joseph Nye, former assistant secretary of defense and current professor at Harvard University, recently wrote in a column on Project Syndicate. "More a support weapon than a means to clinch victory.”
New Attack Tactics
Nye wrote that hybrid warfare blends conventional weapons, economic coercion, information operations, and cyber attacks. That sounds a lot like what is currently happening, with different tools being used in a variety of ways to spread false information. Focusing on physical damage and overt military exercises means attacks get missed. There is no sophisticated malware or breach to detect.
“[Cyberattacks] can be used to undermine more than banks, databases, and electrical grids—they can be used to fray the civic threads that hold together democracy itself,” New York Times reporter David Sanger wrote in The Perfect Weapon.
Even the attacks against the electric grid in the Ukraine, which led to hours-long blackouts, appear to be part of a larger Russian operation to make the Ukrainian government look weak and ineffective. The operation, widely believed to be the digital aspect of Russia’s (unofficial) war with Ukraine, includes flooding Ukrainian media outlets with Russian disinformation.
Cyber weapons seem to be oversold, more useful for signaling or sowing confusion than for physical destruction.
There are several steps to make the U.S. tougher and more resilient against disinformation—encouraging campaigns and parties to improve basic cyber hygiene such as encryption and two-factor authentication, and working with companies to shut down social media bots—but they depend on organizations changing their views on what an attack would look like. Just as in basic information security, organizations need to make the attacker’s work more costly than the benefits the attacker would get.
“Above all, the US must demonstrate that cyber attacks and manipulation of social media will incur costs and thus not remain the perfect weapon for warfare below the level of armed conflict,” Nye said. | <urn:uuid:606f1739-4fcf-4e21-adb3-3ff9b588fee2> | CC-MAIN-2022-40 | https://duo.com/decipher/disinformation-form-cyberattack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00722.warc.gz | en | 0.939258 | 904 | 2.5625 | 3 |
In the IT world, when we talk about reliable communication between systems, TCP is the underlying protocol responsible for completing such connections successfully. In this article, we will discuss the basics of TCP and the TCP flags used in the TCP header.
Topic of Content:
- Transmission Control Protocol
- Three-Way handshake
- List of TCP flags
Transmission Control Protocol (TCP)
Transmission Control Protocol is a transport layer protocol. It continuously receives data from the application layer and divides the data into chunks, where each chunk is a collection of bytes. A TCP segment is the combination of a TCP header and a data chunk. TCP segments are encapsulated in IP datagrams.
TCP is mainly used by applications that require guaranteed delivery of packets. It is a sliding window protocol, so it provides handling for both timeouts and retransmissions.
Related – TCP VS UDP
Establishing a TCP connection requires that both the client and server participate in a three-way handshake.
- Sender sends a SYN packet to the receiver.
- The receiver responds with a SYN/ACK packet to the sender, acknowledging the receipt of the connection request.
- The sender receives the SYN/ACK packet and responds with an ACK packet to the receiver (a packet-level sketch of these three steps follows below).
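The sketch below shows the same three steps at packet level using the Scapy library (it needs raw-socket privileges, and the destination address and ports are placeholders). In normal applications the operating system's TCP stack performs the handshake for you:

    from scapy.all import IP, TCP, send, sr1

    target = IP(dst="192.0.2.10")                    # placeholder address

    # Step 1: client sends SYN
    syn = target / TCP(sport=40000, dport=80, flags="S", seq=1000)
    syn_ack = sr1(syn, timeout=2)

    # Step 2: server answers with SYN/ACK (flags "SA") if the port is open
    if syn_ack is not None and syn_ack[TCP].flags == "SA":
        # Step 3: client completes the handshake with an ACK
        ack = target / TCP(sport=40000, dport=80, flags="A",
                           seq=syn_ack[TCP].ack, ack=syn_ack[TCP].seq + 1)
        send(ack)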
List of TCP flags
TCP flags are used with TCP packet transfers to indicate a particular connection state, which is useful for troubleshooting. Information from flags also controls how a particular connection is handled. A few TCP flags are commonly used:
- URG (1 bit – URGENT): The urgent flag notifies the receiver that urgent data is present and should be processed before all other data; the accompanying urgent pointer field indicates where the urgent data ends.
- ACK (1 bit – ACKNOWLEDGE): The acknowledgment flag is used to acknowledge the successful receipt of a packet. The receiver sends an ACK as well as a SYN in the second step of the three way handshake process to tell the sender for receiving its initial packet.
- PSH (1 bit – PUSH): Push flag is somewhat similar to the URG flag and tells the receiver to process these packets as they are received instead of buffering them.
- RST (1 bit – RESET): Reset the connection. It indicates the receiver to terminate the connection immediately when unrecoverable errors occur. It causes both the sides to release the connection and all its resources abnormally. The transfer of data ceases in both the directions. It may result in the loss of data that is in transit.
- SYN (1 bit – SYNCHRONIZE): It is used in the first step of the connection establishment phase, the 3-way handshake between the two hosts. This flag is set to 1 while the connection is being established between sender and receiver. It is used for synchronizing sequence numbers, i.e. to tell the other end which sequence number they should accept.
- FIN (1 bit – FINISH): It is used to request for connection termination i.e. termination of connection between sender and receiver when there is no more data. This is the last packet sent by sender. It frees the utilised resources and gracefully terminates the connection.
Related – TCP FIN VS RST Packets
We discussed a total of six flags. Each flag is a single bit, always set to 0 or 1. They are used in data transmission between sender and receiver to signal, among other things, when a transmission starts and stops.
It sounds like a simple concept – so simple that many government organizations believe they’ve implemented strong security systems.
But the fact is that hackers are defeating them regularly and with alarmingly increased frequency.
Information Security Officers, or ISO’s, face several kinds of challenges in delivering protective solutions for government data and systems. In the past year, security has evolved to a new level of insight; the drivers for IT security are numerous and diverse:
• The classic pace of technology development;
• Budget reductions due to economic climate;
• Increased competitive espionage;
• A growing cadre of Internet-trained hackers and script kiddies;
• An explosion of high-speed Internet connectivity;
• The software industry doctrine of full disclosure of vulnerabilities;
• A continued shortage of skilled security practitioners;
• More complex business cases implemented in Web space;
• The establishment of privacy legislation, and
• An increased emphasis on corporate governance.
The list goes on, but these are the key influences driving secure systems and data.
At the turn of the millennium, the challenge was to implement firewalls to protect internal networks. The battle to incorporate firewall technologies is now being fought in the domain of home and SOHO computers. The high-speed connections these systems use are appealing to hackers, who target them with Trojans for use as mirrors or as launching pads for DDoS (Distributed Denial of Service) attacks. These compromised systems represent a significant risk to government.
As IT people discovered, firewalls did not cure all, nor did Intrusion Detection Systems (IDS). More was needed – which has turned out to be layered security.
Layered security is not a new development. At the Las Vegas BlackHat conference in 1999, Bill Cheswick, AT&T Labs Chief Scientist, likened a well-developed, layered IT security framework to real world examples, including military perimeters and the biological defences of living organisms. Layered security was a reality within well-funded critical infrastructure organizations like phone companies. Now, the IT industry is generally embracing layered security, but issues remain.
To effectively implement layered defences requires a strict definition of the environmental threat faced by the ISO. That environment is different for virtually every ISO. ISO’s in the public sector in different portfolios – health, environment, defence and others – each face divergent threats, risks, cost of risk and sensitivities with respect to their core information processes and data availability, integrity and confidentiality. However, we can begin to describe the workings of layered security, with the proviso that the relative weights of each component must be tempered with the individual risk scenario and available funding to support it. The mission is to provide a skeletal framework for developing a winning layered-security strategy. For a perspective, it is useful to review the history of IT security and develop the definition of the threat to computer security.
The threat-defining risk
The original computer operations centre of the 70s (glass house) housed mainframe computers. Security was tight, with a well-defined physical security perimeter, limiting access to the inner workings.
Computers are no longer sequestered within the walls of data centres. The walls are gone and the only real boundary is bandwidth. The Internet Protocol (IP) was originally designed for the purposes of an intimate, trusted military/university network which was subsequently hijacked by convenience and accessibility. The explosive growth of the Internet is based on a protocol that has no inherent security features and can only be classed as “not ready for prime time.” The counterculture exists today, as it always has, and its members exploit the playground with anonymity, with little fear of reprisal or consequence.
Accountability simply does not exist. Instead, anonymity prevails. According to a survey conducted by McAfee, during the single week of March 3, 2003 there were 78 newly discovered viruses, for a cumulative grand total of 66,669 viruses and worms roaming “in the wild” on the Internet. If a virus had to be stamped with the name, date of birth and SIN of the author, the introduction of new viruses would halt immediately.
The weakest link
Like most physical systems, computer security suffers from the "weakest link" problem. The integrity of an information system is only as robust as the strength of the weakest component of that system. Conceptually, this maps to the "weakest link in a chain" analogy.
The complexity of information architectures is unrivaled in today’s world. In the world of bits and bytes, an application will only work if every bit is set just right in the context of a billion or more bytes. Still, that is only in a perfect world. In the real world, the onus of secure and private data processing places additional demands on the system. Not only does it have to work, but it must be secure.
Layered security: People, process and technology
Layer 1 – the technology
Bill Cheswick’s layered security model provides a starting point in developing a security strategy. It is a flat topology that resembles a bullseye target, where the data resides at the centre, surrounded by rings of protective perimeters. The external perimeter is usually the Internet-facing firewall. Moving towards the centre, Intrusion Detection Systems monitor network traffic and the integrity of key systems files on servers. Additional Access Control Lists on internal routers and possibly more internal firewalls protect the central data and processes from the mean streets of the Internet.
A new addition to the technology layer is the honeypot, an exposed system that sits outside the firewall perimeter and serves as flypaper for attempted penetration by hackers. Honeypots resemble real operational networks, but contain contrived data and are not operationally significant. The theory is that attackers are coerced into revealing their exploit tools prior to the invasion of the real internal network, thereby providing advance warning of possible attack signatures which, if they appear on the internal network, signal an attack. The goal is not to stop intruders, but to stall them while preparing a defence. Honeypots can deliver real value in providing advance warning of network attacks, but only if skilled security practitioners are available at a moment’s notice, to analyse the logs and properly interpret the internal network events that need to be detected.
Weakness in the layered security model occurs when an external business partner, connected to the government infrastructure, is not up to standard with respect to security. Examples of these types of partners may be payment clearing houses, agents, retail kiosks, fulfillment houses, marketing partners or even district offices. If these partners require connectivity to a core database to perform transactions and are insecure, the entire layered security investment could be neutralized by a breach at that partner. This is the “weakest link” syndrome in action. ISO’s must be able to set security standards and implement reviews, audits and certification processes to ensure that security budgets do not go for naught. This assurance function is a major due diligence and standard activity that is becoming commonplace as a best practice.
Finally, the current economic climate has drained the water from the swamp, exposing pockets of security tools that were implemented as point solutions. Today, the ISO must evaluate the effectiveness of these solutions within the layered environment. Licence and maintenance cost consolidation is a priority, as resulting savings may liberate budgets for additional security measures. From a technical standpoint, organizations are seeking enterprise-level consoles that provide integration of reporting from point solutions into a consolidated, holistic view of security operations. This will be a hotbed of focus this year as ISO’s seek maximum benefit in an environment of reduced budgets and restrained spending.
Really layered security
Thus far, a flat, two-dimensional layered security approach has been described, much like tree rings, involving technology alone. The technical controls are essential, but are certainly not a panacea. They must be complemented with policy, process and people. Truly layered security results when we add these new layers.
Policy and process
The second layer is the collection of policies and processes that are crucial in managing the technical control layer. Change management is a good example of a quality process that prevents changed technical settings from creating an unsafe condition or inadvertently breaking another unrelated process. Security technologies are relatively new and it has taken time for their management to evolve and become proven. ISO17799 is an example of the evolution of old standards to meet new challenges. The results have been informed processes for policy implementation, quality audits, business continuity plans and incident response plans. Government and the private sector are both witnessing the maturity and acceptance of all these components of this vital organizational layer.
The third layer involves human factors. People (users) invoke computation to perform business functions and people (technocrats) design the network and its security. There has been a significant increase in dedication to security awareness in the user workforce. Heralded by demand for security awareness training and witnessed by the diversity of offerings it provides, user training is gaining much needed momentum. End user security awareness training is officially on the radar screen. The driving forces are lessons learned from the unfortunate experiences of others, pending privacy legislation, a clear trend to responsibility and governance, and the threat of jail time for the negligent.
Experience and availability of security training has also assisted the ranks of network architects, administrators and security professionals in performing their tasks. These people are resources that must meet world class standards for operations knowledge. The observed trend is towards investment in education and training. The investment in education is a positive trend – employees are the face of government and help define the culture. Smart organizations that invest in their people will enjoy a clear competitive advantage, profitability and growth.
The fourth layer is strong physical security to wrap the IT infrastructure, data, processes and people. 9/11 was a wake-up call for reassessment of physical security measures. It was an unwelcome call, but the results are just short of dramatic. The external interface of many organizations now resembles barbed wire where the red carpet was prominent. Improvement is still necessary in the internal interactions where the general posture remains too casual. Hard drives go missing, compromising personal data and privacy rights, but this is an education issue and demonstrated improvement far outweighs remaining issues. Small and medium-sized businesses need help here; large corporations and government have historically solved this issue.
The fifth layer is the creation of security knowledge and wisdom. The classic hierarchy of information technology includes data, information, knowledge and wisdom. Data has no structure or context. Information is data in context, rendering it useful. Knowledge is a collection of organized information, allowing appropriate response. Wisdom is the peak, tempering knowledge with experience. Wisdom facilitates proactive capabilities.
Wisdom is the domain of the sage tribal leader or battle commander who knows what will happen. IT security is touching wisdom, but it is elusive because it is a moving target. The laws of physics do not change, so physicists can aspire to build knowledge into wisdom. IT, though, is a moving train, with new technologies, software, and applications being inexorably introduced by business and the profit motive. New vulnerabilities, exploits and attack methodologies are therefore co-introduced. IT security is currently in a state of consolidation and stocktaking, at a milestone on the journey to maturation.
In the last year, security training, education, standards and venues for the peer exchange of security knowledge have been confirmed. Security practitioners are partaking in the exchange of knowledge and experience. Vexing problems continue: the basic insecure protocol, the anonymity afforded to the perpetrator, bug-laden software, the patching problem, open disclosure and greed. Nevertheless, we are better prepared to meet the challenges today. Organizations that understand layered security and recognize the importance of addressing security in all its facets will be best positioned to reduce risk in the face of prevailing threats. ISO’s who are capable of assessing the threat environment and deploying their resources to counter threats will be most successful in securing the integrity of their data and systems.
Tom Slodichak is Chief Security Officer of the information technology security provider WhiteHat Inc.
Of honeypots and hackers
The newest concept in the scheme of layered security is the honeypot. Honeypots are hacker “flypaper,” universally reviled by the hacker community. Honeypots imitate real systems, easily allowing an intruder to break in – but with no risk, as the honeypot is outside the real network and contains no real data.
An Internet-attached server acts as a bait or decoy, luring potential hackers in order to study their activities and monitor how they attempt to break into a system. The goal is to have attackers reveal their tools prior to the invasion of the actual IT infrastructure and to learn where the system may have weaknesses that need attention.
If a honeypot is successful, the intruder will have no idea that s/he is being tricked and monitored. The information collected can be used both to validate internal attack signatures and provide security personnel time to devise strategies to repel the attack on the real infrastructure.
The honeypot’s principal advantage is that it has the potential to detect new and unknown exploits for which an attack signature has not yet been published. This helps network security administrators keep their systems secure. | <urn:uuid:884aa425-c93b-43d9-bb35-73ec7ea4f0cd> | CC-MAIN-2022-40 | https://www.itworldcanada.com/article/beyond-firewallshow-to-be-careful-out-there/20293 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00722.warc.gz | en | 0.940128 | 2,724 | 2.5625 | 3 |
What is machine learning?
Machine learning (also known as ML) is the study of computer algorithms that improve automatically through experience. Machine learning is generally regarded as a subset of artificial intelligence (AI), and the two work hand in hand. Machine learning algorithms build a mathematical model based on sample data, commonly known as "training data", which allows them to make predictions or decisions without being explicitly programmed to do so.
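As a toy illustration of "learning from training data", here is a minimal sketch in Python using scikit-learn; the study-hours scenario and the data points are invented purely for illustration.

# A minimal "learning from sample data" sketch using scikit-learn.
# The training data below is invented purely for illustration.
from sklearn.linear_model import LinearRegression

hours = [[1], [2], [3], [4], [5]]      # input: hours of study
scores = [52, 57, 63, 70, 74]          # output: exam scores

model = LinearRegression()
model.fit(hours, scores)               # the model "learns" from the sample data

print(model.predict([[6]]))            # predict a score for 6 hours of study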
Machine learning, whether you know it or not, is integrated into your day to day life. For example, when you are scrolling through Instagram, machine learning is working hard to personalise your feed to your interests and personal needs. Similarly, once you view an item in your browser, similar items will be advertised to you through multiple channels; this refines your shopping experience and recommends products that may be of interest to you.
A more in-depth example of machine learning in use comes from Netflix. Netflix held its first "Netflix Prize" back in 2006; the competition was to find a program that could better predict user preferences and improve the accuracy of its existing movie recommendation algorithm and service by at least 10%. This was used to enhance the user experience and ultimately keep users on the site/application for longer by making their movie recommendations accurate and specific enough.
Machine learning is able to identify trends and patterns by reviewing large volumes of data, discovering relationships that would not be apparent to humans. The technology is continuously improving and advancing, which allows for ever-better accuracy and efficiency.
However, machine learning requires a lot of time and effort: the algorithms need enough data and training to fulfil their purpose with a high level of accuracy, and setting this up and developing it takes considerable work. It also requires a large number of resources, which can be costly as well as time-consuming.
Clusters are interconnected servers or computers that look like just one server or computer to applications and end users. Using Oracle Real Application Clusters (RAC), you can cluster your databases. These may look and work like a single database, but some things are different. For one, single instance databases have a 1:1 relationship between the instance and the database itself. In Oracle Real Application Clusters, as many as 100 instances can access a single database.
Here is what you should know about Oracle Real Application Clusters’ architecture.
Oracle Real Application Clusters works on a shared-everything principle, which means that all the control files, data files, SPFILEs and redo logs are located in cluster-aware shared disks that are accessible to all cluster database instances. This means that database files for Oracle Real Application Clusters need to be stored in a cluster-aware storage. You can choose how you are going to configure your disk, but the storage should be cluster-aware. This means that you are limited to using:
- Automatic Storage Management
- Oracle Cluster File System
- A network file system
- Raw devices
What’s more, all nodes in the Oracle Real Application Clusters environment should be connected to a LAN so that applications and users can access that database using the Oracle Database services feature.
The Oracle Database services feature allows you to institute rules and control over how users connect to the database.
You can use a client/server configuration to allow users to access your Oracle Real Application Clusters database. These are reserved for power users who create their own searches, such as the database administrators, data miners, application users and other power users. Public networks, however, use TCP/IP or any supported software and hardware. You can access Oracle Real Application Clusters instances via the database’s default IP address or any VIP addresses.
You should also assign virtual host names and IP addresses to all your nodes. Here, virtual host names and IP addresses would be used to connect to your database instance.
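As a hedged illustration, applications usually reach a RAC database through a service name rather than a specific instance, typically via the cluster's virtual or SCAN addresses. The sketch below assumes the python-oracledb driver; the host, port, service name and credentials are placeholders.

# Hypothetical example: connecting to a RAC database through a service name.
# Host, port, service name and credentials are placeholders only.
import oracledb

connection = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="rac-scan.example.com:1521/sales_svc",   # cluster address + service name
)

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())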
The Oracle Real Application Clusters Components
The Oracle Real Application Clusters database has two or more instances that have background processes and memory structures. It will have the same memory structures and processes as a single instance database, plus those that are unique to Oracle Real Application Clusters.
If you are struggling with Oracle Real Application Clusters, or if you simply want to learn more about it, call Four Cornerstone today. We have a team of certified Oracle experts who can teach you more about Oracle Real Application Clusters, its architecture, how it is designed, and, more importantly, how to deploy it. We can help you with everything that is related to Oracle and MySQL. Four Cornerstone offers database performance tuning, data warehousing, disaster recovery, remote Oracle DBA support, database development, database environment assessment and extensive MySQL support. We can also assist you with your software licensing needs.
Talk to or contact us at https://fourcornerstone.com/contact.
Article Source: Oracle
Photo courtesy of Oracle. | <urn:uuid:d1109cf3-71c9-4970-a133-b5b9b86cd27d> | CC-MAIN-2022-40 | https://fourcornerstone.com/oracle-real-application-clusters-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00722.warc.gz | en | 0.893643 | 627 | 2.828125 | 3 |
There has been much discussion and debate in the scientific community regarding the efficacy and suitability of machine learning techniques to help improve our understanding of local and global environments.
Copyright: earth.org – “Can Machine Learning Help Tackle Climate Change?”
Machine learning allows for predictive and probability-based calculations to be undertaken – which are useful tools for evaluating the benefits and costs of our actions in the present. It is useful for those active in climate science to understand the strengths and limitations of current machine learning techniques, as this results in better understanding and criticism of any published findings and conclusions.
What is Machine Learning?
Machine learning falls under the broader term Artificial Intelligence (AI), which is defined in a 2004 paper as “the science and engineering of making intelligent machines, in particular intelligent computer programs”. The true nature of ‘intelligence’ is hotly debated, but for this purpose, intelligence is artificial, in the sense that computer models are used to draw conclusions from complex datasets. Models are usually designed for research that would be impractical or excessively laborious to carry out with conventional analysis.
The diagram below illustrates how popular machine learning terms are related:
It is also important to understand the following five terms:
- An algorithm is a set of instructions (in this context, supplied to a computer) that transforms input information into output information. For example, calculating the carbon footprint of an organisation by assessing variables such as fuel or energy consumption, manufacturing processes, and any offset efforts.
- A model is the algorithmic representation of a system (such as climate or an economy). Usually, a model comprises multiple algorithms that solve a complex problem.
- Structured Data is data that is labelled, where its nature has already been determined, for example, temperature values. Classical machine learning mainly uses structured data.
- Unstructured Data is data presented in raw forms, such as images. Deep learning models can operate on both structured and unstructured data to create natural language processing and visual recognition systems. However, these require higher levels of computing power than classical machine learning methods.
- Neural networks are one of the most important computational techniques for machine learning. A neural network is a software model consisting of several connected nodes. Both the nodes and the connections are important. Below is a simple diagram of how neural networks can be structured.
Each network has inputs from either data or previous nodes, one or more hidden layers (algorithms that can modify the input), and an output. If a node’s algorithm produces a result that exceeds a set threshold value, then the output is activated. Each connection can also be assigned a weight to indicate how useful it is in predicting an overall result. Connections that are more useful in predicting a result receive a higher weight. Less useful connections are assigned a lower weight or may even be dropped.[…]
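To make the description of nodes, connections and weights concrete, here is a minimal forward pass through a tiny network in Python with NumPy; the layer sizes, weights and activation choices are invented for illustration only.

# Minimal forward pass through a tiny neural network (illustrative values only).
import numpy as np

x = np.array([0.2, 0.9])                   # two input values

W1 = np.array([[0.5, -0.3],                # weights: inputs -> three hidden nodes
               [0.8,  0.2],
               [-0.6, 0.7]])
b1 = np.array([0.1, 0.0, -0.2])            # biases for the hidden nodes

hidden = np.maximum(0, W1 @ x + b1)        # a node "activates" only above a threshold of 0

W2 = np.array([0.4, -0.9, 0.3])            # weights: hidden nodes -> single output
output = 1 / (1 + np.exp(-(W2 @ hidden)))  # squash the result into the range 0..1

print(output)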
Read more: www.earth.org | <urn:uuid:aee88626-3e06-44f5-9ecf-a6d010ff0ecf> | CC-MAIN-2022-40 | https://swisscognitive.ch/2022/08/31/can-machine-learning-help-tackle-climate-change/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00722.warc.gz | en | 0.92484 | 588 | 3.78125 | 4 |
When setting up a time zone in Windows operating systems (Windows Server, Windows 10, etc.), you can run the tzutil.exe utility (the Windows Time Zone Utility) at the command line. This tool dates back to the Windows 7 release, and its utility executable file can be found in the %WINDIR%\System32 directory.
One of the most important key parameters of time, in addition to the current time and date, in all computer information systems is the Time Zone. The time zone on the computer is set according to the location. This will correctly display the time in your operating system.
In this guide, we’ll show you how to make the most of the TZUtil utility features to change the time zone in Microsoft Windows 10.
Step 1. Permission to Change the Time Zone
By default, changing the time zone does not require administrative privileges (unlike changing the time and date). If you want to restrict who may change it, adjust the local security policy (Local Security Settings – secpol.msc):
Security Settings ⇨ Local Policy ⇨ User Rights Assignment
In the policy list, locate the Change the time zone entry.
Step 2. Open Command Prompt as Administrator
Open the Command Prompt as an administrator: right-click on the Start menu and select Command Prompt (Admin).
Step 3. Check Out the Current Time Zone Using TZUtil
To see which time zone is currently set up, type tzutil /g at the command line and press Enter.
Step 4. List All Time Zones with Their Names and Identifiers
Type tzutil /l and press Enter.
Step 5. Daylight Saving Time Adjustment for a Specific Time Zone
To set the time zone with daylight time, you need to write tzutil /s “Time Zone” at the command line and press Enter. Instead of “Time Zone”, type the respective time zone.
For instance, let’s take the time zone UTC+02:00 (Vilnius, Kiev, Riga, Sofia, Tallinn, Helsinki FLE Standard Time). Run the command tzutil /s “FLE Standard Time”.
Step 6. Disable Daylight Savings Time for a Specific Time Zone
To set the time zone without daylight saving time, type tzutil /s “Time Zone_dstoff” at the command line and press Enter, appending the _dstoff suffix directly to the time zone name. For example, for UTC+02:00 (Vilnius, Kiev, Riga, Sofia, Tallinn, Helsinki FLE Standard Time), run the command tzutil /s “FLE Standard Time_dstoff”.
Once you’ve set up the time zone, close the command line.
How to Change the Time Zone on Windows 10 with Action1 in 5 Steps
Action1’s intuitive dashboard helps optimize routine tasks, significantly scaling up IT productivity.
Step 1: After logging into the Action1 dashboard, in the Navigation panel (the left column), select Managed Endpoints and mark the endpoint on which you plan to change the time zone.
Step 2: Then click on the More Actions menu and select Run Command.
Step 3: In the box, enter the command tzutil /s “Time Zone” to set a new time zone. Replace the Time Zone parameter with a particular time zone, e.g. “FLE Standard Time”.
Step 4: In the Select Managed Endpoints window, you mark those endpoints on which you are going to change the time zone. You can add all the available endpoints or mark them one by one.
Step 5: Schedule the action (Run now/ No schedule yet/ At specific time/ Repeat) and Finish.
Action1’s Remote Management Solutions
Staying competitive in the market is always a challenge, and loud words don’t do wonders for optimizing administrative tasks and scaling up IT productivity. But actions do! With Action1’s cloud RMM solution, your IT department will timely deliver patches and updates, manage IT assets, manage endpoints remotely, and run many other complex tasks. | <urn:uuid:c2d6a88d-43e2-43e1-90dd-d39d6affd602> | CC-MAIN-2022-40 | https://www.action1.com/how-to-change-the-time-zone-in-cmd-windows-10-via-tzutil/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00722.warc.gz | en | 0.797434 | 874 | 2.640625 | 3 |
Fujitsu has announced a new technology which it claims allows the waste heat of a computer's CPU to actually cool the surrounding server room.
The seemingly impossible achievement, unveiled at the International Conference on Power and Energy Engineering in Shanghai, could spell massive savings for data centre operators: according to Fujitsu's calculations, the system reduces the total air-conditioning power requirements for an average data centre by around 20 per cent.
With electricity costs rising, HVAC - Heating, Ventilation And Cooling - costs are one of the biggest barriers to running a data centre. It's the high cost of cooling that leads to places like Dublin - not known for its heatwaves - becoming hotbeds for data centre activities.
Fujitsu's cooling system could help drop costs, or make warmer climes more suitable for data centre use - and it works in a clever manner. By grabbing waste heat from the CPUs in a server rack, it can spit out chilled water at between 15°C and 18°C.
While that might seem unlikely, Fujitsu's system uses a relatively common cooling technology: an adsorption heat pump. By drawing warm waste water into a special adsorption material and then vaporising it, an adsorption heat pump is able to bleed off heat quickly and return the water far cooler than it originally arrived.
The principle isn't new, but a stumbling block has always stood in the way of its use in data centre cooling: typical adsorption materials require the water to be significantly hotter than that found inside an average watercooling loop.
That's where Fujitsu's breakthrough comes in: a new adsorption material developed by Fujitsu Laboratories offers the same performance at temperatures of around 40°C to 55°C - around that of a watercooling loop. Using this new material along with some old-fashioned engineering, Fujitsu is able to harness the waste heat for cooling purposes.
It's not quite as simple as it sounds: for the adsorption material to work, the temperature of the water can't dip below 40°C. While the server is under load, this shouldn't be a problem - but when the load drops, so does the heat output.
To solve this problem, Fujitsu has developed a flow control system which analyses CPU load and controls just how much water flows into the adsorption heat pump in order to keep the temperature high enough to work continuously.
With Fujitsu increasing its lead in supercomputing circles - thanks largely to the K Computer, officially the world's first ten-petaflop machine - these technologies will help it gain an edge over its rivals, but there is a slight catch: it doesn't expect to have a commercial implementation ready until at least 2014. | <urn:uuid:864fc650-90c9-4b7b-a6a6-8b547940ddff> | CC-MAIN-2022-40 | https://www.itproportal.com/2011/11/07/fujitsu-announces-cpu-adsorption-cooling-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00722.warc.gz | en | 0.946734 | 565 | 3.078125 | 3 |
When a company like Microsoft needs to fix a security flaw in one of its products, the process is normally straightforward: determine where the bug lies, change the program's source code to fix the bug, and then recompile the program. But it looks like the company had to step outside this typical process for one of the flaws it patched this Tuesday. Instead of fixing the source code, it appears that the company's developers made a series of careful changes directly to the buggy program's executable file.

Bug CVE-2017-11882 is a buffer overflow in the ancient Equation Editor that comes with Office. The Equation Editor allocates a fixed-size piece of memory to hold a font name and then copies the font name from the equation file into this piece of memory. It doesn't, however, check to ensure that the font name will fit into this piece of memory. When provided with a font name that's too long, the Equation Editor overflows the buffer, corrupting its own memory, and an attacker can use this to execute arbitrary malicious code.
Normally the work to fix this would be to determine the length of the font name and create a buffer that's big enough to hold it. It's a simple enough change to make in source code. If that's not possible—there are occasional situations where a buffer can't easily be made bigger—then the next best solution is to limit the amount of data copied to it, truncating the font name if it's too long to fit. Again, this is a simple change to make in the source code.
But that doesn't appear to be what Microsoft did here.
Analysis of Microsoft's patch strongly indicates that the company didn't make changes to the source code at all. Instead, it appears that the flaw has been fixed by very carefully modifying the Equation Editor executable itself. Normally when a program is modified and recompiled, there are ripple effects from this compilation. Low-level aspects of the compiled code will change slightly; the recompiled code will use registers slightly differently, functions will be placed at different locations in memory, and so on. But none of that is in evidence here; side-by-side comparison of the fixed program and the original version shows that it's almost entirely unaltered except for a few bytes in a few functions. The only way that's likely to happen is if the bug-fixing was performed directly on the program binary itself without reference to the source code.
This is a difficult task to pull off. The fixed version includes an extra test to make sure the font name is not too long, truncating it if it is. Doing this extra test means adding extra instructions to the buggy function, but Microsoft needed to make the fix without making the function any longer to ensure that other, adjacent functions were not disturbed. To make space for the new length checking, the part of the program that copied the font name was ever so slightly deoptimized, replacing a faster routine with a slightly slower one, and freeing up a few bytes in the process.
The inspection even suggests that this isn't the first time that Microsoft has had to make such fixes; a few instructions were found to be strangely duplicated in the original, broken version of the program. This kind of thing would happen if a previous modification made the program's code slightly shorter.
A look at the Equation Editor's embedded version information also gives clues as to why Microsoft had to take this approach in the first place. It's a third-party tool, developed between 1990 and 2000 by a company named Design Science. That company still exists and is still producing equation editing software, but if we were to guess, Microsoft either doesn't have the source code at all or does not have permission to make fixes to it.
Word nowadays has its own built-in equation editing, but Equation Editor is still supported for backwards compatibility to ensure that old documents with embedded equations continue to be usable. Still, we're a little surprised that Microsoft fixed it rather than removing it entirely. It's truly a relic from another era, coming long before Microsoft's considerable investment in safe coding practices and exploit mitigation techniques. Equation Editor lacks all of the protections found in Microsoft's recent code, making its flaws much easier to exploit than those of, say, Word or Windows. This makes it something of a security liability, and we'd be amazed if this font bug is the last one to be found. | <urn:uuid:ec421899-fb0e-4033-9ccc-1f86d114f170> | CC-MAIN-2022-40 | https://arstechnica.com/gadgets/2017/11/microsoft-patches-equation-editor-flaw-without-fixing-the-source-code/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00722.warc.gz | en | 0.955692 | 902 | 2.53125 | 3 |
Whether your business is already running on a computer network or you’re thinking about taking that step, understanding how a network works is more important than you might think. Even if you hire someone set up a network for you, understanding the basics will help when it comes time to buy a new piece of networking gear. You also need to understand what the person who sets up the network – whether it’s an employee or local computer guy — is talking about so you know you’re getting the straight scoop.
We’ll try to demystify things a bit for you by outlining the basic components of a typical network and explaining how they interact with each other. Don’t think of it as networking for dummies, but rather as networking for those that just want the basics with a minimal amount of technical mumbo-jumbo.
A typical network starts with one or more computers linked together by special cabling through devices called switches. Switches connect to another kind of device called a router, and routers in turn connect to yet another device known as a broadband gateway. Since the broadband gateway is the lynch pin of many small business networks, we’ll start with it and work our way forward (or backward, depending on your point of view) from there.
These days most homes and small businesses get Internet access from either the local cable company or the phone company. A company that provides Internet access is known as an ISP, or Internet Service Provider. In order for you to gain access to their network, an ISP provides a device generally known as a broadband gateway, (though it’s also commonly referred to as either a cable modem or DSL modem depending on the type of service you have). If you think of the Internet as an interstate highway, the broadband gateway is essentially the exit to your network. Most broadband gateways are ostensibly designed to connect to a single computer; if you’ve got a multiple computers that need access, that’s where a router comes in.
The basic function of a router is to connect one network to another. Routers receive information, or traffic, from other networks and either deliver it to the correct computer or forward it on to another router for delivery. Organizations with very large networks often use multiple routers to link the networks of different departments (say, Sales, Marketing, and Human Resources), but in the context of a typical small business network, the router’s job is to act as an intermediary between the company network, and that of the ISP.
Routers designed for small business networks usually have a security device known as a firewall built into them. Firewalls work by monitoring the traffic coming into the network and scanning it for potential threats. If any questionable traffic is detected, the firewall blocks it, preventing it from entering the company network.
Switches and Cables
The way computers link to a router and to each other to form a network is through devices called switches. (We’ll set wireless networking aside for the time being, because we’ll cover that in an upcoming article.) The actual physical connections are made with cabling, commonly known as Category 5 or Cat 5, which resembles conventional phone cord. (The two are actually quite similar, though network cable is thicker and uses a larger connector.)
Computers connect to switches in a hub-and-spoke arrangement; think of it as a bicycle wheel, with the switch at the center and spokes (the cables) radiating from the center to the wheel’s rim to each individual computer. Routers designed for home and small business networks typically have the switching technology built into it, so one device performs both functions. These combination devices let you connect at least four computers, and sometimes eight.
The overwhelming majority of network switches and cables are based on a technology called Ethernet, and there are two major types of Ethernet used today. The most common type is called Fast Ethernet, which can transmit a maximum of 100 megabits of data per second. By contrast, Gigabit Ethernet offers 10 times the performance, or 1,000 megabits per second. (Because of network overhead and other factors, the actual performance you get from either version of Ethernet is usually less than half the quoted figure.)
A Digital Post Office
Now that we’ve covered the physical components of network, lets take a look at how a networks enables computers to send and receive information. This is where several acronyms that we love to hate but need to know come into play.
Just like you need a postal address in order to send mail from one location to another, computers also use addresses in order to locate each other and exchange information on a network. Computers identify and communicate with one another using IP (Internet Protocol) addresses, and every computer must have one to be part of a network.
IP addresses are numeric in nature and contain four numbers separated by periods (e.g. 18.104.22.168). Each individual number in an IP address can range from 0 to 255 (a few values are reserved for special purposes), and while computers on the same network will have similar addresses, each address must be unique. Part of a computer’s IP address refers to itself (like a house number) while the rest refers to the network a computer is on (like a street name).
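Python's standard ipaddress module makes the "street name versus house number" split easy to see; the address below is just an example.

# Splitting an example address into its network and host parts.
import ipaddress

iface = ipaddress.ip_interface("192.168.1.42/24")

print(iface.network)                 # 192.168.1.0/24 -> the "street name"
print(iface.ip)                      # 192.168.1.42   -> this computer's "house number"
print(iface.network.num_addresses)   # 256 addresses available on this network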
Although you can assign specific IP addresses to individual computers (known as static, or unchanging, addresses), most networks use a technology called DHCP (Dynamic Host Configuration Protocol) to assign addresses automatically, greatly simplifying network management. With DHCP, a special server (a server is simply a computer or a piece of software running on a computer that performs a particular service or function) sets aside a group of IP addresses and doles them out as needed. When a computer wants to join the network, it requests an address, and the DHCP server (which is usually built into the router) issues an address to the computer.
When you want to connect a network to the Internet, you don’t just get to randomly pick the IP addresses you want to use, but must instead use ones issued by your ISP. The problem is that most ISPs only provide customers with a single IP address with which to access their network. Normally that would mean you couldn’t connect more than one computer to the Internet, but you can get around this limitation with a technology called NAT, or Network Address Translation, that allows multiple computers to access the Internet using a single IP address.
NAT, which like DHCP is built into a router, essentially creates two networks, one public and one private. The public network is the one with access to the ISP’s network and the Internet beyond, while the private network contains the company’s computers and other devices.
When Internet engineers first created NAT, they reserved a range of special IP addresses that people could use in the private network. Because the private network is set up using these IP addresses, computers on this network can’t communicate directly with the Internet.
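Those reserved private ranges come from RFC 1918, and they can be checked programmatically; a short sketch:

# The RFC 1918 private ranges that NAT-ed internal networks draw from.
import ipaddress

private_ranges = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

addr = ipaddress.ip_address("192.168.1.42")
print(addr.is_private)                              # True
print(any(addr in net for net in private_ranges))   # True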
With NAT, a router not only acts as an intermediary, but also as a translator between the public network and the private one. It receives information requests from the computers on the private network and in turn forwards them to the Internet via the public network.
The router also tracks the activity of all the computers linked to it, so that when a computer connected to the router requests information (say, a Web page) it can deliver that Web page to the particular computer that requested it. It’s kind of like using a shopping delivery service; you tell the company what you want and where you live, and the item arrives without you having to go and get it yourself.
We hope this has helped you understand the basics of how a network works. Stay tuned for more articles explaining additional networking concepts.
Joe Moran spent six years as an editor and analyst with Ziff-Davis Publishing and several more as a freelance product reviewer. He’s also worked in technology public relations and as a corporate IT manager, and he’s currently principal of Neighborhood Techs, a technology service firm in Naples, Fla. He holds several industry certifications, including Microsoft Certified Systems Engineer (MCSE) and Cisco Certified Network Associate (CCNA).
This article was first published on SmallBusinessComputing.com. | <urn:uuid:8f248af6-cfb6-45ed-8d29-6ea63c44359d> | CC-MAIN-2022-40 | https://www.datamation.com/erp/demystifying-a-small-business-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00722.warc.gz | en | 0.94916 | 1,716 | 3.34375 | 3 |
Malware, or malicious software, is often used by cybercriminals to cause a significant amount of damage at the victim’s end. The term ‘cybercriminals’ includes attackers, hacktivists, groups of hackers and even nation-states. The damage caused can include disrupting normal operations of a computer or a computer network, stealing information stored in the systems, bypassing access controls, or causing harm to the victim in every possible way. The victims may be individuals, businesses, organizations, and even the government and its bodies. Malware includes viruses, trojans, ransomware, keyloggers, rootkits, etc. As reported by Barkly, more than 200,000 malware samples are being captured every day. Considering the seriousness of this situation and how adversely a malware attack can affect a business and its operations, appropriate security measures must be put in place by the concerned business. Having an incident response plan is one such measure, which helps a business minimize the damage when it is under attack. Moreover, it lays down a proper procedure so that recovery time, as well as costs, are reduced. During an incident response, malware analysis plays a vital role in helping the security team understand the extent of the incident and identify hosts or systems that have been affected or could be affected. With the help of information gathered during malware analysis, an organization can effectively mitigate the vulnerabilities and prevent any additional compromise.
Why is a Malware Analysis Performed?
A malware analysis can be performed by keeping a variety of goals in mind. It also depends upon the requirements of an organization and impact of the security incident. Some of the general goals include –
Questions Involved in a Malware Analysis
When a malware attack is being analysed, certain questions must be answered when the analysis is concluded. These questions can be –
Creating a Safe Environment for Malware Analysis
Right from the start, malware is created with a malicious intent to cause damage or loss to the victim. So it is definitely not logical for an analyst to perform malware analysis on a system that he or she uses for work or personal things. To solve this problem, a dedicated lab can be created with a number of computers having their own physically partitioned networks. These computers should have a standard operating system that can easily be restored from a system image after it has been infected by malware and an analysis has been carried out. Various tools such as Ghost, UDPcast, Truman, etc. can be used in performing malware analysis. Moreover, an analyst can also create a simulated lab environment using virtual machines. Various software packages are available on the Internet that can be used to create VMs (virtual machines). One of the most prominent is VMware, which has the ability to create a snapshot tree by capturing the system state at various points in time. With the help of these snapshots, the analyst can easily revert to a previous state of the system. Using a simulated lab environment has its own disadvantages such as –
Types of Malware Analysis
Malware analysis is classified into two types – static and dynamic. Static techniques involve analysis of code while dynamic techniques analyse the behaviour of a malware. The behavioural analysis includes questions such as –
Both these types accomplish the same goal of explaining the working of a malware, but differences arise when it comes to the time required to carry out an analysis, tools to be used, and skill set of the personnel deployed. It is always recommended to carry out both types of analysis to get a clear view of a malware’s working and its impact on the business processes. In addition, malware analysis can also incorporate reverse engineering techniques to analyse the source code of a malware. With the help of source code, the result of behavioural analysis can be verified as well as appropriate steps can be taken to better the defences of an organization.
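For example, a common first step in static analysis is simply fingerprinting the sample and pulling printable strings out of it; the sketch below assumes an isolated lab copy of the file, and the path is a placeholder.

# Minimal static triage: hash a suspect file and extract printable strings.
# "sample.bin" is a placeholder for an isolated lab copy of the sample.
import hashlib
import re

data = open("sample.bin", "rb").read()
print("SHA-256:", hashlib.sha256(data).hexdigest())   # fingerprint for lookups and sharing

# Runs of 6 or more printable ASCII characters often reveal URLs, registry keys, etc.
for s in re.findall(rb"[ -~]{6,}", data)[:20]:
    print(s.decode("ascii", errors="replace"))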
It can be safely stated that 2017 was the year of ransomware. Ransomware, along with other types of malware, is a prominent threat to any business. When a security incident occurs and malware is the reason behind it, malware analysis plays an integral role in incident response, as one needs to know what has happened in order to take the required steps for recovery. (This is our first post in the Malware Analysis Series. The upcoming posts will talk about various techniques used in static and dynamic analysis, along with the importance of malware analysis on endpoint devices.)
In this chapter, you learn about the following:
- How MPLS provides security (VPN separation, robustness against attacks, core hiding, and spoofing protection)
- How the different Inter-AS and Carrier's Carrier models work, and how secure they are compared to each other
- Which security mechanisms the MPLS architecture does not provide
- How MPLS VPNs compare in security to ATM or Frame Relay VPNs.
VPN users have certain expectations and requirements for their VPN service. In a nutshell, they want their service to be both private and secure. In other words, they want their VPN to be as secure as with dedicated circuits while gaining the scalability benefits of a shared infrastructure. Both concepts, of privacy and security, are not black and white, and need to be defined for a real world implementation.
This chapter defines typical VPN security requirements, based on the threat model developed in the previous chapter, and discusses in detail how MPLS can fulfill them. The typical VPN security requirements are
- VPN separation (addressing and traffic)
- Robustness against attacks
- Hiding of the core infrastructure
- Protection against VPN spoofing
We also explain which security features MPLS VPNs do not provide, and compare the security capabilities of MPLS VPNs with Layer 2–based VPN services such as ATM and Frame Relay.
The most important security requirement for VPN users is typically that their traffic be kept separate from other VPN traffic and core traffic. This refers to both its traffic not being seen in other VPNs, and also other VPNs traffic or core traffic not intruding into their VPN. Referring to the threat model from the previous chapter, this section analyses a threat against a VPN, specifically intrusions into and from other VPNs.
Another requirement is that each VPN be able to use the complete IP address space without affecting or being affected by other VPNs or the core.
The service provider has the requirement that the core remain separate from the VPNs in the sense that the address space in use does not conflict with any VPN and that VPN traffic remains separate on the core from the control plane traffic on the core.
In other words, a given VPN must be completely separate from other VPNs or the core in terms of traffic separation and address space separation. We will now analyze how the standard, RFC 2547bis, meets these requirements. In the first section, we see how it achieves address space separation, and in the following section how data and control traffic are kept architecturally separate—between VPNs, but also between a VPN and the core.
Address Space Separation
To be able to distinguish between addresses from different VPNs, RFC 2547bis does not use standard IPv4 (or IPv6) addressing on the control plane for VPNs on the core. Instead, the standard introduces the concept of the VPN -IPv4 or VPN -IPv6 address family. A VPN-IPv4 address consists of an 8-byte route distinguisher (RD) followed by a 4-byte IPv4 address, as shown in Figure 3-1. Similarly, a VPN-IPv6 address consists of an 8-byte route distinguisher (RD) followed by a 16-byte IPv6 address.
Figure 3-1 Structure of VPN-IPv4 Addresses
The purpose of the RD is to allow the entire IPv4 space to be used in different contexts (for VPNs, in our example). On a given router, a single RD can define a VPN routing/forwarding instance (VRF), in which the entire IPv4 address space may be used independently.
Due to the architecture of MPLS IP VPNs, only the PE routers have to know the VPN routes. Because PE routers use VPN-IPv4 addresses exclusively for VPNs, the address space is separated between VPNs. In addition, because they use IPv4 internally in the core, which is a different address family from the VPN-IPv4 address family, the core also has independent address space from the VPNs. This provides a clear separation between VPNs, and between VPNs and the core. Figure 3-2 illustrates how different address spaces are used on an MPLS IP VPN core.
Figure 3-2 Address Planes in an MPLS VPN Network
There is one special case in this model. The attachment circuit on a PE, which connects a VPN CE, is part of the VRF of that VPN and thus belongs to the VPN. However, the address of this PE interface is part of the VPN-IPv4 address space of the VPN and therefore not accessible from other interfaces on the same PE, from other core routers, or from other VPNs.
For practical purposes, this means that address space separation between VPNs and between a VPN and the core is still perfect because this PE interface to the CE belongs to the VPN and is treated as a VPN address. However, this also means that addresses exist in the VPN that belong to a PE. Consequently, a PE can by default be reached from a VPN, which might be used to attack that PE. This is a very important case and is discussed in detail in Chapter 5, "Security Recommendations."
VPN traffic consists of VPN data plane and control plane traffic. For the sake of this discussion, both will be examined together. The VPN user's requirement is that their traffic (both types) does not mix with other VPNs' traffic or core traffic, that their packets are not sent to another VPN, and that other VPNs cannot send traffic into their VPN.
On the service provider network, this definition needs to be refined because VPN traffic will obviously have to be transported on the MPLS core. Here, we distinguish between control plane and data plane traffic, where the control plane is traffic originating and terminating within the core and the data plane contains the traffic from the various VPNs. This VPN traffic is encapsulated, typically in an LSP, and sent from PE to PE. Due to this encapsulation, the core never sees the VPN traffic. Figure 3-3 illustrates the various traffic types on the MPLS VPN core.
Figure 3-3 Traffic Separation
VPN traffic consists of traffic from and to end stations in a VPN and traffic between CEs (for example, if IPsec is implemented between the CEs).
Each interface can only belong to one single VRF, depending on its configuration. So for VPN customer "red," connected to the PE on a fast Ethernet interface, the interface command ip vrf forwarding VPN determines the VRF. Example 3-1 shows the configuration for this.
Example 3-1. VRF Configuration of an Interface
interface FastEthernet1/0
 ip vrf forwarding red
 ip address 126.96.36.199 255.255.255.0
Traffic separation on a PE router is implemented differently, depending on the type of interface on which the packet enters the router.
- Non-VRF interface— If the packet enters on an interface associated with the global routing table (no ip vrf forwarding command), the forwarding decision is made based on the global routing table, and the packet is treated like a normal IP packet. Only core traffic uses non-VRF interfaces, thus no further separation is required. (Inter-AS and Carrier's Carrier scenarios make an exception to this rule and are discussed later in this chapter.)
- VRF interface— If the packet enters on an interface linked to a VRF using the ip vrf forwarding VPN command, then a forwarding decision is made based on the forwarding table (or forwarding information base, FIB) of that VRF. The next hop from a PE perspective always points to another PE router, and the FIB entry contains the encapsulation method for the packet on the core. Traffic separation between various VPNs is then achieved by encapsulating the received packet with a VPN-specific header. There are various options for how to encapsulate and forward VPN packets on the core—through a Label Switch Path (LSP), an IPsec tunnel, an L2TPv3 tunnel, or a simple IPinIP or GRE tunnel. All of the methods keep various VPNs separate, either by using different tunnels for different VPNs or by tagging each packet with a VPN-specific header. Figure 3-4 shows how packets are encapsulated within the MPLS core.
Figure 3-4 Encapsulation on the Core
P routers have no active role in keeping traffic from VPNs separate: they just connect the PE routers together through LSPs or the other methods just described. It is one of the key advantages of the MPLS VPN architecture that P routers do not keep VPN-specific information. This aids the scalability of the core, but it also helps security because by not having visibility of VPNs the P routers also have no way to interfere with VPN separation. Therefore, P routers have no impact on the security of an MPLS core.
In summary, VPN users can expect their VPN to be separate from other VPNs and the core because
- An interface on a PE (for example, the interface holding the user's attachment circuit) can only belong to a single VRF or the core.
- The attachment circuit (PE-CE link) to this interface belongs logically to the VPN of the user. No other VPN has access to it.
- On the PE the address information of the VPN is held as VPN-IPv4 addresses, making each VPN unique through unique route distinguishers. VPN-IPv4 addresses are only held on PE routers and route reflectors.
- VPN traffic is forwarded through the core through VPN-specific paths or tunnels, typically tagging each packet with a VPN-specific label.
- P routers have no knowledge of VPNs, thus they cannot interfere with VPN separation.
The service provider can expect its core to be separate from the VPNs because
- PE and P addresses are IPv4 addresses. VPNs use exclusively VPN-IPv4 addresses and cannot access PE and P routers. (Exception: The attachment circuit on the PE, which needs to be secured. See Chapter 4, "Secure MPLS VPN Designs.")
For more technical details on how VPN separation is technically implemented, please refer to RFC 2547bis. | <urn:uuid:91805119-d99c-4aae-b920-42c42493fc9b> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=418656&seqNum=4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00722.warc.gz | en | 0.934882 | 2,128 | 2.609375 | 3 |
Pipelines are an integral part of the oil and gas industry, whether in upstream, midstream or downstream operations. While considered as the safest and fastest way to transport hazardous substances, these critical assets are not error-free. Under the progressive influence of material flows and dynamic environmental conditions, pipelines are susceptible to multiple structural failures. Corrosion, cracks, leakages and debonding are among the most common issues.
Besides significant product loss, pipeline leakages can cause tremendous, irreversible impacts on the environment and wildlife while threatening worker and public safety. According to 2018 research, in the US alone, liquid pipeline accidents cost a staggering $326 million annually. $140 million of this is attributed to environmental and remediation costs. The study also revealed that it generally took 9 hours to identify an accident and another 5 hours for operators to respond.
The Challenge of Granular Asset Visibility
Facing ever-growing pressure from more stringent regulations, price volatility, aggressive environmental movements and a paradigm shift towards renewables, the oil and gas industry is forced to change. Forward-thinking companies have increasingly embraced digitalization to better manage their assets and prevent costly spills. Pipeline monitoring isn’t new in the industry, but traditional SCADA systems cannot provide the granular asset visibility needed. When it comes to hundreds, if not thousands of kilometers long pipelines, knowing what’s happening every meter is critical.
Many pipeline systems started at remote exploration sites where terrestrial networks are unreliable, or absent altogether. Monitoring options are thus limited to labor-intensive manual checks or expensive satellite subscriptions, neither of which enables data collection at the desirable granularity. Even when available, terrestrial connectivity like cellular and Wi-Fi is power-hungry while imposing expensive data plans on endpoints. This makes the cost of implementing and maintaining a large-scale monitoring network daunting to many companies.
The Internet of Things (IoT) with new sensor and communications technologies is changing the game by making asset monitoring easier and more affordable than ever. Low power wide area networks (LPWAN), in particular, introduce a low-cost, power-efficient approach to gathering granular telemetry data. Thanks to its extensive range and star topology, LPWAN can connect massive, geographically dispersed metering points with less infrastructure required. Private LPWAN is ideal in the oil and gas context as its network coverage can be flexibly adapted to companies’ specific needs at offshore and inaccessible locations.
In concert with miniaturized, multi-sensing smart sensors, robust LPWAN unlocks a wealth of critical information pertaining to the structural health of pipelines and their operating conditions. Sensor data can be forwarded to an on-site HMI for immediate counteraction, as well as a central management system and/or a cloud platform for long-term storage and analytics.
Such an IoT-enabled monitoring network can enhance oil & gas pipeline management practices in various ways to reduce costs and downtime, minimize environmental footprint and augment safety and regulatory compliance.
Accelerate Troubleshooting and Responses
Having insights into pipeline integrity around-the-clock means that any abnormalities or deviations can be instantly reported. While dropped pressure apparently indicates a leak, other sensor parameters can help identify structural issues of pipelines much earlier – before a serious spill or fatal explosion happens. For example, ultrasonic and acoustic sensors can report abnormal sound waves that suggest crack initiation and growth alongside delamination. Likewise, magnetic sensors can detect a change in pipeline wall thickness due to corrosion.
Smart sensors can communicate not only early-stage damage but also its location and severity to identify and accelerate actions required. Minimizing elapsed time between a failure and remediation is key to minimizing material losses and contamination caused by released products. Detecting damages from the start also simplifies reparation, resulting in reduced costs and downtime associated with servicing.
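As a simplified sketch of how such telemetry might be screened, the example below flags pressure readings that fall sharply below a rolling average; the sensor values and thresholds are invented for illustration.

# Toy leak screening: flag readings well below the rolling average of recent samples.
from collections import deque

def detect_pressure_drops(readings, window=5, drop_ratio=0.85):
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window and value < drop_ratio * (sum(recent) / window):
            alerts.append((i, value))          # (sample index, suspicious reading)
        recent.append(value)
    return alerts

pressures = [70.2, 70.1, 69.9, 70.0, 70.3, 70.1, 58.4, 57.9, 70.0]
print(detect_pressure_drops(pressures))        # flags the sudden dip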
Enable Advanced Maintenance Strategies
By collecting data on pipeline integrity and functioning conditions over time, failures can even be anticipated and prevented with predictive maintenance strategies. Analysis of previous malfunction modes enables the development of defect growth prediction and risk assessment models. On top of that, long-term integrity deterioration assessment assists in calculating the actual remaining service life of a pipe. This allows for diagnosis of structural bottlenecks alongside strategic planning of maintenance and part replacement to circumvent damages. Predictive maintenance not only helps avoid expensive unplanned outages but also redundant planned downtime that companies with schedule-based approaches want to prevent.
Automate Manual Tasks
An IoT-based condition monitoring network lessens the need for regular field inspection and eliminates manual data logging. Besides minimizing human errors, this contributes to saving costs and improving workers’ productivity as they can focus on more important tasks. Decreased site visits, especially to remote locations, also reduce the total time of truck trips, thereby curbing fuel usage and CO2 emission.
Optimize Asset Utilization and Future Design
IoT sensor data allows companies to analyze and understand pipeline behaviors under different external conditions including structural loads, weather changes, soil characteristics, moisture and pH levels. This information is instrumental in improving future engineering and construction practices to optimize the effective service life of a pipeline. Furthermore, for older pipes that have been in service for several decades, sensor data can validate their integrity for continued safe operations.
The advent of IoT enables unprecedented asset visibility that goes beyond the capability of conventional industrial networks. Deploying an IoT solution doesn’t necessarily require significant upfront investment or dangerous, cumbersome alterations in brownfield systems. Emerging wireless solutions like LPWAN and new sensor technologies can be easily retrofitted to IoT-enable critical legacy assets like pipelines at low costs. More importantly, the immediate impact of an IoT monitoring network on operational efficiency, safety and sustainability will soon outweigh the initial costs and advance companies’ competitive edge in the oil & gas marketplace. | <urn:uuid:d53cfee7-27bd-42fe-9e89-b91516624387> | CC-MAIN-2022-40 | https://www.iotforall.com/how-iot-is-transforming-oil-gas-pipeline-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00722.warc.gz | en | 0.903726 | 1,179 | 2.65625 | 3 |
If you‘re reading this blog, there‘s a very good chance that you‘ve searched for some information on dehashing a password. What is password dehashing? Is it possible to “dehash“ a password in the first place? Why are passwords hashed to begin with? This is what we‘re figuring out in this blog post.
What Is a Hash?
Starting off, we should probably learn what a hash is in the first place. Hashing a string is the process of transforming the string into another value. That‘s all it is – some might say that hashing is simply the process of „passing some data through a formula“ which in the end produces a result.
Everyone knows what a hash looks like:
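As an illustration, the snippet below hashes the same input string with three different algorithms; it assumes Python‘s standard hashlib module and the third-party bcrypt package (pip install bcrypt), and the input string is arbitrary.

import hashlib
import bcrypt  # third-party package: pip install bcrypt

text = b"password"

print(hashlib.md5(text).hexdigest())    # 5f4dcc3b5aa765d61d8327deb882cf99
print(hashlib.sha1(text).hexdigest())   # 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
print(bcrypt.hashpw(text, bcrypt.gensalt()).decode())  # e.g. $2b$12$... (different on every run)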
All of the aforementioned hashes are of a different type – the first hash is of MD5 type, the second one is SHA1, and the third one is BCrypt. There are many different types of hashes available for use – starting from the ones defined above and moving towards hashes unique to a specific system, for example, vBulletin, MySQL, MyBB, Joomla, Atlassian, macOS, Redmine, and others. To see all of the available hash types, head over to Hashcat, but by now you should get the idea of what a hash is and when it‘s used.
Hashing != Encryption
One important thing to note, though, is that hashing is not encryption meaning that all hashes, as such, aren‘t reversible. Hashes cannot be decrypted, but they can be „cracked.“
Cracking a hash essentially means hashing candidate text and comparing the result to the target hash value to see if the two values differ or if they are the same. If the values are the same, the hash is considered to be „cracked“ – if they differ, it remains an „uncracked hash.“ There are entire communities that hoard servers dedicated to just that – to password cracking – and the servers are then put to use in law enforcement investigations or to assist a company or a specific individual. As there are many use cases of hash cracking, there are many use cases of hashing, too – the most frequent one is to hash passwords to protect them from prying eyes in case of a data breach or when complying with a specific privacy regulation (think GDPR and the like).
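A minimal sketch of that comparison process, assuming an MD5 target hash and a small, purely hypothetical wordlist:

import hashlib

target_hash = "5f4dcc3b5aa765d61d8327deb882cf99"   # MD5 of "password"
wordlist = ["123456", "letmein", "qwerty", "password"]  # hypothetical candidates

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
        print(f"cracked: the hash corresponds to '{candidate}'")
        break
else:
    print("hash remains uncracked with this wordlist")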
Hashing to Protect Passwords
Password hashing is a very important practice in the information security world – if our passwords are securely hashed, an attacker cannot recognize their plain-text value by simply glancing at the hash, and even if the nefarious party tries cracking it, safe hashing algorithms (think BCrypt or Blowfish) will still reliably protect our secrets. Sure, some hashing algorithms are weaker than others – security experts wouldn‘t recommend using MD5 to protect our passwords due to the fact that this algorithm is easy to crack and is considered to be outdated – but some algorithms – like BCrypt and Blowfish – still hold their salt.
Salting a password hash is the practice of adding an additional randomly generated string at the end of the password hash, after a colon („:“). Salted hashes look like so:
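A small sketch of what that storage format can look like; the hash algorithm and salt length are arbitrary choices made only for the example.

import hashlib
import secrets

password = "password"
salt = secrets.token_hex(8)                                   # random 16-character salt
digest = hashlib.sha1((password + salt).encode()).hexdigest()

print(f"{digest}:{salt}")   # prints the digest and the salt separated by a colon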
You get the idea. The whole purpose of salting is to make the cracking of huge volumes of password hashes harder for an attacker – contrary to popular belief, salting doesn‘t do anything when a single password hash is being attacked, however, if the attacker is targeting hundreds, or even thousands, of those password hashes, we will certainly see a significant difference.
If you are considering whether to salt your password hashes or not, keep in mind that the bottom line of everything is this – hashes make passwords unrecognizable and in many cases hard to crack for an attacker, while a salt will make an attacker‘s job harder if he‘s attacking multiple password hashes at once.
Password Hashes and Data Breaches
As a data breach search engine, we certainly see a lot of data breaches; some data breaches, surprisingly, involve plain text passwords (did we tell you that there are entire websites dedicated to naming and shaming services that store passwords in plain text?), but some involve password hashes and, in some cases, password salts too. Over the years, we have certainly seen a lot of improvement in this scene, however, people still need to step up – if you are considering building a service where you allow people to register or login, consider using safe and slow password hashing algorithms like BCrypt or Blowfish (preferably with a salt) to be on the safe side.
However, not all businesses that fall prey to data breaches are operating on the safe side – some hash their passwords with a weak function like the aforementioned MD5, while some don‘t hash them at all. And while password hashing isn‘t the solution to absolutely all security problems threatening the company, it can at least let you make sure the data of your customers is as safe as possible.
Password hashing is a good start, but in order to make sure that there are no threats of identity theft waiting for you or your company next door, consider implementing an API that lets your company scan through data breaches and ensure that your staff is safe both day and night. When using BreachDirectory, your company will significantly lessen the risks of data breaches and identity theft allowing you to sleep soundly.
Password hashing is an extremely important part of today‘s security landscape both on the web and elsewhere – hashed passwords provide an additional layer of protection for everyone involved: developers and the managers of the company whose application hashes the passwords can rest assured that appropriate privacy regulations are being followed, and security experts and customers of the business alike can be happy about another business that is taking their security seriously. Make sure to run a search through data breach search engines like the one built by BreachDirectory to assess your likelihood of being exposed in data breaches. Also consider implementing an API into the infrastructure of your company so you can scan the data of your staff and, preferably, your customers against known breaches and alert them of any anomalies. Until next time!
Oracle is, as you're almost certainly aware, the biggest database player in the world - and in a position to maintain that lead for some time yet.
However, the company is facing a problem, and that problem is known as "big data". Simply put, this is the term used to describe massive sets of unstructured (or semi-structured) data, which is a huge headache to work with in terms of database management.
Big data is on the rise - as more and more information is stored online, it's growing exponentially. According to Oracle's President Mark Hurd, the amount of digital data floating around is set to expand from 1.8 zettabytes in 2011, to some 35 zettabytes in the year 2020.
The problem for Oracle is that in dealing with this sort of voluminous and ever-expanding data, given the current economic headwinds, companies are looking towards solutions which are cost effective, and scale up swiftly and easily.
In other words, cheap servers and open source software such as NoSQL databases, and Hadoop for data analysis. Businesses are popping up to service these requirements, and the likes of 10gen are capitalising with the open source MongoDB (recently partnering with Red Hat to deploy across its offerings).
And the problem for Oracle is that its big data "appliance" hardware can be linked to its software, but itself employs open source solutions - and it's more expensive.
So the danger is that as big data becomes bigger, firms may be further prompted to consider options outside the admittedly safe path of Oracle. That's a worry which has to be on Hurd's mind as he ponders the issue.
Source: Wall Street Journal
Kali Linux Scan Network by nmap ping sweep
Hi there, I am happy to see you on my blog.
In this article, I am going to discuss how the nmap ping sweep is used for checking live hosts in the network. In Kali Linux you can scan the network with nmap to get information on the active hosts.
If you want to check out a target system, your first step is to get the basic information of whether the target machine is alive or dead.
"Live or dead" here means whether the system is on or off and whether the IP exists on the network. If a system is not active in the network, you can consider it dead.
If the system is active in the network, it is live. To attack a system, it must be active in the network.
What is Nmap?
“Nmap is one of the best network vulnerability scanning and network security audit tools, used to scan LAN networks. In this article, I will use the Nmap network scanner to scan the network.” It is free network discovery software.
Method 1: Ping Scanning for live host
As you know, the ping command is used to check the connectivity between hosts in a network.
In the ping process, the first system sends an ICMP packet with type 8 and code 0, indicating that the packet is an echo request.
The target machine (the second system) receives this packet and responds with another ICMP packet of type 0, indicating an echo reply.
A successful ping request and response show that the system in the network is a “Live Host”.
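A quick way to script that check is to call the ping binary from Python and look at its exit code; the sketch below assumes a Linux machine (the -c and -W flags) and a placeholder target address.

import subprocess

def is_alive(host):
    # "-c 1" sends a single echo request; "-W 1" waits at most one second for the reply
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

print(is_alive("192.168.56.1"))   # True if the host answered the echo request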
Method 2: nmap Ping Sweep network Scanning
A ping sweep (otherwise called an ICMP sweep) is a fundamental network scanning technique used to figure out which IP addresses in a range map to live hosts (computers).
Although a single ping will let you know whether one specified host machine is up on the network, a ping sweep consists of ICMP (Internet Control Message Protocol) ECHO requests sent to multiple hosts.
If a given address is live, it will give back an ICMP ECHO reply.
Ping sweeps are among the older and slower techniques used to scan a network.
There are various tools that can be used to do a ping sweep, for example fping, gping, and nmap on UNIX platforms.
The nmap ping sweep technique is used for scanning and security testing; it helps find out where the network is exposed.
Method 3: IP Address Scanning Within Ranges by nmap ping sweep
Defining a set of targets using an IP address range is very convenient, and nmap handles such ranges directly as an IP address scanner.
For this example, the address range will be the class C range 192.168.56.x. This means that the greatest number of hosts that might be included in the sweep is 254. To scan a part of that range, use the following command.
#nmap -sn 192.168.56.100-150
The same sweep can be performed using the CIDR method of addressing, with the /24 suffix, as follows.
#nmap -sn 192.168.56.0/24
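If you want to script a comparable sweep yourself instead of calling nmap, a rough Python sketch looks like this; the Linux ping flags, the /24 range and the thread count are assumptions made for the example, and you should only sweep networks you are authorized to test.

import subprocess
from concurrent.futures import ThreadPoolExecutor

def ping(host):
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return host if result.returncode == 0 else None

hosts = [f"192.168.56.{i}" for i in range(1, 255)]
with ThreadPoolExecutor(max_workers=50) as pool:
    live = [h for h in pool.map(ping, hosts) if h]

print("live hosts:", live)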
Method 4: List Scan by using nmap ping sweep
Nmap can additionally use a text file as input for the target list. Assume that the target addresses are stored in a file called targets.txt.
The scan can then be performed by using the following command:
#nmap -iL /path/to/targets.txt
MODULE 5:- Scanning Network and Vulnerability
- Introduction of port Scanning – Penetration testing
- TCP IP header flags list
- Examples of Network Scanning for Live Host by Kali Linux
- important nmap commands in Kali Linux with Example
- Techniques of Nmap port scanner – Scanning
- Nmap Timing Templates – You should know
- Nmap options for Firewall IDS evasion in Kali Linux
- commands to save Nmap output to file
- Nmap Scripts in Kali Linux
- 10 best open port checker Or Scanner
- 10 hping3 examples for scanning network in Kali Linux
- How to Install Nessus on Kali Linux 2.0 step by step
- Nessus scan policies and report Tutorial for beginner
- Nessus Vulnerability Scanner Tutorial For beginner | <urn:uuid:010fd46e-cf9d-4aae-9b90-baf52ea4f640> | CC-MAIN-2022-40 | https://www.cyberpratibha.com/blog/network-scanning-for-live-host-by-kali-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00122.warc.gz | en | 0.878292 | 927 | 2.578125 | 3 |
Password managers, 2FA, encrypted messaging apps, antimalware, secure file sharing — people are starting to really pay attention to their online security. But there’s one more tool you can use to take your cybersecurity game one step further — a virtual private network (VPN). Read on to find out what it is, how it works, and why you should start using it.
How does VPN work?
Whenever you go online, your browser connects to your internet service provider (ISP) first. They redirect you to where you want to go, and that’s that. But this means that everything you do online goes through your ISP first, and they can see what websites you visit and how much time you spend there. Furthermore, if the site does not have an SSL/TLS protocol in place, your ISP will see everything you do on that website, what you click on, and what information you enter.
That is alarming all by itself. But unfortunately, it gets worse — ISPs have the right to give away this information about you to the authorities, sell it to advertising companies, or use it any way they want. Ever wondered why you keep seeing an ad for that pair of sneakers you googled once? That’s because your ISP shares this data with advertising companies so they can bombard you with targeted ads.
A VPN can change the situation drastically. If you connect to a VPN, it does two things before connecting to your ISP:
It redirects your internet traffic through one of its servers. A VPN provider usually has thousands of servers in all corners of the world, so you can choose one that’s closest to your home for the fastest connection. When you connect to the server, you also get its IP address — your real one stays hidden as long as you are using a VPN.
Once your device connects to the server, the VPN encrypts your data. It becomes impossible to read — your ISP, government agencies, and hackers are unable to see what information your device is sending or downloading. A VPN also hides your online identity by changing your IP (and, therefore, your virtual location) and shielding your activities from prying eyes.
Your data is at risk
You might think you have nothing to hide, but it’s far from the truth. You may not care about advertisers getting your data or the possibility that the government will demand to see logs of your online activity. However, you can’t be sure that your sensitive information won’t end up in criminals’ hands.
If a hacker intercepts your connection, they can see everything you do online, as well as your usernames and passwords, Social Security number, banking information, personal and work emails, and much more.
Do you have a lot of smart devices in your house? Cybercriminals can also use most of them to attack you. Think about your baby monitor, smart TV, Alexa, or smart locks. These devices have microphones and cameras that malicious actors can employ to harass you or gather valuable information that can be used to blackmail you. If you don’t have a secure home network, smart devices are a threat that should be taken seriously.
Why you should get a VPN
Getting a VPN comes with multiple benefits, but one of the main reasons people decide to use it is security — both at home and while traveling.
A VPN allows you to bypass government censorship and surveillance. Depending on where you live, this could make a world of difference. Many countries have certain internet regulations — some block particular services and websites; some don’t want you to connect to the world wide web altogether. A VPN makes these restrictions go away by changing your IP and virtual location. This way, your ISP doesn’t know where you go online, so they can’t block your access or track your actions.
Even if you don’t leave your country while you’re on a business trip and censorship is not an issue, you still need to think about cybersecurity. Airports, hotels, conference centers, coffee shops — these are the usual places that business travelers look for a Wi-Fi connection.
Open Wi-Fi hotspots are notoriously dangerous and very easy for a cybercriminal to fake. Would you stop to consider before connecting to a Wi-Fi called “BurgerKing_Airport_WiFi”? It could be a fake hotspot created by a hacker to trick people into disclosing sensitive data that can be later used for social engineering attacks.
Genuine public Wi-Fi hotspots are not safe to use since they are most often unencrypted. It makes it very easy for someone else on that same network to intercept your connection or even inject your device with malware. By using a VPN, you encrypt everything you send out and receive. Your information stays unreadable, even if someone manages to get their hands on it.
If you use streaming services frequently, you might have noticed that they do not offer the same content in different countries. But if you bought a subscription, it’s reasonable you would want to be able to watch your favorite series while traveling. A VPN allows you to stream TV programs, sporting events, and movies seamlessly. It grants you access without compromising your security.
A VPN is also very useful during popular events, like the Superbowl. When a lot of users are streaming at the same time, some ISPs start to throttle their customers’ bandwidth, which results in buffering and other connection issues. By using a VPN, you reroute and hide your connection. Therefore, your ISP is unable to see what you do online and interfere with your activities.
A VPN is, without a doubt, a gamer’s best friend. It guarantees a stable and fast connection, protects you from distributed denial of service (DDoS) attacks, and allows you to bypass geo-blocking. Often new games are released at different times in different regions of the world. With a VPN, you are guaranteed to get that new shooter game the moment it hits the virtual shelves. No more waiting for weeks and even months for it to come out where you live.
Stay safe, use a VPN
A VPN gives you an additional layer of security and anonymity — and peace of mind. With a VPN, you can rest assured that you’re the only one who knows what you do online. No more ISP or government surveillance and restrictions means you get to enjoy the internet the way it was supposed to be — open and free to everyone.
VPNs work on all popular devices — Windows, macOS, Android, iOS, Android TV, Linux, and more. You can set it up on your gaming console or even the Wi-Fi router if you want to protect your whole network all at once.
VPNs no longer require you to have a lot of technical know-how. User-friendly interfaces mean that everyone can use them to protect themselves from online threats, bypass censorship, avoid surveillance, and enjoy quality entertainment no matter where they live.
In this article, we will show you how to do a basic Linux Server OS install using Ubuntu Server. Linux is an extremely popular operating system in our field. Many system builders will build their platform on Linux. As a result, having the skills and experience with any version of Linux can help you navigate those platforms. Ubuntu Server is Open Source, and available for free, and can be installed on nearly any platform, physical or virtual. This makes it a platform for lab use, as well as production.
The Install Process
Download the media
Go to https://ubuntu.com/download/server and click on Option 2 -Manual server installation
Prepare the media
How you prepare the media depends on what you’re installing onto. If it’s a physical machine, you’ll likely create a bootable USB drive; if you’re going to run a virtual machine, you can just mount the ISO file directly. Rufus is a great tool for creating bootable USB drives for any bootable image.
Boot from the Install Media
The first thing you’ll do is select your language. Use the arrow keys to navigate the list and then press enter to select.
Next, select your keyboard layout.
Then, select a Network Interface. In our case we only have a single network interface, named ens33, and it is connected to the network and getting a DHCP address.
A Proxy is sometimes used to connect to the Internet. All traffic is sent to a proxy address so it can be scrubbed to ensure security. If this is a home or lab network you likely do not have a proxy. Leave the line blank and press enter.
Ubuntu Archive Mirror Address is the location on the internet where Ubuntu will download updates from. Leave the default here and press enter.
Next, configure your local storage. By default (recommended) you can just use the entire disk. However, in a production environment, you may want to be more specific about partitioning the storage.
Review the Storage Configuration Summary and then use the arrow keys to navigate to Done and then press enter.
You’ll be warned that the disk will be formated and all data will be lost. Use the arrow keys to highlight and select continue by pressing Enter.
Profile Setup – Here you enter in your name, the server’s hostname, your username, and then your password. This is the first user and will also be an administrative level user with Root privileges.
Press the Space Bar to select OpenSSH server and then use the arrow keys to navigate down to, and select, Done by pressing enter. OpenSSH server will allow you to remotely access the server via SSH.
Ubuntu is now being installed. Monitor the progress here. It will take several minutes to complete.
Once the installation is complete you’ll see the Reboot Now option at the bottom of the screen. Use the arrow keys to highlight it and then press Enter.
You will be prompted to remove the installation media so that upon reboot the installation process doesn’t start all over again.
Post Installations Tasks
After the installation completes there are a few things you may want to do, such as applying updates, setting a statically assigned IP address, or adding additional users.
After rebooting you’ll be at a login prompt. Enter in the username and password that you created earlier to get going.
Download and Apply Available Updates
First, let’s download and apply package updates that may have come out since the build was created. To do that we’ll use a couple of commands. The first is ‘sudo apt update’ – for first-time Linux users, let’s break that down. Sudo is short for “superuser do” – basically the “run as administrator” of the Linux world. Apt is short for Apt-get or Aptitude, and is a package handler for Debian flavors of Linux. On Red Hat flavors of Linux, such as RHEL (Red Hat Enterprise Linux), CentOS, and Fedora, you would use Yum as the package handler.
This refreshes the package database and can tell you how many packages have updates available. In the above screenshot we see 87 packages can be upgraded. To get a detailed list of available updates we can run “apt list --upgradable.”
Here we have a list of all of the packages, in green, listed with a / and then the latest version of that package, followed by a set of square brackets with the currently installed version within.
To execute the upgrade we can run sudo apt upgrade. This will list all of the packages that have updates available, and the size on disk these updates will take to install. In this example, we see the updates will take up 399 MB of disk space.
At the prompt press Y and then enter to continue.
The package manager will go through and apply all of the available updates.
Setting a Static IP
A statically assigned IP address makes management of a remote host a little bit easier in that you’ll always know what the IP address is for that host. First let’s view the IP Address information for our host. Use the command ‘ip address’
In this example we can see this device has two network interfaces, the Loopback which is lo here, and the Ethernet Adapter, ens33. To view the IP info for a specific adapter you can use the command ip address show dev [device name]:
Newer versions of Ubuntu use netplan to manage Network Adapters. There’s a folder under /etc/ called netplan that holds YAML configuration files for each network adapter. We can modify these files and set the desired configuration.
First let’s look at the files in the /etc/netplan folder. To do this, run the command ‘ls /etc/netplan’
On this host, we only have the one file 00-installer-config.yaml. Your system may show more files depending on how many adapters are installed. Let’s open that file and change the settings. Use the command sudo nano /etc/netplan/FILE-NAME.
First, let’s start by changing the dhcp4 key value from true to false. Use your arrow keys to navigate to that line. Then we’ll add the following: addresses, gateway4, and nameservers. Pay particular attention to spacing. YAML files will not process correctly if the spacing and indentation are not correct. Your file should look something like this:
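Since the original screenshot is not reproduced here, the layout below is only an illustrative example with placeholder values (adapter ens33 on a 192.168.1.0/24 network); adjust the adapter name and addresses for your environment and keep the indentation exactly as shown.

network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]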
Press control X to exit, and then press Y and Enter to confirm and save the changes you’ve made. Now let’s go refresh Net Plan to pick up the changes. We do this by running the command ‘sudo netplan apply.’ You may be prompted to enter in your password again.
There isn’t much for feedback here so let’s go ensure the changes took affect with IP address show dev [DEVICE NAME]
And now, we can see that it is, in fact, using the IP we configured in the netplan YAML file. We can further verify things are working by using the PING command to ping our local gateway, a DNS server out on the public internet, and we can verify DNS is working using the nslookup command.
Lastly, let’s add some users. Perhaps we want to add users to our lab machines so we have extra accounts we can do testing with. In an Enterprise environment it’s just a generally accepted best practice to give each user their own account. This is part of Authentication, Authorization, and Accounting. We need to know who the user is, give them the bare minimum privileges they need to do their work, and then log and verify their access to that system. If everyone shares the same user account we can’t tell the difference between when one person or another uses it.
To add a user account we’ll use the command adduser. Let’s add the rest of the AONE Co-Hosts to the server. The syntax is ‘sudo adduser username’. Be prepared to enter in a password for the new user accounts.
The system also prompts you for some additional, but optional information. We can verify the user accounts have been added by listing all of the folders in the /home/directory, by typing ls /home/:
Here we can see I’ve created a new user account for each AONE Podcast Co-Host. Now, let’s add one of them to Super Users, or the sudo group, so they have administrator rights on the system. We do that using the usermod command with the -aG switch.
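For example, to put a user named jasmine (a hypothetical account) into the sudo group, the command would look like this:

sudo usermod -aG sudo jasmine

You can then confirm the membership by running ‘groups jasmine’.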
In this article we showed you how to:
1. Install Ubuntu Server
2. Complete common post-installation tasks: Applying Updates, set a Static IP, and add additional user accounts.
Ubuntu is an extremely popular platform as it is Open Source and easy to learn Linux on. There are many other flavors of Linux out there, so do some research and find one that fits you the most! | <urn:uuid:931f6649-b1e6-4205-a980-165ac0eb3acc> | CC-MAIN-2022-40 | https://artofnetworkengineering.com/2022/01/24/how-to-do-a-basic-linux-server-installation-using-ubuntu/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00322.warc.gz | en | 0.885224 | 1,923 | 2.75 | 3 |
The Metaverse was a massive topic in 2021, creating all kinds of conversations about what it is, the kind of business opportunities it represents, and how to be a part of it. The metaverse has generated a lot of attention lately, it is hard to find anyone who has not heard about it yet. To prove that the concept is here to stay, Facebook changed its name in October to Meta.
A bit of history about the Metaverse
The term was created by Neal Stephenson in the 1992 science fiction novel Snow Crash. In that book, the Metaverse was described as a virtual reality, in which people were represented in three dimensions through images or avatars.
What is the concept of the Metaverse?
“Meta” comes from Greek and means “beyond.” For some, it is a way to communicate with people via the Internet, in a completely virtual way, but in an environment that simulates the physical and real world.
Some people may consider the Metaverse not as a virtual place, but as an era where our lives are based on the digital world. The game Second Life was created about 19 years ago, and it was one of the first platforms to introduce the idea of a Metaverse, where people could create an avatar and interact with other people around the world in a digital space. Users of the game Second Life can now even invest in it.
Nowadays, there are a few video conferencing applications where you can create an avatar and a room, like your office; and you can see all your colleagues not just in a square video, but in 3D. This can be considered a kind of Metaverse because, in practice, we don’t really see the person but a digital representation of them on our screen.
What activities can be done on the Metaverse?
In this environment, we will be able to carry out different activities in the Metaverse, or in several Metaverses, interconnected or not. For example, study in a virtual classroom with your teacher on another continent and your classmates from many different countries. Watching a sports match, attending a concert, or interacting with the characters of your favorite movie.
Many of these activities are already a reality today. For example, the singer Ariana Grande held a concert in Fortnite’s Metaverse, along with other artists like Travis Scott.
Other games such as Roblox are also entering the Metaverse with their environments, creating a world of possibilities for the users.
According to developer Epic Games, “Travis Scott’s in-game performance was seen by 12.3 million live viewers. The Ariana Grande event is likely to top that and persuade many more artists to engage in the Metaverse.”
Are we in the Metaverse yet?
Currently, video games offer the closest experience to the Metaverse concept. Developers have expanded the limits of games through events and with the creation of complex virtual economies.
One of the main points that should bring companies to an exponential adoption of the Metaverse is the possibility of creating thousands of jobs and work opportunities for people around the world regardless of their geographic location. People will be able to offer services and be paid for doing so in a virtual environment, and what’s more important, they will be “inside” the Internet instead of connecting to it.
We will undoubtedly see new applications directed to the Metaverse and its evolution, but we already know that it is not just a social network that you need to put on VR glasses to use or just a game in a virtual world. Millions of people are already participating in virtual worlds daily (and spending tens of thousands of hours per month inside them) without VR/AR devices. We should not expect a precise definition of the term “Metaverse,” especially at a time when the Metaverse has begun to exist.
The Metaverse impact on the Internet infrastructure
The Metaverse will not replace the Internet but instead, it will be built on it and transformed along the way. EdgeUno as an Internet infrastructure operator, finds various challenges in this new era. What impact will this technological evolution have on the Internet infrastructure? Is the current infrastructure ready for Metaverse applications?
One of the most relevant applications for this topic is virtual reality, which is known to be a great bandwidth consumer. To bring a level of realism to the user, the quality and latency of the network are crucial.
But it’s not just latency that matters, as VR is an application with high bandwidth consumption, the content must be hosted on an Edge Computing platform, and with infrastructure close to the end-user we will be able to provide high-capacity bandwidth along with low latency. “The concept of Edge Computing aims to reduce network latency, bringing computing power to an infrastructure close to user access.”
What technology is needed for the Metaverse?
The main reason for the evolution of the Metaverse is primarily due to advances in technology, including:
- Hardware: Extended Reality (and that’s where AR, VR, XR come in),
- Computing: The growing prevalence of cloud services
- Networks: new broadband Internet access networks such as 5G, FTTH, and low-orbit satellite networks
- Payment: more sophisticated and reliable blockchain technology for the development of applications, mainly economic ones, that demand trust between the parties in a distributed way
- Devices: Finally, technological evolutions of devices that have occurred in recent years, miniaturizing processors, and allowing high computational power with low energy consumption and at an affordable cost.
The Metaverse will require countless new technologies, protocols, and innovations to function. And it won’t come into existence instantly; there will be no precise time to think of a “Before and After Metaverse.” Instead, it will emerge over time as different products, services, and capabilities integrate and merge.
Who owns the Metaverse?
The Metaverse is likely to see a more accelerated transition in the coming years, with a wide range of services emerging. And this news will not be led by just one company or a few large ones, but by a combination of them with several startups trying to build experiences for the Metaverse.
Many companies are trying to position themselves to benefit from this new concept. There is no other way to genuinely do this without taking advantage of the cloud and Edge Computing technologies. This infrastructure is highly effective for making new products and services evenly and quickly available.
Another great example of what can be done within the Metaverse is Star Atlas, a blockchain-based video game built around a massively multiplayer metaverse that takes the user to space and uses Unreal Engine 5’s Nanite to create cinematic visual experiences in the game.
Although Star Atlas is still a work in progress, it demonstrates that it is just the tip of the iceberg. For technology companies, video game developers, and internet and infrastructure providers, the Metaverse is here, and everyone wants to be a part of it.
Embedding Code Into Vectors
Vector representations of code
As we have stated over and over in the past, the most critical step in our ongoing project of building a machine learning (ML) based code classifier will be that of representing the code as vectors. In our last article, we reviewed how this is done for natural language. We looked at simple, though inconvenient, methods, such as one-hot and categorical encoding, which we actually used in our first classifier attempt. We also took a glimpse at the state of the art in vector representation of language, which is based on neural networks, called word2vec.
Recall that this is a neural network which attempts to predict neighboring words from the central word. Its dataset is made up from a corpus, and the results will be very dependent on the sense of this text. While the task is certainly useful, we don’t care so much about the predictions, but rather about the weights which make up the network, which give us an intermediate representation. These are the vectors we are looking for to represent each word:
Figure 1. The vectors are implicit in the middle layer.
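To make the idea that the vectors live in the middle layer concrete, here is a tiny numpy sketch: a one-hot encoded word multiplied by the input-to-hidden weight matrix simply selects one row of that matrix, and that row is the word's embedding. The vocabulary, dimensions and random weights are all made up for illustration; in a trained model the weights are learned.

import numpy as np

vocab = ["code", "vector", "function", "name"]   # toy vocabulary
hidden_size = 3                                  # embedding dimension

rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), hidden_size))   # input-to-hidden weights (learned in practice)

one_hot = np.zeros(len(vocab))
one_hot[vocab.index("function")] = 1.0

embedding = one_hot @ W                          # equals W[2], the row for "function"
print(embedding)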
A network similar to word2vec is based on an even simpler task: predict a vector from that same vector. Well, isn't that just an identity function, mapping every object to itself? As it turns out, no, given that the weights in a neural network are randomly or arbitrarily initialized and will be optimized to the task in an iterative process. For this task, with a single hidden layer, we get something similar to word2vec, called an autoencoder:
Figure 2. Autoencoder neural network via Carnegie Mellon.
Autoencoders turned out to be a foundational idea in neural network based dimension reduction, the task of representing high-dimensional objects (such as one-hot encoded words) as lower-dimensional vectors, while still retaining most of the useful information contained in it. Before that, most methods relied on matrix decompositions, but these tend to be computationally expensive and not scalable, thus not fit for representation of large codebases or natural language corpora.
Keeping these two neural network based ideas in mind, namely that vector representations can be obtained from middle layers in networks designed for different, though seemingly frivolous, tasks, one can begin to understand how vector representations of code might come to be. What will be the goal? Predicting the name of a function. What will be the input data? Not too surprisingly, it will be the code of the function whose name we would like to predict. In what form? That's where the waters get a little murky, since there are so many ways to structure code, and so many representations to extract from it. Our readers might already be familiar with the Abstract Syntax Tree, Version Control (git) history and the control flow and program dependence graphs. One can even simply choose not to represent anything: consider the code as a sequence of words without exploiting its syntax, and represent them as bags of words, as we did in our previous run. One could also look at the meta-code: metrics such as modified lines per commit, code churn, and cyclomatic complexity; all of those could be thought of as possible candidates for inputs to a neural code classifier.
One of the most apt of such representations is the Abstract Syntax Tree (AST), which is universal in the sense that it can be taken out of every language in existence, and could potentially be standardized so as to eliminate the language barrier. Indeed, this is the input representation chosen by the code2vec authors. More specifically, they sample some paths in the AST at random. The labels in the training phase, which would be the prediction targets later, are the function names. The objective is to predict meaningful function names from the function's content. If the body is return input1 + input2, it would seem obvious to a human to call that function add, or something even more descriptive. However, this is not the case, as there are developers who give strange names to their identifiers and methods, such as perform_binary_operation1. Maybe to make their code hilariously obscure and keep their jobs forever, since nobody else would understand it; or just make it sound like it does more than in reality.
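As a small illustration of what an AST-based representation exposes, Python's built-in ast module can parse a function like the one above and list the node types that a path-extraction step would walk over; this is only a toy sketch, not the code2vec extraction pipeline itself.

import ast

source = """
def mystery(input1, input2):
    return input1 + input2
"""

tree = ast.parse(source)
func = tree.body[0]            # the FunctionDef node
print(func.name)               # "mystery": the label a code2vec-style model tries to predict
print([type(node).__name__ for node in ast.walk(func)])
# node types include FunctionDef, arguments, arg, Return, BinOp, Name, Add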
The network itself, used to predict the function names from the randomly-taken paths in their abstract syntax trees, is a little more complicated than an autoencoder or a single-layer word2vec model:
Figure 3. code2vec architecture. From Alon et al. (2018).
Notice that in this diagram, unlike most other seen here so far, the layers are the arrows, thus the objects are the intermediate products obtained after passing through the layers. Of course, we will be most interested in the green circles above, which are the code vectors we are looking for, and not so much in the final predictions. Not that the task of predicting function names is uninteresting, but it is not related to our interests. We only want the code vectors in order to pass them on to our classifier to determine the likelihood of containing security vulnerabilities.
code2vec offers pre-trained models and others that can be further trained. So on my to-do list, the top priority now is to figure out how this works in detail and how to remove the final layer, the one that gives the prediction, and just keep the vectors. If that sounds interesting, stay tuned to our blog.
U. Alon, M. Zilberstein, O. Levy, and E. Yahav. code2vec: Learning Distributed Representations of Code. Proc. ACM Program. Lang., Vol. 3, No. POPL, Article 40. January 2019.
Z. Chen and M. Monperrus. A literature study of embeddings on source code. arXiv.
High Availability (HA) firewall clusters are designed to minimize downtime for critical systems through the use of redundant systems. HA firewalls can maximize the availability of critical services using various clustering modes, such as active/active vs. active/passive. In the Active/Active mode multiple firewalls actively share the load across the cluster, while in the Active/Passive mode one firewall is a standby that becomes active when the primary firewall fails. In this article we discuss what an HA firewall is, the benefits and drawbacks of different clustering modes, and how modern hyperscale network security technologies enable cloud-like elasticity and scalability for on-premises networks that require resilient systems.
The goal of a HA firewall deployment is to eliminate single points of failure within an organization’s network infrastructure. Instead of using a single firewall to protect the network, two or more firewalls are deployed in a group as a cluster.
These firewalls synchronize with one another using a heartbeat connection, which informs one firewall if the other has gone down. If this occurs, the redundant firewall can seamlessly failover existing connections, providing continuous protection without interruption.
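Conceptually, the heartbeat logic is just a periodic liveness check with a failover action once too many checks are missed. The sketch below is a simplified illustration of that idea, not vendor code; the peer address, check interval, miss limit and takeover action are all placeholders.

import socket
import time

PEER = ("10.0.0.2", 9999)      # hypothetical heartbeat address of the partner firewall
MISSED_LIMIT = 3               # consecutive missed heartbeats before failover

def peer_alive():
    try:
        with socket.create_connection(PEER, timeout=1):
            return True
    except OSError:
        return False

def take_over():
    print("peer considered down: claiming the virtual IP and becoming active")

missed = 0
while True:
    missed = 0 if peer_alive() else missed + 1
    if missed >= MISSED_LIMIT:
        take_over()
        break
    time.sleep(1)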
HA firewalls can be deployed using various clustering node configurations; common configurations include active/active, active/passive, and N+1.
Load balancing implies that all nodes in the system are active all of the time. Some HA node configurations perform load balancing, such as active/active configurations. However, some node configurations, such as active/passive are generally not load-balanced. At any time, at least one node in the system is not active, either because it is a backup node or a node has failed and another node has assumed its role.
In some cases, an organization may implement N+1 and similar configurations using load balancing. The redundant nodes that would usually be offline remain active and have traffic load balanced to them until a primary node goes offline. If this occurs, the “backup” node assumes its duties.
Most firewall vendors offer clustering solutions where the firewalls communicate together to form the cluster. Another option is to deploy multiple firewalls “sandwiched” between Server Load Balancers, also called Application Delivery Controllers (ADC). In this architecture, network traffic is load balanced to the group of firewalls, providing a more scalable and highly available security infrastructure.
The Server Load Balancers direct traffic equally across the firewall members of the cluster. In general, load balancing provides numerous benefits, including greater scalability and higher availability.
Often, support for active/passive node configurations is built into a firewall solution. However, in order to implement configurations that are dependent on load balancing, an ADC must be deployed in front of and behind the firewall cluster. However, this can create additional firewall management challenges, such as asymmetric routing, managing encrypted traffic, and the scalability of the solution as the size of the cluster grows. Another challenge is the management of multiple products, i.e. the ADC and the firewalls.
Check Point offers multiple solutions for customers looking to deploy an HA firewall. If an organization wants to implement a simple HA firewall cluster with up to 5 nodes, this can be accomplished using the built-in HA and load sharing functionality described in Check Point’s firewall documentation.
Check Point Quantum Maestro is another Highly Available firewall option that is a scalable load balancing solution that does not require third party Server Load Balancers. With Maestro, multiple Next Generation Firewalls can act as a single, unified system. The entry level Maestro solution includes a Hyperscale Orchestrator plus two or three firewalls and additional firewalls can be added as needed to seamlessly scale security throughput.
One or more Maestro Hyperscale Orchestrators distribute internal and external network traffic equally across multiple firewalls managed as a single group with a common security feature set and policy, also called a Security Group.
Maestro HyperSync clustering technology provides full redundancy within a system. At the same time, traffic is balanced across all logical Security Group members, ensuring all hardware resources are fully utilized. Within a Security Group each connection is synchronized to two security group members, an Active and a Backup member, ensuring there is not a single point of failure. Maestro’s benefits include: | <urn:uuid:6331eef5-d04d-4735-acaf-e6601138414f> | CC-MAIN-2022-40 | https://www.checkpoint.com/cyber-hub/network-security/what-is-firewall/high-availability-ha-firewall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00322.warc.gz | en | 0.916077 | 873 | 2.703125 | 3 |
One of the most widely discussed issues throughout the world today is the rapidly increasing price and demand of energy supply. At the same time, the broadening awareness of the environmental impact and depletion of fossil fuels has created a natural drive towards energy saving and renewable energy sources. A UPS is an indispensable prerequisite for a reliable power infrastructure able to achieve maximum load safeguarding and conservation. Most UPS suppliers have introduced ECO Modes of operation to further increase the levels of efficiency of the UPS. This white paper analyzes the drawbacks of ECO Mode types of operation and highlights what elements should be considered when using these modes of operation.
The significance of Juneteenth and what it means from a global perspective
- Posted on June 17, 2022
Juneteenth takes place on June 19, and as part of our new Inclusion & Diversity podcast series, we brought together four people from INSPIRE, our Black Employee Network to ask them what Juneteenth means to them.
The conversation was hosted by Mina Rabideaux, Global Employee Network Program Lead. The panel included: Chanelle Schneider, a content specialist and creative lead for INSPIRE; Eunice Kyereme, global communications lead for INSPIRE and an infrastructure consultant based in the northeast of the U.S.; Frederick Douglas Williams, global PR lead for INSPIRE and a UX designer in New York; and Jasmine Blackmon, a senior consultant within the delivery management talent community and INSPIRE member.
Each was asked, in one word, what Juneteenth means to them. Jasmine said that to her it means freedom. For Frederick, it was connection. Chanelle said for her it means history, and for Eunice it means power.
For the group, the Juneteenth holiday is about remembering and commemorating the emancipation of enslaved people in the U.S. The holiday was first celebrated in Galveston, Texas, where on June 19, 1865, in the aftermath of the Civil War, enslaved people were informed they were free under the terms of the Emancipation Proclamation, which had taken effect in 1863. They also hold it as an important reminder of the possibility of changing the future of the U.S.
Cybersecurity has never been more important than it is today. With so many companies getting breached in different ways, one of the easiest ways to protect yourself is with a secure password. Today’s hackers have become very capable of attacking and compromising even the most trusted computing environments. However, employing strong passwords and using them properly will help to safeguard company and personal information.
9 Guidelines for Creating Secure Passwords
Below are some guidelines to follow when creating and maintaining your passwords:
Use at least 10 Characters in your Passwords
Choose a password that is at least 10 characters long since they’re harder to crack than shorter ones. Consider using longer lengths or phrases for even greater security.
Complexity Requirements – Use the 10+4 Rule
Consider using a 10+4 rule: use at least 10 characters and mix in all four character types (upper case letters, lower case letters, numbers, and special characters). Make sure to spread the numbers and special characters throughout the password, rather than just at the start or the end.
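If you would rather generate such a password programmatically than invent one, a small sketch using Python's secrets module could look like this; the length and special-character pool are arbitrary choices that follow the 10+4 idea described above.

import secrets
import string

def generate_password(length=14):
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, "!@#$%^&*"]
    # guarantee at least one character from each of the four pools
    chars = [secrets.choice(pool) for pool in pools]
    all_chars = "".join(pools)
    chars += [secrets.choice(all_chars) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())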
Avoid Personal\Public Info and Common Words
Information such as your name and address is readily available and will be a first choice for hackers generating thousands of combinations quickly from dictionary words. Consider using random characters instead.
Avoid Sharing Passwords Across Accounts
Minimize risk and use different passwords for each account you have.
Avoid Storing Your Passwords in a Document
Documents containing your passwords should never be stored at your desk, on your computer, the network, or the cloud.
Never Share Your Credentials with Others
Keep your credentials to yourself. If someone needs your password, they should go to IT for assistance.
When Changing Passwords, Change it Completely
When changing your password, try to change your password completely vs. adding an extra character or number at the end.
Avoid Changing Your Passwords Too Frequently
A well-developed password can last for three months or more. This will discourage using passwords that leverage patterns such as Password1, Password2, and so on.
Consider Using a Password Management Tool
Applications such as LastPass and Keeper are very good for managing and/or storing passwords, and have the ability to generate strong passwords. Many applications today have the ability to sync across devices and fill in forms automatically.
Does your software have business rules in place to automate secure password best practices? Contact Liventus today to see how we can improve your software and existing security applications. | <urn:uuid:3a7fda46-a068-440f-b7c4-32a3f9e6c411> | CC-MAIN-2022-40 | https://www.liventus.com/guidelines-for-creating-secure-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00322.warc.gz | en | 0.918472 | 512 | 3.03125 | 3 |
While vulnerability scanning tools are invaluable for pentesting, you might be wondering how to use them for auditing your application and network for vulnerabilities. This is when automated vulnerability scanners come into play. They are quite powerful tools but require a bit of configuration work. This blog will look at what automated vulnerability scanning is, why it is useful, and why you need an automated vulnerability scanning tool to detect and fix security vulnerabilities.
What is Vulnerability Scanning?
Any weakness in an information system, internal control, or system process that cybercriminals can exploit is known as a vulnerability. Vulnerability scanning is performed to detect and remediate these vulnerabilities. It can be done either manually by the team or by automated software that manages different types of vulnerabilities.
Automated vulnerability scanning is different from manual vulnerability scanning, in which a human examines an application or system and searches for vulnerabilities.
What is Automated Vulnerability Scanning?
Automated vulnerability scanning is a type of vulnerability scanning in which systems or applications are scanned using automated tools. This process is usually performed by vulnerability management software or vulnerability management services.
Automated vulnerability scanning tools are strong at auditing, logging, threat modeling, reporting, and remediation. Using an automated web vulnerability scanner has many advantages, such as:
1. Risk Assessment
Consistent scanning helps the cybersecurity team gauge how effective the security controls over the organization’s systems are. If there is a constant need to fix bugs, however, the security team’s processes should be scrutinized.
2. Pro-active security
If all the applications are scanned beforehand for all the bugs, it can prevent cybercriminals from attacking the system.
3. Time management
Scanning is not a trivial task and should be automated. This helps reduce the workload and the human hours required.
How does Automated Vulnerability Scanning Works?
Automated Vulnerability Scanning works in four different steps. Let us understand them one by one:
1. Identifying the vulnerabilities
A web application security scanner or a vulnerability scanning tool uses a vulnerability database to detect security vulnerabilities in the target system. The tool probes into different areas of the target system, based on pre-defined rules, and looks for response patterns that indicate potential web application vulnerabilities.
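As a highly simplified illustration of that probing step, the sketch below sends requests to a few paths and looks for response patterns. The target URL, paths and signatures are placeholders, real scanners use far larger rule sets, and you should only scan systems you are authorized to test; the snippet assumes the third-party requests library.

import requests  # third-party package: pip install requests

TARGET = "https://example.com"                      # placeholder target
probes = {
    "/phpinfo.php": "phpinfo()",                    # exposed debug page
    "/.git/HEAD": "ref:",                           # leaked source repository
    "/backup.zip": None,                            # any 200 response is suspicious
}

for path, signature in probes.items():
    resp = requests.get(TARGET + path, timeout=5)
    if resp.status_code == 200 and (signature is None or signature in resp.text):
        print(f"potential issue: {path} looks exposed")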
2. Risk evaluation
The vulnerability identified should be weighed using a scoring system to check its severity and the impacts on the system. This is usually done by using the CVSS score combined with the potential damage caused by a certain vulnerability.
The treatment of the security breach should start with prioritization. The vulnerabilities should be classified according to their score, and thereby an inventory should be created to remediate them. A comprehensive vulnerability assessment results in specific guidelines for fixing the vulnerabilities.
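A toy example of that prioritization step, sorting findings by their CVSS score and bucketing them into the standard severity bands; the findings and scores are invented for illustration.

findings = [
    {"name": "Outdated TLS configuration", "cvss": 5.3},
    {"name": "SQL injection in login form", "cvss": 9.8},
    {"name": "Missing security headers", "cvss": 3.1},
]

def severity(score):
    # CVSS v3 bands: critical 9.0+, high 7.0+, medium 4.0+, low below that
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{severity(f['cvss']):8} {f['cvss']:>4}  {f['name']}")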
Any breach found, tested, and treated should be reported in an impeccable way for creating future awareness. The vulnerability scanning report should contain the details of the test cases, an executive summary for common understanding, suggestions against each vulnerability, etc.
What is Continuous Vulnerability Scanning?
The security industry recommends scanning for vulnerabilities frequently rather than quarterly or yearly. This helps address the blind spots that infrequent scanning would otherwise leave. Some vendors also offer passive scanning, which continuously monitors the network for new systems or applications, allowing the team to treat any vulnerabilities they introduce.
Types of Automated Vulnerability Scanning
1. External vs. Internal Vulnerability Scanning
Scanning can be performed from inside or outside the network perimeter of the system being evaluated.
Internal scanning is run from within the network and has access to internal parts of the system; how much access depends on the configuration and segmentation of the network. Threats are then classified based on the data gathered from the internal network.
External scanning determines the exposure of attacks to the applications which are easily accessible from the internet.
2. Authenticated vs. Non-Authenticated Scanning
A vulnerability assessment can be authenticated or non-authenticated based on the requirements. Authenticated scanning uses login credentials to get detailed and accurate information about the application and scan all the authenticated endpoints (along with the unauthenticated ones).
Non-authenticated automated vulnerability scanning finds the services that are open on the internet. Non-authenticated scanning is a high-level scan that excludes all the authenticated routes of the application.
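The difference can be sketched with two HTTP sessions, one anonymous and one logged in. The target URL, endpoints, and credentials below are hypothetical, and a real scanner would handle session management far more robustly.

```python
import requests

BASE = "https://app.example.com"  # hypothetical target

# Non-authenticated: probe only what an anonymous visitor can reach.
anon = requests.Session()
public_resp = anon.get(BASE + "/dashboard", timeout=5)
print("Anonymous access to /dashboard:", public_resp.status_code)  # often 302 or 401

# Authenticated: log in first, then the scanner can reach protected routes.
auth = requests.Session()
auth.post(BASE + "/login", data={"user": "scanner", "password": "secret"}, timeout=5)
private_resp = auth.get(BASE + "/dashboard", timeout=5)
print("Authenticated access to /dashboard:", private_resp.status_code)  # typically 200
```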
Factors to Consider While Choosing Automated Vulnerability Scanning Tool
Several factors can help us decide on the appropriate scanners. Some of the essential points to remember are:
- The tool should include a broad set of tests so that the effective cost of scanning is kept to a minimum.
- The tool should be easy for everyone to use. Vulnerability testing is a niche skill, and most people know little beyond the basics, so the tool should be usable by every team member.
- The tool should detect threats quickly so they can be resolved as early as possible and the team can focus on value-adding work.
- Ensure it can compile all the data as per the regulations and standards relevant to the organization.
- Most vulnerability scanners begin by crawling the complete web application. The right tool should be able to discover every page and entry point before it starts testing them.
Top 5 Open Source Automated Vulnerability Scanning Tools
Open-source automated vulnerability scanning tools are one of the best ways to reduce the cost of vulnerability scanning and improve efficiency. While there are several free and paid options available, discovering the best ones can be a challenge, which is why we have a curated list of the best free, open-source tools.
3. OWASP ZAP
Why Choose Astra for Automated Vulnerability Scanning?
Astra is the best solution for automated vulnerability scanning, as it comes with more than 4000 vulnerability scan rules. As the best vulnerability scanner, Astra can find and help you fix critical vulnerabilities in your web applications. Finding vulnerabilities in your website is the first step towards improved security.
Astra’s scanner is able to cover the most popular application and website vulnerabilities. This makes Astra’s scanner the best automated vulnerability scanner in the market.
Vulnerability scanning plays a vital role in an enterprise’s security. Make sure to pick the right tool for your company before it’s too late. Implemented correctly, the tool can assess modern security risks and give the security team all the essential information required to address them.
1. What is Automated Vulnerability Scanning?
Automated vulnerability scanning is a type of vulnerability scanning in which systems or applications are scanned using automated tools.
2. What is Vulnerability Scanning?
Vulnerability scanning is the practice of scanning a system for known vulnerabilities and producing a list of the issues found.
3. Is Astra’s Vulnerability Scanner a trusted solution?
The answer is YES. Astra’s vulnerability scanner is a trusted solution. The product was created by a team of IT experts and developers. The solution is used by a numb | <urn:uuid:87ccae56-318d-42be-bd0b-a61691b0e750> | CC-MAIN-2022-40 | https://www.getastra.com/blog/security-audit/automated-vulnerability-scanning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00322.warc.gz | en | 0.918555 | 1,504 | 2.640625 | 3 |
As the global population approaches 10 billion in the next 30 years, the solution to greater energy efficiency and seamless internet distribution lies in smart city IoT development. Not only will IoT technology reduce the strain of internet traffic in urban areas, but it will also allow for wider use of data transmission.
Omnipresent sensors are the key to interconnected cities, as they can be installed in traffic lights, parking meters, and even trash cans. These sensors feed data to central systems, where analysts monitor activity and formulate infrastructure improvements. Soon, smartphones everywhere will be able to communicate with a multitude of community services to improve convenience.
Setting Sustainable Development Goals
The adoption of IoT in a smart city will involve installing smart devices across internet networks in various public resources. Smart meters, for example, provide measurements of energy consumption, while smart thermostats allow consumers to control room temperatures from remote locations.
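As an illustration of how such a device might report data, here is a minimal sketch of a smart meter pushing readings to a central analytics endpoint. The ingestion URL, reading format, and reporting interval are all assumptions made for the example.

```python
import time
import requests  # assumes the requests library is installed

# Hypothetical ingestion endpoint run by the city's analytics platform.
INGEST_URL = "https://iot.example-city.org/api/meters/meter-42/readings"

# Report a consumption reading once a minute, the way a smart meter might
# push data to a central monitoring system.
while True:
    reading = {"kwh": 0.42, "timestamp": int(time.time())}
    requests.post(INGEST_URL, json=reading, timeout=5)
    time.sleep(60)
```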
Improving the quality of city life is possible through interconnected building blocks of smart technology. These components include Low Power Wide Area Networks (LPWANs), Bluetooth, and 5G. Solar panels and other renewable energy sources will further contribute to energy and cost savings as useful backup power when the traditional grid experiences higher demands.
Cities will reach sustainability goals much faster simply by installing smart sensors. The technology allows officials to better understand traffic flows, energy consumption and Wi-Fi usage. Driving this revolution is big data collection. According to IBM, 90 percent of today's global data has been generated just in the past few years.
Elements of Success Behind Smart Cities
The concept of making city infrastructures more digital and more intelligent began with smart meters and smart thermostats. The use of machine-to-machine (M2M) technology will play a major role in this development for utilities as well as manufacturers. Here are the four main components needed to build smart cities of the future:
- Nonstop wireless connectivity - A 24-hour "always on" internet connection must be achieved to provide constant services and data collection.
- Open data - All players within a complex system or supply chain create synergy by sharing data with each other. Combining this information with contextual data will be the key to streamlining operational efficiency.
- Reliable security - You must be able to trust a robust security system that alerts you and takes action when an intruder enters your network. Stronger encryption and ID management will ensure the right data goes to the right places.
- Flexible budget planning - Smart electric and water systems help consumers monitor their usage so they can make adjustments to consumption as a way of controlling utility bills.
Heading for a Smarter Future
Here are some of the various niches that smart technology developers have produced so far:
- Physical and Digital Protection - Utilities need to keep infrastructure and workers protected from attackers using modern digital solutions. Advanced security systems will continue to improve upon detecting intrusion in real-time and alerting security officials for both physical and online danger.
- Greater Control of Energy Efficiency - Master Energy Control (MEC) gives users remote web access to energy activity from any desktop computer or smartphone. This development gives energy users the ability to conduct fault analysis in real-time.
- Fleet Management of Transportation Resources - Operators of vehicle fleets will have more data at their fingertips to know exactly where vehicles are located and how much fuel they are consuming.
- Connected Buildings - This AI innovation interconnects equipment within a building for monitoring and control purposes. It's the key to predictive maintenance, which helps lower operating costs.
- E-Governance of Community Resources - Municipal and public services will become more connected with smart infrastructure. It will speed up services for communities based on real-time system analysis.
- Electronic Mobility - Free charging apps provided by local governments will give the public greater access and control for charging electric vehicles and other digital technology.
- Climate Changes and Monitoring - IoT devices will help collect environmental data so officials can track and evaluate air pollution for a certain region. This technology will even be applied to third world countries. | <urn:uuid:28d7022d-2357-4996-8f0d-6bf3eb7fbab4> | CC-MAIN-2022-40 | https://iotmktg.com/iot-significantly-improves-smart-city-infrastructure-security-and-sustainability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00522.warc.gz | en | 0.907966 | 819 | 3.234375 | 3 |
NASA is planning to send four satellite missions aimed at studying weather conditions, mineral dust, oceans and surface water on Earth in 2022, SpaceNews reported Wednesday.
The launch dates of Time-Resolved Observations of Precipitation Structure and Storm Intensity with a Constellation of SmallSats, Earth Surface Mineral Dust Source Investigation, Joint Polar Satellite System and Surface Water and Ocean Topography satellites were announced during an online press briefing for the annual American Geophysical Union conference.
Six TROPICS small satellites are set to launch on an Astra Space rocket in March and will provide storm data three hours faster than orbiting National Oceanic and Atmospheric Administration satellites.
EMIT will observe Earth’s mineral dust, an important part of the cloud formation and snow melting processes, from an external platform in the International Space Station.
JPSS-2, a joint NASA-NOAA satellite, is scheduled for a September liftoff onboard United Launch Alliance’s Atlas 5 rocket for observation missions using its infrared imaging radiometers, ozone mapping and profiler technologies, and microwave and infrared sounders.
SWOT, a collaborative effort between NASA and the French, Canadian and U.K. space agencies, will map surface elevation of water using two radar antennas and a mast following its launch on a SpaceX Falcon 9 rocket in November. | <urn:uuid:4b887660-c605-4aea-aad5-14261bc9dc59> | CC-MAIN-2022-40 | https://executivegov.com/2021/12/nasa-announces-four-earth-science-satellite-missions-set-for-2022-launch/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00522.warc.gz | en | 0.880917 | 269 | 3.0625 | 3 |
As the future of 5G gets closer and closer to becoming a reality, we’re standing at the precipice of entering the next technological revolution just over a decade after the last one. And while downloading HD movies in a fraction of the time it currently takes is an exciting key feature, the predicted societal transformation we’re about to embark on is expected to go well beyond faster Internet.
How many phone numbers do you know by heart? For some, it’s hard to imagine that we used to know the numbers of everyone we called regularly. For others, the idea of using phones for anything but calling people is shocking.
We Live in Our Phones
Today, the iPhone—and its Android counterparts—have revolutionized how we live, think and communicate. Not only do we use our smartphones to manage nearly every aspect of our lives, but they are even being used to save lives by detecting health concerns, such as an irregular heartbeat. It’s no wonder we’re glued to them as if they are an appendage to our bodies considering just how much we depend on them. There is growing awareness and concern that we are too dependent and addicted to our phones. We even use our phones to tell us how much we’re using them and place restrictions on how much time we spend in certain addicting apps, especially those with the time-sucking infinite scroll.
However, 5G has the potential to change this and unlatch us from our screens by integrating more natural interactions into technology. We’re already starting to see some of the potential benefits of a 5G world with more voice and gesture commands—like Alexa, Google Home, and Siri—built into our devices. But in order to successfully create a completely seamless and invisible network that wirelessly connects all devices and interactions with those devices, data will need to be transferred, stored, and processed at speeds that are not just fast but also consistent and highly reliable.
What Is 5G?
5G is the next evolution or “generation” of wireless mobile network after 4G LTE. It’s expected that 5G will become commercially available in 2020, although it will most likely take several years until we start to see the full spectrum of 5G services being offered nationwide.
5G’s Effects on Data Center Infrastructure
So how will 5G affect the infrastructure needed to make these technological advancements a reality? Connections are defined by their capacity, or bandwidth: how much data can travel, at nearly the speed of light, via fiber. With more than 75 billion devices predicted to be connected to the Internet by 2025, significantly more data will be created, which means we need significantly more capacity to transmit that data reliably and consistently. And when it comes to 5G applications, consistency and reliability are just as important as speed.
If you asked how long it would take me to deliver 50 gallons of water to you, I would need to think about how far I am from the water source, how wide the hose is, and how much consistent pressure is being applied to pump the water. That’s how we have to think about some of the ways current and future network infrastructure will evolve to support 5G advancements.
– Rick Moore, Cloud Services Director
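The same back-of-the-envelope arithmetic applies to data: delivery time depends on the size of the payload and the width of the pipe, while distance adds latency. The figures in this sketch are illustrative assumptions, not measured speeds.

```python
# Transfer time = payload size / bandwidth; distance adds propagation delay.
movie_bits = 5 * 8 * 10**9  # a 5 GB HD movie, expressed in bits

for label, bandwidth_bps in [("4G (~50 Mbps)", 50e6), ("5G (~1 Gbps)", 1e9)]:
    seconds = movie_bits / bandwidth_bps
    print(f"{label}: {seconds / 60:.1f} minutes to deliver the movie")

# Distance still matters: light in fibre travels roughly 200,000 km/s, so
# every kilometre adds about 5 microseconds of one-way latency.
for km in (10, 500):
    print(f"{km} km of fibre: ~{km / 200000 * 1e6:.0f} microseconds one-way latency")
```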
Life or Death: Importance of Speed and Consistency
Imagine a surgeon being able to perform surgery remotely via robotic arms. Network speed is necessary to ensure every move the surgeon makes is being performed in real-time. But perhaps even more important than speed in this scenario is the elimination of jitter, which could represent the thin line between life and death.
How to Eliminate ‘Jitter’
The same line of thinking can be applied to any number of scenarios, like autonomous vehicles or, say, a firefighter being able to fight fires remotely. Hitting the brakes too late or experiencing a delay in the delivery of a critical water supply has far more serious consequences than waiting for a movie to buffer on your smartphone. Therefore, certain data will need to live on a dedicated network away from congestion from other applications. While entertainment streaming companies can’t afford to compromise user experience, trust and loyalty with inconsistent and slow network speed, regulations will need to be implemented that place higher importance on applications that affect one’s health and/or safety.
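Jitter can be quantified as the variation in packet arrival times; two links with a similar average latency can behave very differently once that variation is taken into account. The latency samples below are invented purely for illustration.

```python
from statistics import mean, pstdev

# Sample one-way latencies in milliseconds for two hypothetical links.
steady_link = [10, 10, 11, 10, 10, 11, 10]   # consistent: low jitter
congested_link = [2, 22, 3, 25, 4, 9, 5]     # similar average, high jitter

for name, samples in [("steady", steady_link), ("congested", congested_link)]:
    print(f"{name}: mean {mean(samples):.1f} ms, jitter (std dev) {pstdev(samples):.1f} ms")
```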
5G at the Edge
Edge computing also plays an important role in 5G. Logically, the closer data is to where it is being processed, the faster transmitting and computing that data will be. We're already starting to see more emphasis on localized, distributed computing power to support mission-critical applications with lower latency and higher resiliency. However, there will always be a need for a centralized core as the beating heart to any IT environment. After all, even the most advanced micro data centers will not be capable of handling all tasks exclusively at the edge. And with the advancements we’ll see from 5G over the next few years, network architectures will become faster and more robust, effectively bringing the edge to the core and ultimately dampening the need for powerful edge computing.
The Future of 5G
While we know that 5G will become the invisible fabric that wirelessly connects all devices and applications one day, it’s hard to predict exactly what a 5G future will look like. But what is certain is that the amount of data that will be created is going to increase significantly over the coming years and therefore more capacity with reliable, consistent, and near-instantaneous speed will be needed to power a 5G future. | <urn:uuid:2da3f620-a851-41cc-a78b-0485b2d7bf03> | CC-MAIN-2022-40 | https://www.digitalrealty.com/blog/5g-how-we-got-here-and-its-impact-on-data-center-infrastructure | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00522.warc.gz | en | 0.950284 | 1,143 | 2.578125 | 3 |
Data security is a big deal for data centers and should also be taken seriously by companies large and small, as well as everyday people like you and me. Cybercriminals use many different techniques to attack data. And because data security has grown and improved over the years, cybercriminals are now starting to use data corruption as a new technique to destroy data. Microsoft’s new technology aims to prevent this type of attack by preventing data corruption.
If you’ve been working on a project and suddenly the file you’ve been working on isn’t accessible anymore, this could be due to data corruption. Data corruption is the damage or deterioration of data. It can be caused by many different things, including human error, hardware problems, and software errors. Corrupted data is unusable, unreadable, and inaccessible to the user.
Data corruption can happen because of an external virus stored or installed on the target computer or device. The virus overwrites the original data and then either modifies the code or deletes it altogether. Data corruption can also be the result of malfunctions in the software or hardware. For most people, this is probably why they are experiencing data corruption. Natural disasters like storms, hurricanes, and flooding can also cause data corruption.
There are a couple of different ways to tell if your data is corrupted. You’ve likely experienced at least one of these signs of data corruption. The first sign is if you are trying to find and open files or folders but they are relocated or missing. Another way you can tell if your data has been corrupted is if you receive an open file error or an invalid file format. Another sign of corrupted data is if your file was automatically renamed with nonsense characters. Many people have a format they like to use when naming the files on their computer. Many times it’s the same format to make things easier to locate. If you happen to notice one or more of your files renamed using gibberish characters your data has most likely been compromised and corrupted. Another more subtle sign of corrupted data is if your file permissions and attributes have been modified. This may not be as noticeable as the former, but it can be a sign of corrupted data.
If your computer crashes often and you can’t figure out why and there are no reports or error codes, your data may be corrupted. Another serious sign is if your computer is experiencing slow disk operations or if the disk seems to be overworking while you don’t have many things open. This can be a sign of data corruption or even worse damage to your system.
Corrupted files can be frustrating and it can happen when least expected. If your data is corrupted in the middle of a project or an assignment, it could be disastrous. Knowing how to fix corrupted files can save you in more ways than one.
The first step in the process of fixing corrupted files is to run a check disk on the hard drive. This particular tool will scan the hard drive and try to recover any bad sectors. If this works and the sectors are repaired, attempt to reopen the file. This could potentially fix the problem without any other steps.
If the previous step didn’t work, try the CHKDSK command. It performs the same check as the graphical tool, but from the command line, and running the same repair a different way can sometimes fix the corrupted file.
The next step is to run the SFC /scannow command. This command will attempt to find and repair corrupt Windows system files.
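If these checks need to be repeated regularly, they can be scripted. The sketch below simply invokes the built-in Windows tools mentioned above; it assumes it is run from an elevated (Administrator) prompt, and chkdsk on the system drive will typically schedule itself for the next reboot.

```python
import subprocess

def run(cmd):
    """Print and run a repair command without raising on non-zero exit codes."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=False)

run(["chkdsk", "C:", "/f", "/r"])   # fix errors, locate and recover bad sectors
run(["sfc", "/scannow"])            # verify and repair Windows system files
```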
Next, change the file format. This can be done in a couple of different ways. You can use a file converter application like File Converter, PDF Converter, Media Converter Pro, File Commander, etc. Or you can open the file using an application that converts to other file formats. Opening a Word document within a PDF application will usually prompt a file-conversion utility. Sometimes this alone can repair corrupt data.
If none of the “quick fixes” work, you can try to fix corrupted data by using file repair software. There are many different file repair software tools available both free and paid. Some of the best recovery software tools are EaseUS Data Recovery Wizard Pro, Stellar Data Recovery, CrashPlan, OnTrack EasyRecovery, Piriform Recuva, Wise Data Recovery, Paragon Backup and Recovery, MiniTool Power Data Recovery, Recover My Files Professional, and GetDataBack. Recovering data can be stressful and frustrating. But Windows has new technology that can prevent data corruption.
Windows’ latest technology, Kernel Data Protection, or KDP, protects specific parts of the Windows kernel and drivers, which in turn prevents data corruption attacks. KDP is a set of Application Programming Interfaces (APIs) that gives developers the ability to mark some kernel memory as read-only. This prevents attackers from being able to modify protected memory. KDP is also supported on Secured-core PCs. It works within the current environment and enhances the security already found within the PC.
Kernel Data Protection is implemented in two different ways: Static KDP and Dynamic KDP. Static KDP enables software running in kernel mode to statically protect a section of its own image from being tampered with by other code. Dynamic KDP allows kernel-mode software to allocate and release read-only memory from a secure pool. Memory returned from the pool can only be initialized once. By limiting how kernel memory can be modified, KDP protects the system from outside threats.
Data security is important to businesses large and small and should be important to all individuals as well. Most of the world uses computers for many different tasks, and data corruption is a stressful hassle that no one needs. Knowing how to backup your data, protect your files and restore them if something were to happen will save you a lot of time and stress. | <urn:uuid:c0066615-2561-4751-a5f5-a60b2361b1cc> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/preventing-data-corruption | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00722.warc.gz | en | 0.928859 | 1,198 | 3.328125 | 3 |
Most of the attention focused on information security today surrounds the public data breach. Almost daily we hear a new report about hundreds or thousands of records of personal information being improperly disclosed. In fact, it is the loss of private data that drives most of the regulatory environment designed to enforce security: GLBA and HIPAA at the national level, as well as dozens of state laws, including CA SB 1386 and MA 201 CMR 17. Certainly, any organization that is concerned with information security must be concerned about data privacy.
So how does information security enforce privacy? Essentially, the idea is to make sure that private customer information is protected from improper disclosure. In healthcare, for example, protecting individual electronic protected health information (ePHI) is the focus of both HIPAA and its extension, HITECH. Within PCI-DSS, the credit card number and individual cardholder information is the focus of the protective controls.
So how does this translate into an information security program? In practical terms, the enforcement of customer privacy requires two key ideas: First, that the organization identify and classify all of the private data it possesses, and second, that the security program implements the highest level of protection for this sensitive data. This link is created within the Information Classification Policy.
Information Classification – or more accurately – Information Sensitivity Classification is the process of dividing data into different categories based on the need for confidentiality.
Usually, this is done using three or four categories. A common three-category scheme divides the information like this: PUBLIC – information that is not sensitive to the organization and can be viewed by anyone; INTERNAL USE ONLY (Private) – information that should only be seen by people inside the organization; and CONFIDENTIAL – information whose access must be tightly restricted based on the concept of need to know, which should only be accessed by a limited group of individuals and would cause harm to the organization if released. The famous label “TOP SECRET” was often used by the government to indicate that information may involve national security.
The idea is simple in principle. Apply more protection to the most sensitive data. Apply less protection to the least sensitive data. In practice, the idea can be very difficult to implement. Information classification requires a well-crafted set of information security policies that enable the organization to identify and label the information, and then maintain these sensitivity labels as the data moves around the organization. With so much data in so many different places, this can be quite a challenge.
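As a minimal sketch of how a classification label can drive access control, consider the example below. The document names, labels, and clearance model are invented for illustration, and a real program would also enforce need-to-know within each level rather than relying on clearance alone.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Each record carries a sensitivity label; each user has a clearance level.
documents = {
    "marketing_brochure.pdf": Sensitivity.PUBLIC,
    "org_chart.xlsx": Sensitivity.INTERNAL,
    "patient_records.db": Sensitivity.CONFIDENTIAL,
}

def can_access(user_clearance, document):
    """Allow access only when clearance meets or exceeds the document's label."""
    return user_clearance >= documents[document]

print(can_access(Sensitivity.INTERNAL, "patient_records.db"))      # False
print(can_access(Sensitivity.CONFIDENTIAL, "patient_records.db"))  # True
```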
In working with many organizations developing information security policies, perhaps the biggest mistake we see is the failure to track and properly classify sensitive customer data. An organization may have a highly sophisticated information security program, using the latest technical wizardry such as firewalls with intrusion prevention. But if the organization cannot identify which data needs to be protected, all of the technical controls may be meaningless.
Security Policy Tip: So the take-away is this: If you go through the trouble of developing and implementing information security policies, make sure you remember the important link between privacy and information security – the information classification.
For organizations that need to develop a comprehensive Information Classification Policy, Information Security Policies Made Easy contains examples of two, three, four and five-category information classification schemes, as well as 1500 other sample information security policies. | <urn:uuid:6b72dcf7-7486-4f6c-a637-929f3635fb31> | CC-MAIN-2022-40 | https://informationshield.com/2012/09/11/information-classification-the-link-between-security-and-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00722.warc.gz | en | 0.918167 | 675 | 3.046875 | 3 |
A WAN (or wide area network) is a network of computers within a very large area, such as an entire state or country.
Web 2.0
Web 2.0 is a loosely defined intersection of website characteristics, including increased user participation, user-centered design, peer-to-peer collaboration, dynamic user-driven content and a rich online experience. As the Web 2.0 movement continues to grow, organizations are rapidly hiring talented programmers to increase their presence on the top Web 2.0 sites - and incorporate Web 2.0 elements into existing sites - to gain a competitive advantage in this fast growing and highly active population. Common Web 2.0 site structures and examples include social networks (Facebook), blogs (Mashable), wikis (Wikipedia), video sharing sites (YouTube) and social bookmarking hubs (Digg).
web browsers
A web browser is a program used to view, download, upload, surf or otherwise access documents (or pages) on the World Wide Web. Examples of popular web browsers include Internet Explorer, Mozilla Firefox, Opera, Safari and Google Chrome.
Short for "Web-based Seminar," a webinar is a presentation, lecture, workshop or seminar transmitted over the Internet. Modern webinar software has very modest connection requirements so almost anybody with a computer and an internet connection can join and participate.
WLAN is short for Wireless Local Area Network. Also known as WiFi, this is the linking of computers without wires. It is most commonly used for internet access, either in public hotspots or in private home and office networks. Connectivity is limited to between 10 and 200 meters depending on physical infrastructure.
Work breakdown structure
A work breakdown structure (WBS), in project management and systems engineering, is a deliverable-oriented decomposition of a project into smaller components. It defines and groups a project's individual work elements in a way that helps organize and define the total work scope of the project.
More people than ever are using the London Underground; it handles around 4.8 million passenger journeys on a daily basis and 1.34 billion passengers annually. With the rise in public mobility and the need for people to keep in touch at all times, Transport for London (TfL) is under increasing pressure to provide mobile phone services on the London Underground for passengers, as well as for emergency services.
Indeed, TfL is reportedly in talks to provide mobile coverage for public safety communications. With underground locations such as the London tube network, which are heavily used by the public, robust wireless communication networks are critical for the provision of continuous communications during emergencies.
It is crucial that there is underground coverage for the TETRA two-way radio network for police, ambulance and fire personnel on the London Underground. However, its tunnel environment presents some significant challenges for radio distribution systems, which is holding back mobile operators from providing passenger and public safety communications underground.
Challenging radio environment
With 45% of the London Underground network in tunnels, this can present environmental and radio frequency (RF) challenges. Indeed, it is notorious for having a non-RF friendly environment. By virtue of the fact that much of the network is underground, receiving mobile or public safety signals can be nigh-on impossible.
The outdoor mobile network is simply not designed to penetrate subterranean spaces – and the ground literally acts as a radio wave shield, as radio signals from above simply cannot penetrate it.
Combine that with heavy construction materials, such as concrete and brick, and you have one of the most inhospitable mobile environments imaginable. Additional challenges are introduced by both changes in track elevation, and curves that radio waves are unable to bend around.
There are also limitations on space in the tunnels and on the platforms. With 4.8 million passengers using the London Underground on a daily basis, it can get so overcrowded that the ability to gain access to the tunnels is extremely limited due to logistical and security issues – and finding available space and power for equipment can also be a real challenge.
This makes it difficult for mobile operators to install and maintain radio infrastructure. As such, they need to install equipment that will not only fit into small spaces, but that require little or no maintenance, as gaining access later can be an issue.
The tunnels in the London Underground are also known to collect dust and rubbish on a constant basis. While there are people who are employed to clean the tracks and tunnels to ensure safety on the London Underground and prevent dangers such as a fire, the dust and rubbish often clogs radiating cable, electronic equipment, fans and circuitry.
This can significantly affect the performance of RF distribution and requires constant maintenance in areas that are often hard to reach. As mentioned earlier, space limitations can cause logistical and security issues. Mobile operators should look at technology that can be sealed against dirt and dust, and therefore eliminate the need for constant maintenance.
Another challenge that mobile operators will be faced with when it comes to providing mobile services on the London Underground is that they must install technology that is able to deliver multiple services across numerous licensed RF bands, both mobile and public safety.
Indeed, mobile operators or transit authorities may require a shared radio distribution infrastructure in order to achieve efficiencies in both installation time and space requirements, as well as keeping costs low.
As such, mobile operators must use technology that is able to support multiple frequencies and that is able to be changed or upgraded without significant additional investment – not least because with the ever-changing wireless landscape and future rollout of next-generation services, such as 5G, this will be a critical requirement to meet future demands.
An underground-proof solution
Mobile operators require an underground-proof solution that is able to overcome the environmental and RF challenges that London’s tube network poses. By using small cell architecture, such as wideband distributed antenna system (DAS) technology, network engineers will be able to design a mobile network that is fit for the tunnel environment, while enhancing passenger and public safety communications underground.
Wideband DAS technology will extend wireless signal from a base station via discreet and remote antennas. Remote antennas can be deployed throughout the London Underground, which means that mobile operators will be able to provide the coverage and capacity required on the tube network.
In addition, wideband DAS solutions deliver uniform signal strength through the entire tube network because it can effectively distribute RF signals from a central radio source such as a base station through a fibre network to remote amplifiers directly adjacent to passive antennas. As such, since the signal is amplified end-to-end, the signal strength is the same at each antenna.
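A rough free-space path-loss calculation shows why distributing and re-amplifying the signal matters. The transmit powers, frequency, and distances below are illustrative assumptions, and real tunnel losses are considerably higher than free space.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 30  # hypothetical base station output

# A single source feeding a long tunnel: the signal fades with distance.
for d in (0.1, 0.5, 1.0):
    print(f"{d:>4} km from source: ~{tx_power_dbm - fspl_db(d, 1800):.0f} dBm")

# With a DAS, fibre carries the signal to remote units that each re-amplify
# it, so every antenna radiates at roughly the same level.
remote_output_dbm = 20
print(f"At every DAS remote antenna: ~{remote_output_dbm} dBm")
```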
DAS antennas also fit well in environments where there is limited space, not least because the remote antenna units can be as small as a standard smoke detector. The antennas are also sealed off against environmental contaminants such as dirt and dust, which eliminates the cost and time involved in maintaining the equipment.
What’s more, wideband DAS technology can support multiple mobile operators and protocols, which means that they can provide access to major mobile services, such as 2G, 3G, 4G and eventually 5G, as well as public safety frequencies such as TETRA, within tunnels and on platforms.
The engineers for the London Underground would only need to deploy one set of wideband DAS electronics, which would be able to support multiple mobile operators, frequencies and services. As a result, deployment costs will be minimised and there will be minimal disruption to London Underground users.
Another benefit of using wideband DAS technology is that it can also be linked to local or remote base stations via fibre. This allows mobile operators to use existing fibre runs and RF resources to deliver the RF to the London Underground tunnel. This means that the base station can be located miles away from the DAS antenna and base station resources can be centralised to reduce CAPEX and OPEX.
Future mobile services on the London Underground
While the demand for mobile services on the London Underground is already high, with the rollout of next-generation services such as 5G and the number of passengers using the London Underground rising on a daily basis, the pressure only looks set to increase.
While passengers need and want mobile services on the London Underground, it’s also a matter of keeping passengers safe in emergencies. Public safety communications are a key driver behind TfL’s quest to provide mobile coverage and capacity on the London Underground.
Small cell architecture, such as a wideband DAS, will help mobile operators to overcome many of the challenges associated with tunnel environments that could have been holding TfL back from providing mobile services on London’s tube network. Ultimately, TfL will not only be able to provide a ubiquitous mobile service for its passengers, but for emergency service departments too.