As we write this article, there are over one million cybersecurity workers in the US. But what’s more interesting is that the number of job listings for cybersecurity-related positions has kept growing over the past few months. In fact, according to CyberSeek, there are 600,000 openings in our country.
In other words, the talent gap keeps on getting bigger. And while there might be different reasons behind this, the lack of diversity is one of the most important ones.
A report published by The Aspen Institute estimates that only 4% of cybersecurity workers self-identify as Hispanic, 9% as Black, and 24% as women. What makes things worse, and suggests that diversity might be the key to closing the gap, is that African-Americans represent 13% of the US population and Hispanics 19%.
So why do such small percentages of these groups work in a field that offers so many benefits? On one hand, the cybersecurity workforce is made up mainly of white males; on the other, there is a lack of funding options that would let underserved minorities get the right training.
Cybersecurity is aching for a more inclusive workforce, but what can we do?
The first thing is for cybersecurity training programs to acknowledge the role they play in this situation and create different ways to encourage a more diverse population to enroll with them and create high-wage career opportunities for underrepresented communities.
At CyberWarrior Foundation we have a partnership with the Department of Homeland Security and CISA to help candidates from the Northeast and Southeast regions of the US with a partial tuition grant that will allow them to kickstart their career.
Mentorship programs should be included in academies, where qualified professionals guide students and explain what it’s like to work in cybersecurity. This allows them to better grasp the abilities they need to develop, the differences between each career route, and which is the best fit for them. Such a program might evolve into an apprenticeship option, allowing each student to put their knowledge into practice while increasing their chances of landing full-time employment.
But not all the responsibility rests with academies and training institutions. Organizations of all sizes have a part to play as well.
They need to find ways to attract and, most importantly, retain diverse talent. And by this we don’t mean adding one or two Hispanic or African-American employees to their staff, but rather creating an environment that makes them feel welcome: respecting each culture’s traditions and holidays, hosting diversity training, and revising policies to make sure they offer equal growth opportunities for everyone.
That’s why we believe it’s very important that cybersecurity leaders start focusing more on having an inclusive workforce when hiring talent for their organizations. In fact, they should choose their employees based on their ability to do a job, not on the certifications they’ve earned, and offer them the training needed so they can keep on specializing.
Far beyond helping their business be recognized as one that promotes diversity, an inclusive workforce will give their team more perspectives on an issue, different approaches when crafting a solution, and a wider range of experiences and ideas that will help them stay ahead of cyber threats.
What other ideas do you have? Please comment and share so we can all help make this a world that understands that our differences only make us stronger.
How do Companies Report on ESG?
Companies tend to turn to widely respected standards and frameworks in the environmental, social, and governance disciplines and report on their alignment with those best practices.
These ESG frameworks provide guidance on what to report on and how to report it. It is also helpful for consumers and investors to reference these public reports when evaluating companies against each other. These standards also help stakeholders aggregate and audit ESG reports.
Some examples of ESG reporting standards include the CDP, CDSB, GRI, SBTi, TCFD, PRI, and the WEF Stakeholder Capitalism Metrics.
Companies may make their ESG reports accessible via their website, and sometimes their reports can be accessed on the standard or framework’s website.
Making your ESG efforts and strategy publicly available widens your reach to stakeholders who can become aware of your ESG practices and ESG information. It allows more people to view your company reports and financial statements and see exactly how well your business model and ESG strategy are performing.
Who is responsible for ESG reporting?
Typically, boards lead the way on ESG. But it is up to the entire organization to gather the information needed to produce the reports while acting in accordance with the ESG policies set forth for them. Many companies have entire departments of leaders dedicated to carrying out their ESG program. Job titles include VP of ESG Sustainability, ESG Controls Manager, ESG Director, or Sustainability Manager.
What are the pillars of ESG?
The pillars of ESG refer to the implications of each letter in the acronym. “ESG” stands for Environmental, Social, and Corporate Governance. These are the features used to measure the sustainability and ethical impact of an investment. Socially responsible investors use these three sets of criteria when screening potential new investments.
The environmental pillar refers to how a company’s behavior is impacting the environment. How heavily do they rely on fossil fuels? What are their pollution levels? What is their carbon footprint? Did they recently launch a program to restore the planet, through planting trees or cleaning up waste? This pillar is especially important for companies in the utility, chemical, or energy sectors.
The social pillar considers a company’s behavior regarding issues like employee equality, diversity, health and wellness, product safety, training and development, animal testing, and other social and human rights issues.
The governance pillar speaks to the way in which a company operates internally. Good governance means fostering transparency and upholding a strong value system, and avoiding things like fraud and corruption.
What should be included in an ESG report?
A report that discloses ESG activities typically includes both qualitative and quantitative metrics about the company’s programs.
ESG reporting may include metrics like key performance indicators (KPIs), and other metrics including:
- Their percentage of environmental impact reduction
- The number of lives whose health and wellbeing they improved
- What percentage of their products were made via sustainable sourcing
- How much they reduced waste in their production process
- Employee satisfaction survey results regarding fairness in the workplace
- Vendor spending with diverse suppliers, including minority, veteran, disabled, and women-owned businesses.
The metrics used are outlined by the ESG framework a company reports against. These frameworks provide a shared language so that companies can report on ESG standards in a way that can be verified and understood by all stakeholders.
Related Post: What Is ESG?
LogicManager’s ESG Reporting Software
ESG is only successful when it’s incorporated into every area of the business. This is one of the reasons why Enterprise Risk Management is crucial to implement before gathering ESG data to report on. Not only does ERM promote good governance, but it also helps significantly with streamlining the reporting process.
At LogicManager, we talk about using ERM to build what’s called an “ESG bow tie.” This method provides a seamless process for gathering, centralizing and producing strong ESG reports. Learn how to build an ESG bow tie at your organization by reading our blog post here.
LogicManager also offers out-of-the-box solution packages to satisfy all your ESG reporting needs:
- Save time and resources using our taxonomy-driven AI tools. Simply identify a relevant law, standard or framework and LogicManager will automatically identify ways in which you’re already in compliance.
- Automate the ESG disclosure management and sustainability reporting processes to ensure that your organization is on track with standards like the CDP, CDSB, GRI, SBTi, TCFD, PRI, and WEF Stakeholder Capitalism Metrics.
- Our Policy Portal makes it easy to view all of the policies you have in one place that are supporting your alignment with any ESG framework.
- Leverage our Risk Maturity Model (RMM) to measure how strong your enterprise risk management program is and, by extension, how strong your ESG program is.
- Gain access to risk assessment templates, tasks to conduct those risk assessments, and mechanisms to formalize your policies and track them systematically.
- Our testing and metrics collection capabilities automatically prompt for the necessary monitoring needed on an ongoing basis (and then automatically generate the evidence that you are doing what you say you are doing).
- Collect control evidence and documentation as you work within LogicManager’s taxonomy, building an audit trail along the way.
In Summary: How To Report On ESG
To conclude, knowing how to report on ESG can seem daunting at first. However, with the right guidance and tools it can be done effectively and is a key part of modern corporate reporting.
Maintaining your company’s transparency in the world of the see-through economy is imperative. Creating a reporting framework for your organization and its ESG performance can ensure that your business strategy is sustainable in all areas. Proven sustainability reporting frameworks can also ensure that your company is able to satisfy stakeholder demands when it comes to publicly reporting on sustainable performance and financial performance.
It’s not enough to just say you’re doing something; reporting on your efforts is a critical step in reaping the benefits of any due diligence. This applies especially to your Environmental, Social and Corporate Governance (ESG) program: you may be working in accordance with ESG best practices, standards and frameworks, but you must report on your ESG activities to prove to third-party stakeholders like investors and customers that you’re doing what you’re saying you’re doing.
Curious to see how LogicManager can help you start building an ESG reporting program today? Schedule a free, customized demo at the link below!
Cryptocurrencies may have become better known for their wild fluctuations and unwanted links to scams in recent times, but it’s worth remembering the potential that digital coins hold as a force for good. The world has over 1.7 billion people who have no access to any form of banking infrastructure, despite many of them having access to technology like mobile phones. Writing for Forbes, Robert Anzalone suggested that, “stablecoin can become a path for the unbanked to create a stable store of monetary value and exchange. If access to digital technology increases across all nations, the implication for the unbanked and poor may well be widespread crypto adoption over local and less resilient financial systems.”
Because stablecoins are tethered to tangible assets like gold or the US Dollar, they’re designed to maintain a consistent value that’s ideal for everyday usage. The idea of a currency promoting financial inclusion is one that’s been met with skepticism from financial commentators worldwide. But could the notion of bringing blockchain-based financial solutions to unbanked citizens across the world carry merit?
The Financial Times noted a key flaw in the potential effect stablecoins can have on the unbanked: over one-third of those listed as unbanked are in their position because they don’t have enough money to open a bank account. If 34 percent of the unbanked are unable to open an account, that immediately wipes out a significant chunk of the 1.7 billion-strong market. However, that still leaves a hefty number of potential adopters who could benefit from some form of digital banking. The topic of financial inclusion is still a hot one worldwide. In fact, the UN’s 17 sustainable development goals reference financial inclusion as an enabler.
After much furor surrounding its initial announcement, it’s looking increasingly likely that the currency won’t be capable of delivering on its promise to provide financial solutions to unbanked populations, but this doesn’t take blockchain out of the reckoning. Could the technology take a significant step in bridging the gap between finance and the unbanked? Let’s take a deeper look at how blockchain technology can spark a revolution in global banking:
The significance of the unbanked
There’s a widely acknowledged desire for change when it comes to looking at the needs of citizens in deprived areas across the world. Influential tech leaders like Bill Gates have readily pointed to the problem and highlighted the benefits of working on a solution. On the subject of the unbanked, Gates said, “including the poorest in the financial system increases the value of their assets, transforming the underlying economics of financial services through digital currency — helping those who live in poverty directly.”
The significance of bringing banking to the unbanked comes in the form of a vital step in taking populations out of poverty. Financial facilities and bank accounts are dependent on users carrying valid forms of identification as well as access to an infrastructure that allows them to store their money. The key problem is that this doesn’t appear to be an issue that will naturally resolve itself without some form of direct action being taken.
One of the most significant issues that affect many of those without access to banking stems from a lack of formal identification. Without valid IDs, there are very few opportunities for the unbanked to build a credit history that can pave the way for any form of loans. Without lending and the wealth that it can potentially create through production, there are very few options for unbanked citizens to break out of their debt cycles and, in turn, poverty.
Could blockchain provide a credible solution?
Banks are traditionally considered essential when it comes to bringing inclusive finance to large portions of the planet that are still unbanked today. The most deprived areas include 80 percent of sub-Saharan Africa, 67 percent of adults in the Middle East, 65 percent of Latin America, and over 870 million individuals across East and Southeastern Asia. Add to the mix the 60 million adults in Western Europe and North America who remain unbanked and the full scale of the problem becomes a little clearer.
Supplying a financial infrastructure for the unbanked is filled with risk for many banks. The associated costs with such endeavors are high, and there are no guarantees that there will be a return on investments. This is where technology like blockchain has the potential to step in where banks are fearful of taking a gamble. It’s logical to see blockchain as the current best hope for bringing financial inclusivity to parts of the world where the notion of a financial infrastructure previously seemed like something of a pipedream.
There’s little doubting blockchain’s financial pedigree. Although there are questions surrounding the arrival of new digital currencies, the technology was an instrumental player in Bitcoin’s seismic rise to prominence in 2017, and its high level of accessibility means that there’s strong potential to bring banking to deprived citizens on a large scale. The high level of security within blockchain networks means that the technology can set up financial transactions both quickly and efficiently, with no need for intermediaries to become involved in cross-border payments and transfers.
The immutability and privacy that blockchain can provide ensure customers’ safety while making or receiving payments. As long as a customer can access a device that has the power to access eWallets, there are minimal costs attached to operating the technology. There are also plenty of wallets out there that are built to be effective on blockchains like Ethereum.
Most importantly, blockchain allows users to carry their very own tangible digital identity with them whenever they transfer money on both a domestic and international scale. The technology fundamentally excels in places that were originally considered as stumbling blocks for established banks.
Blockchain driving change
The beauty of blockchain is that it can seemingly make things that seemed impossible a generation ago become entirely logical. Bitcoin could have never taken the world by storm were it not for its blockchain framework, and now its underlying technology is being explored across a number of industries.
Statistics show a large number of communities across the world still suffer from a lack of financial infrastructure. The arrival of algorithmic stablecoins that can be utilized entirely through secure accounts and transferred internationally could be the only actionable solution we have for progress. The algorithm allows users to support the stablecoin through a recapitalization mechanism for accounts that lack collateral, while receiving a set commission in return. Fortunately, it’s a good one.
While there may not yet be a cryptocurrency that finally helps deprived populations to manage their finance, blockchain is more than capable of creating an environment that can finally bring inclusivity to banking.
From time to time, we invite industry thought leaders, academic experts and partners, to share their opinions and insights on current trends in blockchain to the Blockchain Pulse blog. While the opinions in these blog posts are their own, and do not necessarily reflect the views of IBM, this blog strives to welcome all points of view to the conversation.
TensorFlow. Sounds like something from a Guardians of the Galaxy movie, maybe something that Rocket Raccoon might use. And Caffe? What’s that? Hint: This article is about AI tools, not coffee.
By Dave Shirey
In last month’s exciting episode, we looked at what a model was and what different types of models we could find. This month, we’ll look at two tools that can be used to set up models for you that can then be used in an AI project: TensorFlow and Caffe.
Let’s start with Caffe (Convolutional Architecture for Fast Feature Embedding). Unfortunately, while everyone agrees that Caffe is simple to use (relatively speaking, of course, since nothing in AI is really simple), the different flavors of Caffe can be a bit confusing at first.
The original Caffe, called just Caffe, is a free and open-source piece of software written in C++. Like most AI products, Caffe does not do everything. Specifically, it’s primarily oriented around image-recognition projects.
As the Caffe website so eloquently puts it, Caffe is built for expressiveness, speed, and modularity. Expressiveness? Simply put, an expressive architecture allows you to develop or configure new models through configuration rather than by hard-coding things in the software itself.
Because it is open source and has had over 1000 downloads, a number of users have enhanced and expanded it and have uploaded their modifications to the mother ship. The result is a product that remains near the cutting edge of AI image projects.
In addition, there is an active Caffe user group, which is very important as you move forward. It’s helpful to have a group of people using the same software who you can communicate with.
Caffe models can process 60 million images a day, averaging 1 millisecond per image for inference and 4 milliseconds per image for learning. This makes Caffe one of the fastest image-recognition frameworks currently available.
Caffe2 is the next generation of Caffe. It’s meant not to replace the original but to expand it. One of the main benefits of Caffe2 is its support for mobile in the AI process. It also provides “operators,” which are like the “layers” in Caffe but are more flexible in terms of how you can use them. Layers/operators contain the basic logic required to calculate the output that will be generated, based on the various input features. While Caffe has some of that, there’s more in Caffe2, and you also have the ability to create your own custom operators.
And, if that’s not enough, the Caffe2 website indicates that the product is being rolled into PyTorch, a Python library.
These products provide a large number of pretrained models (found on GitHub) that you can bring in and use if the shoe fits.
If there is a negative related to Caffe, it’s that it’s a little light on documentation, not surprising for an open-source product. But it does have a very large and enthusiastic set of users, and they have written a plethora of articles and blog posts designed to help you with whatever you’re struggling with.
TensorFlow is the big gorilla of this genre, having been developed by the Google Brain team for use within Google before being released to the open-source world.
It’s another product that lets you define a model, infuse it with a particular statistical process, and then start your data training process.
The home for TensorFlow, tensorflow.org, is a storehouse of information, not just about the product but about Machine Learning in general. That’s a good place to start as you begin to learn more about AI.
The real question is what types of models TensorFlow supports. That is, we saw above that Caffe specializes in image-recognition modeling. TensorFlow also does that, as well as text and voice recognition. And, of course, it allows you to do your own thing and develop a model that is unique to your situation.
What is writing your own model like? Well, to be honest, it’s a lot of code, but the TensorFlow site gives you plenty of help in terms of how to do it, although there’s no doubt that it’s not for the faint-hearted (see the code below; the first example is for beginners, the second for experts).
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to the 0-1 range
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
For more-advanced users:
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
loss = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

# One manual training step; images and labels hold a batch of training data
with tf.GradientTape() as tape:
    logits = model(images)
    loss_value = loss(logits, labels)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
TensorFlow has a ton of documentation available on its site. There’s no shortage of info, and its user community has enhanced this documentation with many posts and articles.
So Which Do You Choose?
Of course, you must know that I’m not going to give a recommendation. Never get yourself involved in an unnecessary lawsuit, I say. Plus, it’s not an easy decision.
First, it depends on your model needs. What business problem are you trying to solve and what type of data will you be using in your training?
Second, you may want to consider the size of your endeavor. Caffe seems to be the acknowledged leader in terms of speed, although to get the maximum output you should be using GPU rather than CPU. If you don’t know the difference (as I did not), GPU is a type of chip that was originally designed for gaming and all that stuff. CPU is the more standard type of chip. Needless to say, GPU can beat the pants off of CPU, and it’s becoming increasingly important in the AI world, where speed is important. You can either build or buy GPU-based machines or use AWS to create a server for your GPU needs.
Third, it depends somewhat on your technical level. Developing models in TensorFlow is definitely much more code-oriented than Caffe, which uses an abstraction layer to let you set up your models in something that looks very much like CSS code. For example:
mean_file: "data/train_mean.binaryproto"   # location of the training data mean
source: "data/train_lmdb"                  # location of the training samples
batch_size: 128                            # how many samples are grouped into one mini-batch
backend: LMDB
top: "data"
# ... etc.
In the end, you’ll have to look carefully at what you’re trying to do, the level of technical resources you have available, and maybe your astrological sign. I mean, it can’t hurt, right?
Eavesdropping in the cybersecurity world refers to the interception of communication between two parties by a malicious third party (a hacker). Eavesdropping is similar to a sniffing attack, in which software allows a hacker to steal usernames and passwords simply by observing network traffic. This often happens on public Wi-Fi networks, where it is relatively easy to spy on weak or unencrypted traffic, or when attackers put up a fake Wi-Fi network for unsuspecting users to connect to.
In all three situations, hackers are eavesdropping on your communications, seeking to steal login credentials and other sensitive information from a user’s devices. Eavesdropping also allows hackers to listen in on VoIP communications. It is often carried out by deploying “stalkerware” onto an unsuspecting user’s device, frequently by someone the victim knows (a family member, for example).
Source: ECPI University
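To make the risk concrete, here is a minimal sketch of what a sniffing attack looks like in practice. It assumes a Linux machine whose wireless interface is named wlan0 and a network where the attacker can already see the victim’s traffic (an open Wi-Fi hotspot, for example); the -A flag tells tcpdump to print each packet’s payload as ASCII:

# Print the payload of every plain-HTTP packet crossing the wireless interface.
# On an open hotspot, logins submitted over http:// pages are readable as-is.
sudo tcpdump -i wlan0 -A 'tcp port 80'

This is exactly why HTTPS and a trustworthy VPN matter on public networks: run the same capture against TLS-encrypted traffic and the payloads show up only as unreadable bytes.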
Additional Reading: How Hackers Use An Ordinary Light Bulb To Spy On Conversations 80 Feet Away
Wireshark is a powerful tool, but it has its limitations. Unless you have professional networking equipment, it’s hard to analyze traffic that doesn’t involve your computer.
Sometimes the easiest solution is to use tcpdump to capture traffic on the remote server, and then run Wireshark to take a look at it.
What are Wireshark and tcpdump?
While Wireshark does a great job of capturing every network packet that flows past it, in some cases you’ll need to analyze a session from a remote server. Unless you have special networking equipment, this can be difficult. Sometimes it’s easier to capture traffic on the remote server, then analyze it on your desktop.
tcpdump is a command-line packet analyzer. It’s not as easy to use as Wireshark, but it’s just as capable of capturing traffic. Since the tcpdump command runs in a terminal mode, it’s possible to launch it through an SSH session. With the proper command-line options, you can export a tcpdump session that’s compatible with Wireshark. You can check out our tcpdump cheat sheet to learn more about installing, packet capturing, logical operations, protocols, and more.
See also: Wireshark Alternatives
Before you begin
To follow the directions in this guide, you’ll need the following:
- A remote computer with an SSH server and tcpdump installed
- Root access
- Services that generate network traffic, like Apache or node.js, running on the remote computer
- A local computer with an SSH client and Wireshark installed
The goal is to use tcpdump commands on the remote computer, through SSH, to capture network traffic. Then the captured traffic can be copied to the local computer for analysis with Wireshark.
This is useful when you don’t have physical access to the remote machine or are running it ‘headless,’ i.e. without a keyboard and monitor.
Capturing packets with tcpdump remotely through SSH
In order to capture traffic with the tcpdump command, you’ll need to connect to the remote computer through SSH. You will also need root access; otherwise, tcpdump won’t be able to capture traffic and you’ll see an error stating You don’t have permission to capture on that device.
Once you’ve connected, run the following command to start capturing traffic with tcpdump:
sudo tcpdump -s 0 -i eth0 -w tcpdump.pcap
The command-line options I’ve used to capture this session will be explained below. In short, the above command will capture all traffic on the Ethernet device and write it to a file named tcpdump.pcap in a format compatible with Wireshark.
Once you’ve finished capturing traffic, end the tcpdump session with Ctrl+C. You’ll see a short readout displaying some information about the capture session.
Before you can copy the traffic from your remote computer to the local one for analysis with Wireshark, you’ll have to change the permissions. By default, tcpdump sessions captured by the root user can’t be copied. Use this command:
sudo chmod 644 tcpdump.pcap
That will allow you to copy the file to your local machine using scp, as outlined in the next step.
Copying a tcpdump session for analysis
Once you’ve finished a capture session with the tcpdump command, you’re left with a problem. How do you copy it to the machine running Wireshark for analysis? There are a lot of ways, but I think the easiest is with scp. Since you’ve already captured network packets on a headless machine using SSH, everything you need to use scp is already installed and running.
Windows users will have to download pscp, then copy the file to C:\Windows\System32. Most Mac and Linux users already have everything they need.
In Mac or Linux, open a terminal window and run the following command to copy the session capture file:
scp user@remote-host:/path/to/file ./
Or in Windows, open PowerShell and run this command:
pscp.exe user@remote-host:/path/to/file .\
Substitute your own username, host, and file path where appropriate. You’ll be prompted to enter your password.
Check to see that the file copied as expected, and you’re ready to analyze the tcpdump session with Wireshark.
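As an aside, if you’d rather skip the copy step entirely, you can stream a live remote capture straight into a local Wireshark over SSH. This is a sketch that assumes a Mac or Linux local machine, wireshark on your PATH, and passwordless sudo for tcpdump on the remote host; the 'not port 22' filter keeps the SSH session’s own packets from flooding the capture:

# tcpdump's "-w -" writes raw capture data to stdout; Wireshark's "-i -" reads
# a capture from stdin, and "-k" tells it to start capturing immediately.
ssh user@remote-host "sudo tcpdump -s 0 -i eth0 -w - 'not port 22'" | wireshark -k -i -

The tradeoff is that nothing is saved on the remote machine, so you lose the ability to re-analyze the raw capture later unless you also save it locally from Wireshark.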
Analyzing a captured tcpdump session with Wireshark
Analysis works the same as it does with any traditional Wireshark capture; the only thing you need to know is how to import the file.
Start Wireshark, then import the tcpdump captured session using File -> Open and browse for your file. You can also double-click the tcpdump capture file to open it in Wireshark, as long as it has the *.pcap file extension. If you used the -w option when you ran the tcpdump command, the file will load normally and display the traffic.
In my case, I’m running an Apache server on the remote host, and I’m interested in looking at HTTP data. I set the appropriate Wireshark view filter, and I can browse the captured frames as usual.
As a test, I’ve embedded an element in the HTML code that’s not displayed on the page. I should be able to locate it in the data stream and view it with Wireshark.
As you can see, Wireshark is able to analyze each frame and display the data just fine. The element I’ve hidden shows up in the example above. The capture process is a bit more involved when you use the tcpdump command, but everything in Wireshark works as usual.
Using command-line options for tcpdump
Most of the time, when you launch tcpdump you’ll want some control over how you capture the packets and where you store the session. You can control things like that using command-line options. These are some of the most useful command-line options for tcpdump.
The -w command-line option enables Wireshark-compatible capture output. It takes a single argument: the output filename. Capture logs saved using this option won’t be human-readable outside of Wireshark, since they’re stored in binary rather than ASCII.
The -C command-line option enables you to set a maximum file size, in units of 1,000,000 bytes. This option only works alongside -w. For example, the command tcpdump -C 1 -w capture.pcap specifies a maximum file size of roughly 1MB, output to the file capture.pcap.
If the session generates a larger amount of output, tcpdump will rotate to new files to store it. So a 3MB capture would generate capture.pcap, capture.pcap1, and capture.pcap2, each roughly 1MB in size.
The -s command-line option sets a maximum capture length (the “snap length”) for each packet in bytes, truncating packets beyond that maximum. The command tcpdump -s 0 sets an unlimited length to ensure that each packet’s whole contents are captured.
The -i command-line option specifies which network device you’d like tcpdump to monitor. If no interface is specified, it defaults to the lowest numbered interface that is currently running.
The command-line option tcpdump --list-interfaces will print a list of all interfaces that are available for tcpdump to attach to. Note that this doesn’t start a capture session, but it will give you a list of interface names to use with the -i option above.
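For example, either of the following equivalent commands (-D is the documented short form) prints the numbered interface list:

sudo tcpdump --list-interfaces
sudo tcpdump -D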
The -c command-line option tells tcpdump to exit the session after capturing a specified number of packets.
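For example, this sketch (the interface name and filename are placeholders) writes exactly 100 packets to disk and then exits on its own, which is handy for unattended captures:

sudo tcpdump -i eth0 -c 100 -w first100.pcap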
The -n command-line option instructs the tcpdump command not to resolve IP addresses to hostnames. This is useful when troubleshooting websites behind a load-balancing server, and in a handful of other cases when using a hostname would give ambiguous results.
tcpdump -v | -vv | -vvv
The three command-line options, -v, -vv, and -vvv allow you to increase the verbosity of your capture session. -v will save TTL values for each packet, along with ToS information. -vv will output TTL and ToS along with additional information in NFS packets. And -vvv will log everything the first two options do, along with additional information from telnet sessions.
The -F command-line option instructs the tcpdump command to use capture filters from the specified file. More information about writing a capture file can be found in the next section.
Using capture filters for tcpdump
Capture filters let you narrow down the data that tcpdump stores in a session. They’re a helpful way to make data analysis a little easier and keep capture files small. Here are some of the most useful capture filters for tcpdump.
The host filter specifies that only traffic to and from the target host should be captured. It takes an IP address or hostname as an argument.
The net filter will tell your computer to only capture traffic on a given subnet, and takes an IP address as an argument. For example, 192.168.1.0/24 specifies that traffic to or from all hosts on the subnet will be captured. Note that a subnet mask in slash notation is required.
Similar to host, the dst capture filter specifies that only traffic with a destination of the given host will be captured. It can also be used with net.
Like above, but the src filter only captures traffic originating from the specified host or IP address. It can also be used with net.
The port filter tells tcpdump to capture traffic to and from a given port number. For instance, port 443 will capture TLS traffic.
Similar to the port filter, portrange establishes a range of ports on which traffic is captured. To use the portrange filter, specify the starting port and ending port separated by a dash. For example, portrange 21-23.
The gateway filter specifies that your computer should only capture traffic that used a given hostname as a gateway. The hostname must be found in /etc/hosts.
The broadcast filter specifies that tcpdump should only capture traffic that is being broadcast to all hosts on a subnet.
This filter tells tcpdump to capture only multicast traffic on the host machine’s subnet.
Filters can be chained together using the and, or, or not operators. For instance, to capture all web traffic on a given host you could use the filter port 80 or port 443. Or you could capture all traffic on a given subnet except broadcast packets by using the filter net 192.168.1.0/24 and not broadcast.
It’s very common to use filter operators in practice since they provide an additional layer of granularity to your captures. You can capture exactly the traffic you need, without a lot of extra network chatter.
Complex expressions with multiple operators
Even more complex expressions can be built by surrounding multiple operations in single quotes and parentheses. For example, you can monitor all mail traffic, including SMTP, IMAP, IMAP over TLS, POP3, and POP3 over TLS, across multiple hosts and subnets, using a command like this:
tcpdump '(host 10.0.0.1 and net 192.168.1.0/24) and ((port 25 or port 110 or port 143 or port 993 or port 995))'
Complex expressions with multiple operators can be very useful, but they are typically saved to a filter file for reuse since a single typo will cause the capture to fail. Frequently, they’ll need to be prepared ahead of time and debugged.
See also: Variable Length Subnet Mask Tutorial
Using filter files for tcpdump
The filters above can be run on the command line when tcpdump is launched, but often it’s useful to build a filter file. A filter file makes it easier to reproduce filter settings between captures since it is reusable. Here are the steps to writing and using a filter file.
Write the filter file
Filter files use exactly the same notation as the command line. They don’t require any special characters or magic numbers at the top of the file.
For instance, here’s a filter file I’ve written that will capture all web traffic outbound from my Apache server to a given host. In this case, the Chromebook I’m writing on.
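The file itself was originally shown as a screenshot. Based on the capture output described below, it would contain a single expression along these lines, with 192.168.1.181 standing in for the target host:

(src port 80 or src port 443) and dst host 192.168.1.181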
As long as the file is readable by the user running the tcpdump command, the program will attempt to parse everything in the filter file and use it as a valid filter. When a filter file is used along with command-line filtering, all command-line filtering will be ignored.
Instruct tcpdump to use any given filter file using the -F command-line option, followed by the path to the file. In the example above, the filter file is located in the same directory that I’m executing tcpdump in.
In the raw packet capture output from the filtered session, you can see that the only packets being logged originate on port 80 or 443 and are outbound to the host at 192.168.1.181.
Once you see your filter is working as intended, capture a session to be analyzed with Wireshark using a command similar to this:
sudo tcpdump -i eth0 -s 0 -w wireshark.pcap -F filter-file
Wireshark and tcpdump
Unless you’re running a managed switch with an administration port, sooner or later you’ll need to capture traffic on a remote server. When Wireshark alone won’t do the job, Wireshark with tcpdump is a popular choice. The two work really well together and, with a few simple command-line options, tcpdump will export capture sessions that can easily be analyzed in Wireshark.
Wireshark tcpdump FAQs
How do I tcpdump an IP address?
The host filter will reduce the tcpdump output to the traffic for just one IP address. It should be followed by the IP address, for example: tcpdump host 192.168.0.10. To reduce the output further, you can specify only traffic originating at that IP address with the src option, or only traffic going to that IP address with the dst option, e.g.: tcpdump src host 192.168.0.10.
How do I use tcpdump on a specific port?
You can select all traffic for a specific port with a filter on the tcpdump command. This method will also give you a specific protocol’s traffic, as long as you know the port used for that protocol. The filter is port, and it is possible to restrict it to just TCP or UDP traffic with the tcp and udp qualifiers. Examples: tcpdump port 53 or tcpdump udp port 53.
What is remote packet capture protocol?
Remote Packet Capture Protocol is implemented by a program, RPCAPD.EXE, which runs as a daemon and is part of WinPcap. The Remote Packet Capture Protocol daemon acts as an agent on one computer, allowing packets to be captured from it according to commands issued on another computer.
Why doesn't tcpdump capture passwords like ftp and ssh unlike Wireshark?
It is possible to capture FTP passwords with tcpdump. Run tcpdump -nn -v port ftp or ftp-data and search the output for USER to get the username and PASS to get the password. Even Wireshark won’t decrypt an SSH session, including the login credentials, without already knowing the key used to encrypt the connection. Neither tcpdump nor Wireshark can get the username or password for an SFTP or FTPS session.
Warning: Using unlicensed wireless networks could be so damaging to the safety of your data that industry experts are suggesting the need for a Surgeon General-like warning label.
Regulators, private industry and law enforcement widely agree that data networks—wireless and wireline alike—will forever be vulnerable to attack. The difference with wireless technology, some in industry say, is that users have no idea just how vulnerable it is.
“[Wireless application protocol] protection is virtually useless, and I don’t think consumers are aware of it,” said Jacob Christfort, chief technology officer and vice president of Product Development at Oracle Corp. Speaking at a forum on unlicensed wireless networks Tuesday, Christfort said he could support regulation requiring network providers to display a cautionary label, such as those shown on alcohol and tobacco products.
As hotspots and other unlicensed wireless technologies proliferate, there are growing threats to data protection from information brokers, industrial spies, and “bad apple” divorce attorneys seeking data on their clients’ exes, according to Ted Phillips, senior associate at Booz Allen Hamilton Inc. However, wireless LAN users, often equating wireless access with traditional telephony, can have a false sense of security about the privacy of their information.
“The money part of the equation is really starting to pick up,” Phillips said at the forum sponsored by the National Telecommunications and Information Administration in Washington. “We have to face the fact that there is a growing and increasingly severe threat that’s going to be out there. It actually scares me, and I’ve been at this for a while.”
Apart from small regulatory fixes such as the warning label idea, industry representatives generally oppose more intrusive federal initiatives to secure wireless technologies. One reason is that the possibilities for anonymity in wireless networks make it easy for sophisticated offenders to dodge enforcement. Criminals can buy inexpensive computers and wireless access cards and then quickly dispose of them, making it impossible to track crimes.
WLANs remain a noted concern in the federal government, and the Administration has toyed with ways of promoting better security. The National Strategy to Secure Cyberspace holds agencies responsible for implementing risk management processes and security controls. The government is particularly concerned about the problem of anonymity in attacks on national security, said Paul Nicholas, director of Critical Infrastructure Protection at the Homeland Security Council.
“Attacks happen and you don’t know where they’re coming from,” Nicholas said, adding that increased funding for security technology research and increased information-sharing would help.
For private enterprises, many in industry maintain that sufficient methods exist to secure WLANs as needed.
“802.11 security is not a big concern because we can deal with it if we need to for certain applications,” Oracle’s Christfort said. “802.11b is like walking on an unlighted street at night. You can do it, but you better know where you’re going.”
Displaying a rising profile in wireless telecommunication, the IT industry is pressing the government to make more spectrum available for unlicensed use. However, the most useful bands of the electromagnetic frequencies are already being used, and the incumbents—from cellphone operators to the Department of Defense—are lobbying hard to keep what they have and get more.
“The fundamental problem we have here today is that spectrum is artificially scarce. It is mandated scarcity,” said Peter Pitsch, government affairs officer at Intel Corp.
The term “cloud” is often misunderstood. Some think just because you may be outsourcing an Internet-accessible software application (email, FTP, etc.) to someone else, as opposed to installing licensed software on your own in-house server, that you are using “The Cloud”. Application outsourcing is called Software-as-a-Service (SaaS), but SaaS and “cloud” are not the same thing – even though many SaaS companies use a cloud as a delivery platform.
What is a Cloud?
In a nutshell, cloud computing is about one thing -- server virtualization. Virtualization software (a.k.a. "hypervisors") separates each physical machine ("host") into one or more virtual machines (VMs). Each VM consists of a virtual CPU, virtual memory, virtual storage, and a "guest" operating system like Linux or Windows. To summarize,
A cloud server does the same thing as a traditional server, but it is virtual – not physical.
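If you have access to a Linux host running a hypervisor such as KVM with libvirt, you can see this host/VM relationship directly from the command line. A quick sketch, with the VM name as a placeholder:

virsh list --all      # show every virtual machine defined on this physical host
virsh start demo-vm   # boot the virtual machine named demo-vm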
What is a Public Cloud?
You could say that Amazon Web Services (AWS) invented "The Cloud" in 2006 when they first released their Elastic Compute Cloud. AWS was originally targeted at testing and developing applications, as you could "spin up" a virtual server, assign it as many resources as needed (CPU, memory, and storage), and shut it down and stop paying for it when you were done testing. As you can imagine, not having to invest capital in physical hardware is a great cost savings in development and testing environments.
So, AWS wasn’t originally targeted at production environments, but that has all changed as cloud adoption expanded. Microsoft followed AWS with its own public cloud called Azure. And, there are other cloud infrastructures such as OpenStack, originally a joint effort of Rackspace and NASA. AWS and others are each considered public clouds.
What is a Private Cloud?
A private cloud typically consists of either of the following scenarios:
- Two or more physical hosts dedicated to one company, with hypervisor technology so that company can create and manage its own virtual machines. This could be located in-house or could be co-located at a data center.
- A private pool of resources on a multi-tenant cloud. A private cloud within a multi-tenant cloud requires dedicated firewalls to create a Virtual Private Data Center (VPDC). Dedicated firewalls will handle both Internet networking and custom private networking for all VMs created from the resource pool. A VPDC is therefore completely isolated from the other cloud tenants, as well as being protected from the outside.
Technology exists to enable people. Whether they use it for personal or professional reasons, people are the common link driving technology adoption.
On the other hand, while technology is often predictable, people are not. It’s easy to question why humans are the weakest link in cybersecurity, but the answer — like people — is more complex.
Why are people the weakest link in the people, process, technology chain?
Cybersecurity professionals focus on three primary categories that help them protect data: people, processes, and technology. Taking a look at each of these provides insight into why it’s easy to consider people the weakest link.
Technology, in itself, never makes mistakes. People program technology, then it does what they tell it to do. It can be verified and provide repeatable outputs, and even artificial intelligence (AI) is a series of algorithms programmed by people.
While technology may be flawed, as evidenced by security vulnerabilities in software, the fixes to those flaws have been made much easier due to objective solutions such as security patch updates.
Similar to technology, processes do not “act” on their own. They are a set of steps that people follow so that they can repeatedly achieve a consistent outcome.
When a process breaks, people can review it, find the problem, and create an immediate fix by updating it. Similar to technology again, fixing a broken process has a clear solution.
[Related Reading: How to Create a Cybersecurity Program]
Unlike technology and processes, people are complex. They think for themselves and make their own decisions. Sometimes, these are good decisions and other times they are bad ones.
People are error prone because no clear solution exists. People will make the same mistake multiple times because they are unpredictable. At the core, the inability to find a way to prevent people from making the same mistake more than once makes them the weakest link in the chain.
What are some cybersecurity risks caused by humans?
Human error risk can lead to several different types of cybersecurity concerns.
As companies adopt more cloud-based technologies, people need to create more passwords. Unfortunately, people may not always remember everything, and they dislike having to request a password reset because it decreases their productivity.
These two problems often lead to people using easy to remember passwords. Fundamentally, this means that they often default to:
- Using the same password in multiple locations
- Using passwords that include a loved one’s name or season
- Using a series of numbers like 12345
These tricks keep them from forgetting the password, but it also makes the passwords an easy target for cybercriminals.
For the same reason that people hate making new passwords, they also tend to avoid multi-factor authentication (MFA). Any additional step, whether clicking on an authentication application or waiting for a code, creates a barrier to adoption. People want quick access to their resources so that they can get their jobs done.
According to the 2021 Data Breach Investigations Report, misconfigurations accounted for the top error variety in the Miscellaneous Errors breach category. System administrators and developers can make mistakes that lead to data breaches. For example, forgetting to change a default password on a server increases the likelihood that cybercriminals will be able to gain access. Copying and pasting a configuration from one serverless function to another is another way misconfiguration and human error create cybersecurity risk.
What types of attacks target the human factor?
Threat actors know that human error leaves organizations at risk, and they regularly try to exploit it.
Social engineering attacks
When cybercriminals engage in social engineering attacks, they specifically focus on exploiting vulnerabilities in human nature. For example, most phishing campaigns are successful because they prey on emotions. They often invoke urgency so that people will not stop to think. In their haste, they take action against the company’s and their own best interests.
In a dictionary attack, cybercriminals try to break into a password protected device or resource by systematically trying various known weak passwords. Since lists of commonly used passwords can be easily found on the internet, these attacks are often successful.
Malware and ransomware attacks
Often, malware and ransomware attacks are successful because users fail to apply the security updates that patch common vulnerabilities and exposures (CVEs). Patches can be time-consuming, and people often wait to install them. Cybercriminals use this knowledge to look for vulnerabilities in devices, then they use them as part of their ransomware and malware attacks.
[Related Reading: How to Perform a Cybersecurity Risk Assessment]
Why are humans the weakest link despite security training and resources?
People are fallible, and they make mistakes. Training and resources may not always be adequate to give people the skills necessary. They provide awareness, but that is not the same as education.
Most cybersecurity awareness training programs fail to incorporate best educational practices. Adults learn best when the program:
- Applies to their real lives
- Offers hands-on capabilities
- Gives them a way to build on previously learned information
Most security awareness programs offer a series of videos and multiple choice tests that do not give adult learners what they need to truly learn.
Many companies fail to supplement cybersecurity awareness training with the tools that help people employ best practices. Companies may purchase a multi-factor authentication solution. However, that only solves part of the problem. Although providing password management technology is becoming more prevalent, too few organizations offer this to their employees. Meanwhile, they add more applications that require more passwords. This creates a vicious cycle driving poor password hygiene.
Remote work adds heightened challenges for companies. Remote work means that people are connecting from risky home networks. Employees may not know how to configure their networks securely; many may not even know how to change the router’s default password. Even virtual private networks (VPNs) are not entirely secure, as evidenced by a surge in VPN attacks during the first quarter of 2021. Ultimately, people may not have the technical knowledge or experience to protect data.
Managed Detection and Response (MDR) to Overcome Human Error Risk
While human error risk may lead to data breaches, companies are still responsible for mitigating risk. MDR mitigates the likelihood of an attack by monitoring for new threats, vulnerabilities, and misconfigurations. When devices, systems, and networks are compromised, MDR provides rapid detection, notification, and response guidance.
As organizations work to reduce the impact that human error risk can have on their environments, MDR offers a way to enhance their security posture. With full coverage across cloud, network, system, application, and endpoint, Alert Logic’s MDR solution gives companies the ability to leverage threat analytics by collecting, analyzing, and enriching data for advanced threat detection and response.
Thanks to the economic downturn and its lingering effects, the word “debt” predominantly carries a negative connotation. But you may have heard, in the midst of balancing your budget or planning for retirement, that not all financing is created equal. In fact, some debts, experts suggest, may be good for you. But what exactly does this mean?
What Is Good Debt?
Simply put, good debt “is any debt that offers a return on the investment,” Rod Griffin, director of public education for credit bureau Experian, said. For instance, a mortgage is often considered good debt since “in normal times, [the home associated with it] has some gain in equity,” he said. Other examples of good debt can include student loans (with the return being the higher salary and improved job prospects you could command with an education) or even low-interest lines of credit you take on in order to invest in stocks or retirement funds.
What Is Bad Debt?
Bad debt, on the other hand, is debt that’s going to land you in financial trouble, Griffin said. It’s any credit that you’re taking out or utilizing without a clear-cut plan of how to pay it back. Using a high-interest credit card to cover a shopping spree or taking out a payday loan to make extra holiday purchases are examples of bad debt.
Does My Credit Score Reward Good Debt?
Technically, no. Most credit scoring models do reward you for having a diverse portfolio of accounts, and revolving debts (like credit cards) are often weighted more heavily than installment loans (like auto financing) because you determine how much credit you are going to use and pay off each month, Griffin said. But determining whether debt is good or bad is more of a financial management concept — not a credit scoring standard.
“Scores don’t distinguish between what we define as good debt and bad debt,” Griffin said. Instead, they look at how well you’re managing all your credit lines. Making on-time payments and keeping balances low is important whether or not there’s a long-term investment to recoup on the financing. You can see how your current debts are affecting your credit scores by viewing your free credit report summary each month on Credit.com.
How Can I Avoid Taking on Bad Debt?
First off, remember that you don’t have to take every credit offer that comes your way, Griffin said. Instead, ask yourself before formally applying if you will be able to pay off the debt, when you will be able to pay off the debt and what you will be giving up to take on the new liability.
“Look at your overall financial picture,” Griffin said. “If you don’t have a plan to pay something off, it’s probably a bad debt.”
This article originally appeared on Credit.com and was written by Jeannine Skowronski.
by Marc Pegulu
Climate change has become a major focus for organizations. Businesses and investors are looking for solutions to reduce emission rates and curb energy usage while maintaining maximum productivity. The Internet of Things (IoT) is critical to facilitating this priority shift. According to a report from the World Economic Forum, 84 percent of IoT deployments are currently addressing or have the potential to address the Sustainable Development Goals (SDGs) defined by the United Nations.
This is where efficiency begins. As businesses move toward smarter, more sustainable asset management, supply chain risk management and resiliency remain critically important. To optimize the time, energy and resources put into tracking goods, IoT technologies must fill visibility gaps associated with asset management, while also meeting economic and environmental goals that protect our planet.
Introducing new efficiencies
IoT technologies provide data, analytics and insights to improve process efficiency, increase productivity and reduce waste. This has long been the vision for IoT, but its potential has previously been limited due to challenges in building, implementing and scaling these solutions.
This has since changed. IoT sensor networks are easy to deploy, allowing organizations to gather data across applications and monitor the consumption of resources and the location of assets or people. Advances in integrating geolocation and network connectivity are eliminating coverage gaps during transport and logistics, with more accuracy than ever before.
This is not only applicable to shipping goods, though the ability to understand product location across the supply chain at any moment – from warehouse, to cargo ships, to semi-trucks or airplanes – cannot be overstated. Today, farmers can leverage IoT to measure weather conditions that influence crop production, track livestock health and maximize yield and resources. IoT sensors can also enable smart, sustainable food waste management solutions that reduce the amount of waste generated. This is where the low-power, long-range potential of IoT is realized.
Data and analytics are key for smoother operations
Of course, there are many IoT solutions from which to choose. If organizations cannot generate and capture the right data, then they cannot efficiently operate fiscally responsible and environmentally conscious systems. Access to data enables organizations to efficiently generate accurate insights, while saving money and improving overall organizational performance.
This can be a challenging proposition as assets vary in size, complexity, location and proximity to each other. Traditional, high bandwidth, high-energy consumption tools (which are great for video capture or the transmission of large amounts of data) are not well-equipped to address the complex challenges of asset tracking.
As a result, low-power wide-area networks (LPWANs) are capturing data and providing analysis, offering organizations long-range links and an extended battery lifetime. Not only does this minimize the cost of rolling out a monitoring network, but it avoids the need for regular battery replacement cycles, which can lead to extended gaps in coverage. LPWANs also allow sensors to be placed in hard-to-reach locations to provide the necessary data – rather than being placed depending on the available power or connectivity – and feed that data back into software systems for real-time monitoring.
Organizations that know they have accurate and timely data are better positioned to make informed asset management decisions. IoT networks and battery-backed sensors can expand system potential by feeding data into machine learning databases to identify patterns, highlighting potential equipment failure and facilitating accurate resource allocation.
Meeting environmental goals
Organizations have long sought asset management and visibility to boost productivity. According to a Gartner survey, by 2025, sustainability programs will help improve resource efficiencies and supply chain resiliency. IoT solutions have the potential to catalyze social and environmental initiatives that will reduce the environmental impact of organizations. In addition, according to a recent study by IoT Analytics, 75 percent of IoT solutions support the United Nations’ Sustainable Development Goals, and long-range, low-power solutions are a part of that playbook. By investing in the right IoT solutions, organizations can measure their impact on the environment in real time and make changes to eliminate inefficiencies.
For example, installing IoT-enabled sensors in warehouses and offices to monitor inventory levels and material location reduces asset loss and inventory waste. This also applies to energy usage or waste due to air conditioning, water leaks, miles driven to transport goods, and other micro-elements of operations that add up and contribute to poor environmental management.
These solutions directly impact profitability by enabling organizations to produce results that also drive significant benefits in the form of reduced energy and raw material consumption and reduced waste and pollution, minimizing the environmental damage caused by economic activity.
Looking toward the future
IoT can serve as a critical building block of smart environmental monitoring by maximizing efficiencies and operations. Sophisticated IoT solutions are the ideal solution to balance both fiscal and environmental goals.
Organizations know that pressure to focus on sustainability as a core pillar of their operations is increasing. Don’t get caught flat-footed; instead, start asking how the IoT technology can support your needs while also creating a more connected and sustainable planet.
Marc Pegulu is the vice president of IoT, Wireless and Sensing Products, at Semtech.
What is data anonymization? And what are the best techniques to use? In this What That Means video, Camille Morhardt talks with Kristin Ulrich, Senior Solutions Specialist at SAP for HANA architecture, about data anonymization.
- Data anonymization means information is irreversibly altered and can no longer be identified directly or indirectly (a minimal sketch of one common technique follows this list)
- The type of parameters or methodology applied to a data set will depend on each individual data set
- There is always a level of risk involved with anonymization; you cannot guarantee complete anonymity
- If you are working with companies in different countries, make sure you understand any applicable data protection laws and regulations; in some cases they are continually evolving
- And more!
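One common technique behind these ideas is k-anonymity: generalize quasi-identifiers until every record is indistinguishable from at least k-1 others. Below is a minimal, generic sketch using pandas with made-up records and column names; it illustrates the concept, not any particular vendor’s implementation:

```python
import pandas as pd

# Made-up records; age and zip code are quasi-identifiers
df = pd.DataFrame({
    "age": [23, 27, 34, 36, 41, 45],
    "zip": ["02139", "02134", "02215", "02218", "02110", "02113"],
    "diagnosis": ["A", "B", "A", "C", "B", "C"],
})

# Generalize: 10-year age bands and 3-digit zip prefixes
df["age"] = (df["age"] // 10) * 10
df["zip"] = df["zip"].str[:3] + "**"

# k is the size of the smallest group sharing the same quasi-identifiers
k = df.groupby(["age", "zip"]).size().min()
print(df)
print(f"k = {k}")  # every record now blends in with at least k-1 others
```

As the interview notes, residual risk remains: coarser generalization raises k but destroys utility, and no parameter choice guarantees complete anonymity.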
Data centers rely on power for nearly everything. Losing utility power, malfunctioning hardware or an end-of-life replacement can result in thousands or even millions of dollars lost per hour. It’s why backup power is crucial in overall data center design.
What are a data center’s redundant power supplies? These infrastructure units include uninterruptible power supply (UPS) systems, standby generators and power distribution components like switchboards and lines. Because each part is vital in delivering essential power to the data center, having one or multiple duplicates of equipment can maximize availability.
Types of Data Center Power Redundancy
To understand the approaches to data center power redundancy, we need to define N. This variable can designate either the total power needs of the data center (measured in kW) or the number of non-redundant components in the power supply and distribution chain. N is the baseline capacity and means a single point of failure exists in the system.
The most common data center redundancy levels are:
- N+1 redundancy: This configuration indicates an available extra unit can support a component that momentarily needs to be serviced or replaced. N+2, by extension, is less common but provides an additional spare unit.
- 2N redundancy: 2N design keeps an entirely separate system capable of handling all of the data center’s power needs. These will often be two mirrored, identical systems but may also comprise different equipment makes and models. This configuration results in a 100% required capacity and 100% stranded capacity.
- 3N/2 redundancy: The three-makes-two approach offers more reliability than 2N and only strands about 50% capacity, so its operating costs are closer to N+1. To use this configuration effectively, the system must carefully manage the load from the power supplies in use.
- 2(N+1) redundancy: As the highest level obtainable, this configuration uses a completely separate system at the ready to switch in for backup power failure, plus provides additional components should any part in either of the two parallel systems fail.
Note that these same formulas can serve as data center cooling system configurations as well.
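As a rough illustration of how these configurations trade cost for resilience, the sketch below computes installed versus stranded capacity for a hypothetical 1,000 kW load split across four equal units. The numbers are simplified; real designs also account for unit sizing, load management, and concurrent maintainability:

```python
LOAD_KW = 1000              # hypothetical total load (N)
UNITS = 4                   # equal units that together supply N
UNIT_KW = LOAD_KW / UNITS   # capacity of each unit

configs = {
    "N":      UNITS * UNIT_KW,            # no redundancy: single point of failure
    "N+1":    (UNITS + 1) * UNIT_KW,      # one spare unit
    "2N":     2 * LOAD_KW,                # fully mirrored system
    "3N/2":   1.5 * LOAD_KW,              # three systems sized to make two
    "2(N+1)": 2 * (UNITS + 1) * UNIT_KW,  # two mirrored N+1 systems
}

for name, installed in configs.items():
    stranded = installed - LOAD_KW
    print(f"{name:>7}: installed {installed:6.0f} kW, "
          f"stranded {stranded:5.0f} kW ({stranded / LOAD_KW:.0%} of N)")
```

Running this reproduces the tradeoffs described above: 2N strands 100% of N, while 3N/2 strands only 50%.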
How to Choose a Data Center Power Redundancy Configuration
You may assume that having more backup systems is better by default. Instead, the redundancy level you need depends on multiple factors such as:
- Your budget
- Business goals
- IT environment
- Reliability needs
In critical data center applications like health care facilities where backup power is necessary, redundancy can help mitigate wider-scale disasters from affecting operations. Certain data center certification levels, like the upper classifications of ANSI/TIA-942, require some form of redundancy. According to Uptime Institute, a vast majority of data center managers are seeking higher levels of redundancy than their baseline levels.
The larger your business and data center operations, the more likely you are to invest in a power infrastructure with more resiliency.
Contact DataSpan for Power Redundancy Management
Data center power redundancy is one of many areas of expertise we’ve gained throughout our over 45-year history by serving facilities of all sizes. We can help analyze your data center’s power needs and determine how to configure redundant power supplies for maximum uptime or cost-efficiency.
The Bias of Analysis
Mark Twain is often, though debatably, credited with saying something like, “There are three kinds of lies: lies, damned lies and analytics.”
We take for granted that analytics gives us useful, actionable insights. What we often don’t realize is how our own biases and those of others influence the answers we’re given by even the most sophisticated software and systems. Sometimes, we may be manipulated dishonestly, but, more commonly, subtle and unconscious biases creep into our analytics. The motivations behind biased analytics are manifold. The impartial results we expect from science may be influenced by 1) subtle choices in how the data is presented, 2) inconsistent or non-representative data, 3) how AI systems are trained, 4) the ignorance or incompetence of researchers or others trying to tell the story, or 5) the analysis itself.
The Presentation is Biased
Some of the lies are easier to spot than others. When you know what to look for you may more easily detect potentially misleading graphs and charts.
There are at least five ways to misleadingly display data: 1) show a limited data set, 2) show unrelated correlations, 3) show data inaccurately, 4) show data unconventionally, or 5) show data over-simplified.
Show a limited data set
Limiting the data, or hand selecting a non-random section of the data can often tell a story that is not consistent with the big picture. Bad sampling, or cherry picking, is when the analyst uses a non-representative sample to represent a larger group.
In March 2020, Georgia’s Department of Public Health published this chart as part of its daily status report. It actually raises more questions than it answers.
One of the things that is missing is context. For example, it would be helpful to know what percentage of the population each age group represents. Another issue with the simple-looking pie chart is the uneven age groups: 0-17 spans 18 years, 18-59 spans 42, and 60+ is open-ended but covers roughly 40 years. The conclusion, given this chart alone, is that the majority of cases are in the 18-59 age group. The 60+ age group looks to be less severely affected by COVID cases. But this isn’t the whole story.
For comparison, a different data set on the CDC website charts COVID cases by age group alongside the percentage of the US population in each age range.
This is better. We have more context. We can see that age groups 18-29, 30-39, and 40-49 all have a higher percentage of cases than their share of the population. There are still some uneven age groupings. Why is 16-17 a separate age group? Still, this is not the whole story, but pundits have written columns, made predictions and mandates on less than this. Obviously, with COVID, there are many variables in addition to age that affect being counted as a positive case: vaccination status, availability of tests, number of times tested, comorbidities, and many others. The number of cases, by itself, provides an incomplete picture. Most experts also look at the number of deaths, deaths per 100,000 population, or case-fatality rates to see how COVID affects each age group.
Show unrelated correlations
Obviously, there is a strong correlation between US spending on science, space, and technology and the number of suicides by hanging, strangulation, and suffocation. The correlation is 99.79%, nearly a perfect match.
Who, though, would make the case that these are somehow related, or one causes the other? There are other less extreme examples, but no less spurious. There is a similar strong correlation between Letters in Winning Word of Scripps National Spelling Bee and Number of People Killed by Venomous Spiders. Coincidence? You decide.
Another way to chart this data that may be less misleading would be to include zero on both of the Y-axes.
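The underlying trap is easy to reproduce: any two series that both trend in the same direction will show a near-perfect Pearson correlation, related or not. A minimal sketch with made-up figures:

```python
import numpy as np

# Made-up figures: two unrelated quantities that both rise over a decade
science_spending = np.array([18, 19, 21, 22, 23, 25, 27, 28, 29, 30])  # $B
suicides = np.array([5400, 5600, 5800, 6000, 6300,
                     6600, 7000, 7200, 7400, 7600])

r = np.corrcoef(science_spending, suicides)[0, 1]
print(f"Pearson r = {r:.4f}")  # close to 1.0: correlation, not causation
```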
Show data inaccurately
From How to Display Data Badly, the US State of Georgia presented the Top 5 Counties with the Greatest Number of Confirmed COVID-19 Cases.
Looks legit, right? There is clearly a downward trend of confirmed COVID-19 cases. Can you read the X-axis? The X-axis represents time. Typically, dates will increase from left to right. Here, we see a little time travel on the X-axis:
Wait? What? The X-axis is not sorted chronologically. So, as nice as the trend might look, we can’t draw any conclusions. If the dates are ordered, the bars for the number of cases shows more of a sawtooth pattern than any kind of a trend.
The easy fix here is to sort the dates the way a calendar does.
Show data unconventionally
We’re all busy. Our brains have taught us to make quick judgements based on assumptions which have been consistent in our world. For example, every graph I have ever seen shows the x- and y-axes meeting at zero, or at their lowest values. Looking at this chart briefly, what conclusions can you draw about the effect of Florida’s “Stand Your Ground” law? I’m ashamed to admit it, but this graph fooled me at first. Your eye is conveniently drawn to the text and arrow in the middle of the graphic. Down is up in this graph. It may not be a lie – the data is all right there. But, I have to think that it’s meant to deceive. If you haven’t seen it yet, zero on the y-axis is at the top. So, as the line trends down, that means more deaths. This chart shows that the number of murders using firearms increased after 2005, indicated by the trend going down.
Show the data over-simplified
One example of over-simplification of the data can be seen when analysts take advantage of Simpson’s Paradox. This is a phenomenon that occurs when aggregated data appears to demonstrate a different conclusion than when it is separated into subsets. This trap is easy to fall into when looking at high-level aggregated percentages. One of the clearest illustrations of Simpson’s Paradox at work is related to batting averages.
Here we see that Derek Jeter has a higher overall batting average than David Justice across the 1995 and 1996 seasons. The paradox comes in when we realize that Justice bested Jeter in batting average in both of those years. If you look carefully, it makes sense: Jeter had roughly 4x more at-bats (the denominator) in 1996, at an average only .007 lower than Justice’s, whereas Justice had roughly 10x more at-bats in 1995, at an average only .003 higher than Jeter’s.
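Working through the commonly cited hit and at-bat totals makes the reversal easy to verify (treat the figures as approximate):

```python
# (hits, at_bats) per season -- commonly cited, approximate figures
jeter   = {"1995": (12, 48),   "1996": (183, 582)}
justice = {"1995": (104, 411), "1996": (45, 140)}

def avg(hits, at_bats):
    return hits / at_bats

for year in ("1995", "1996"):
    print(year,
          f"Jeter {avg(*jeter[year]):.3f}",
          f"Justice {avg(*justice[year]):.3f}")   # Justice wins both years

def combined(seasons):
    total_hits = sum(h for h, _ in seasons.values())
    total_abs = sum(ab for _, ab in seasons.values())
    return total_hits / total_abs

print("Combined:",
      f"Jeter {combined(jeter):.3f}",       # ~.310 -- Jeter wins overall
      f"Justice {combined(justice):.3f}")   # ~.270
```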
The presentation appears straightforward, but Simpson’s Paradox, wittingly, or unwittingly, has led to incorrect conclusions. Recently, there have been examples of Simpson’s Paradox in the news and on social media related to vaccines and COVID mortality. One chart shows a line graph comparing death rates between vaccinated and unvaccinated for people aged 10-59 years old. The chart demonstrates that the unvaccinated consistently have a lower mortality rate. What’s going on here?
The issue is similar to the one we see with batting averages. The denominator in this case is the number of individuals in each age group. The graph combines groups which have different outcomes. If we look at the older age group, 50-59, separately, we see that the vaccinated fare better. Likewise, if we look at 10-49, we also see that the vaccinated fare better. Paradoxically, when looking at the combined set, the unvaccinated appear to have the better outcome. In this way, you’re able to make a case for opposite arguments using the same data.
The Data is Biased
Data cannot always be trusted. Even in the scientific community, over a third of researchers surveyed admitted to “questionable research practices.” Another research fraud detective says, “There is very likely much more fraud in data – tables, line graphs, sequencing data [– than we are actually discovering]. Anyone sitting at their kitchen table can put some numbers in a spreadsheet and make a line graph which looks convincing.”
This first example looks like someone did just that. I’m not saying this is fraud, but as a survey, it just does not generate any data that contributes to an informed decision. It looks like the survey asked respondents about their opinion of gas station coffee, or some other relevant current event.
- Very good
I’ve cropped the Twitter post to remove references to the guilty party, but this is the actual entire chart of final results of the survey. Surveys like this are not uncommon. Obviously, any chart created from the data resulting from the responses will show the coffee in question is not to be missed.
The problem is that if you had been given this survey and didn’t find a response that fit your thinking, you would skip the survey. This may be an extreme example of how untrustworthy data can be created. Poor survey design, however, can lead to fewer responses and those who do respond have only one opinion, it’s just a matter of degree. The data is biased.
This second example of data bias is from the files of “Worst COVID 19 Misleading Graphs.”
Again, this is subtle and not completely obvious. The bar graph shows a smooth – almost too smooth – decline in the percentage of positive COVID-19 cases over time for a county in Florida. You could easily draw the conclusion that cases are declining. That’s great, and the visualization accurately represents the data. The problem is in the data, which makes the bias more insidious because you can’t see it. It’s baked into the data. The questions that you need to ask include: Who is being tested? In other words, what is the denominator, or the population of which we are looking at a percentage? The assumption is that it is the entire population, or at least a representative sample.
However, during this period, in this county, tests were only given to a limited number of people. They had to have COVID-like symptoms, or had traveled recently to a country on the list of hot spots. Additionally confounding the results is the fact that each positive test got counted and each negative test got counted. Typically, when an individual tested positive, they would test again when the virus had run its course and would test negative. So, in a sense, for each positive case, there is a negative test case which cancels it out. The vast majority of tests are negative and each individual’s negative tests were counted. You can see how the data is biased and not particularly useful for making decisions.
AI Input and Training is Biased
There are at least two ways in which AI can lead to biased results: starting with biased data, or using biased algorithms to process valid data.
Many of us are under the impression that AI can be trusted to crunch the numbers, apply its algorithms, and spit out a reliable analysis of the data. Artificial intelligence can only be as smart as it is trained to be. If the data on which it is trained is imperfect, the results or conclusions cannot be trusted either. Similar to the case of survey bias above, there are a number of ways in which data can be biased in machine learning:
- Sample bias – the training dataset is not representative of the whole population.
- Exclusion bias – sometimes what appear to be outliers are actually valid, or, where we draw the line on what to include (zip codes, dates, etc).
- Measurement bias – the convention is to always measure from the center and bottom of the meniscus, for example, when measuring liquids in volumetric flasks or test tubes (except mercury.)
- Recall bias – when research depends on participants’ memory.
- Observer bias – scientists, like all humans, are more inclined to see what they expect to see.
- Sexist and racist bias – sex or race may be over- or under-represented.
- Association bias – the data reinforces stereotypes
For AI to return reliable results, its training data needs to represent the real world. As we’ve discussed in a previous blog article, the preparation of data is critical, just as in any other data project. Unreliable data can teach machine learning systems the wrong lesson and will result in the wrong conclusion. That said, “All data is biased. This is not paranoia. This is fact.” – Dr. Sanjiv M. Narayan, Stanford University School of Medicine.
An algorithm is a set of rules that accepts an input and creates output to answer a business problem. Algorithms are often well-defined decision trees, yet they can feel like black boxes: nobody is sure how they work, often not even the companies that use them. Oh, and they’re often proprietary. Their mysterious and complex nature is one of the reasons why biased algorithms are so insidious.
Consider AI algorithms in medicine, HR, or finance that take race into consideration. If race is a factor, the algorithm cannot be racially blind. This is not theoretical. Problems like these have been discovered in the real world in AI used for hiring, ride-sharing, loan applications, and kidney transplants.
The bottom line is that if your data or algorithms are bad, the results are worse than useless; they may be dangerous. There is such a thing as an “algorithmic audit.” The goal is to help organizations identify the potential risks related to the algorithm as it relates to fairness, bias and discrimination. Elsewhere, Facebook is using AI to fight bias in AI.
People are Biased
We have people on both sides of the equation. People are preparing the analysis and people are receiving the information. There are researchers and there are readers. In any communication, there can be problems in the transmission or reception.
Take weather, for example. What does “a chance of rain” mean? First, what do meteorologists mean when they say there is a chance of rain? According to the US government National Weather Service, a chance of rain, or what they call Probability of Precipitation (PoP), is one of the least understood elements in a weather forecast. It does have a standard definition: “The probability of precipitation is simply a statistical probability of 0.01″ inch [sic] of [sic] more of precipitation at a given area in the given forecast area in the time period specified.” The “given area” is the forecast area, or broadcast area. That means that the official Probability of Precipitation depends on the confidence that it will rain somewhere in the area and the percent of the area that will get wet. In other words, if the meteorologist is confident that it is going to rain in the forecast area (Confidence = 100%), then the PoP represents the portion of the area that will receive rain.
Paris Street; Rainy Day, Gustave Caillebotte (1848-1894), Chicago Art Institute, Public Domain
The chance of rain depends on both confidence and area. I did not know that. I suspect other people don’t know that, either. About 75% of the population does not accurately understand how PoP is calculated, or what it’s meant to represent. So, are we being fooled, or is this a problem of perception? Let’s call it precipitation perception. Do we blame the weather forecaster? To be fair, there is some confusion among weather forecasters, too. In one survey, 43% of meteorologists surveyed said that there is very little consistency in the definition of PoP.
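Put as arithmetic, the NWS definition is just forecaster confidence multiplied by expected areal coverage. A quick sketch with hypothetical numbers:

```python
def pop(confidence: float, area_fraction: float) -> float:
    """Probability of Precipitation = confidence x areal coverage."""
    return confidence * area_fraction

# 80% confident rain develops, expected to cover 50% of the forecast area
print(f"PoP = {pop(0.8, 0.5):.0%}")  # 40% "chance of rain"

# Certain it will rain somewhere, but only 30% of the area gets wet
print(f"PoP = {pop(1.0, 0.3):.0%}")  # 30% "chance of rain"
```

So two very different forecasts can produce the same headline percentage, which is exactly why the number is so widely misread.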
The Analysis Itself is Biased
Of the five influencing factors, the analysis itself may be the most surprising. In scientific research that results in a peer-reviewed paper, typically a theory is hypothesized, methods are defined to test the hypothesis, data is collected, and then the data is analyzed. The type of analysis that is done, and how it is done, is underappreciated in how it affects the conclusions. In a paper published earlier this year (January 2022) in the International Journal of Cancer, the authors evaluated whether the results of randomized controlled trials and retrospective observational studies agree. Their findings concluded that,
By varying analytic choices in comparative effectiveness research, we generated contrary outcomes. Our results suggest that some retrospective observational studies may find a treatment improves outcomes for patients, while another similar study may find it does not, simply based on analytical choices.
In the past, when reading a scientific journal article, if you’re like me, you may have thought that the results or conclusions are all about the data. Now, it appears that the results, or whether the initial hypothesis is confirmed or refuted may also depend on the method of analysis.
Another study found similar results. The article, Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results, describes how they gave the same data set to 29 different teams to analyze. Data analysis is often seen as a strict, well-defined process which leads to a single conclusion.
Despite methodologists’ remonstrations, it is easy to overlook the fact that results may depend on the chosen analytic strategy, which itself is imbued with theory, assumptions, and choice points. In many cases, there are many reasonable (and many unreasonable) approaches to evaluating data that bear on a research question.
The researchers crowd-sourced the analysis of the data and came to the conclusion that all research includes subjective decisions – including which type of analysis to use – which can affect the ultimate outcome of the study.
The recommendation of another researcher who analyzed the above study is to be cautious when using a single paper in making decisions or drawing conclusions.
Addressing Bias in Analytics
This is simply meant to be a cautionary tale. Knowledge can protect us from being taken in by scams. The more aware we are of the methods a scammer might use to fool us, the less likely we are to be taken in by, say, a pickpocket’s misdirection or the smooth talk of a Ponzi scheme. So it is with understanding and recognizing the potential biases that affect our analytics. If we are aware of potential influences, we might be able to present the story better and ultimately make better decisions.
THE GLOBAL DEPLOYMENT of the 5G mobile network, which will allow faster upload and download speeds, colossal bandwidth, and reduced latency, has been a decade in the making. Many have eagerly anticipated the enhanced speed and performance, but what's less understood is the magnitude of improvement 5G will deliver and the impact it is likely to have on our world.
For a network to be classified as 5G, it must provide a minimum of 20 Gbps for downloads and 10 Gbps for uploads. By contrast, the first iteration of 4G technology had minimum download and upload speeds of 150 Mbps and 15 Mbps, respectively. This massive increase in speed also comes with a remarkable decrease in latency.
At the same time, 5G’s speed and network performance require a substantial amount of data to be collected and a tremendous amount of computing power. As such, the arrival of 5G will broadly redefine operations for many businesses, as it will call for more data and workloads to be cloud-based, accelerating the adoption of cloud services and engendering late adopters to finally embrace the cloud.
Here are five predictions about the expected impact of 5G on cloud services:
- 5G will enhance digital connectivity around the globe. This will be especially true in developing countries where a lack of widespread internet infrastructure has limited connectivity to date. Put simply, 5G has the potential to level the playing field worldwide with wireless access to information and the digital economy that is cloud-based.
- 5G connections will improve the performance of current innovations such as wearable technology and other devices that have historically relied on syncing with larger devices, given their limited internal storage capacity. With the benefit of 5G’s ultra-low latency, these devices will evolve to operate independently, syncing with the cloud in real time. Cloud-based apps will also see a spike in innovation, especially those currently offering a scaled-down version of their full potential to account for today’s slower network performance.
- Cloud service providers will evolve their offerings, reach new customers, and make new investments in cloud technology. They will innovate and offer more features to mobile users, and hotspots will become faster, allowing remote workers access to cloud services even where internet connectivity is lacking. Early evidence of this is the 2019 announcements of Microsoft’s partnership with AT&T around Azure, and Amazon Web Services’ partnership with Verizon, both to advance 5G innovation.
Quantum News Briefs August 30: German consortium using quantum technology to enhance satellite measurement stability; Max Planck Institute physicists develop new method to drive quantum entanglement of photons
Quantum News Briefs August 30 begins with news about the German consortium using quantum technology to enhance satellite measurement stability, followed by another German announcement from the Max Planck Institute, whose physicists have developed a new method to drive quantum entanglement of photons. Next, the Briefs move to the US Midwest, where Indiana University, Purdue University, and the University of Notre Dame will develop industry- and government-relevant quantum technologies as part of the Center for Quantum Technologies, with a grant from the National Science Foundation. And MORE.
German consortium using quantum technology to enhance satellite measurement stability
A German consortium composed of Q.ANT, Bosch, TRUMPF, and the German Aerospace Center (DLR) plans to use quantum technology to permanently enhance satellite measurement stability. The announcement from Cologne was made in PhotonicsMedia and summarized here by Quantum News Briefs.
Reliable transmission of satellite communication signals can only be achieved by constantly maintaining high-precision attitude control of satellites in their orbits. If a satellite moves out of position, the signals get weaker. Beyond their use in satellites, attitude and position sensors that harness quantum effects can be used for autonomous driving systems and indoor navigation technologies in factories, logistics warehouses, and other facilities.
The partners will develop space-qualified attitude sensors in a project that will improve internet access, particularly in remote regions, TRUMPF said in a press release. The aim is to use these quantum technology-based sensors to achieve high-precision attitude control of miniaturized satellites. The sensors’ ability to maintain precise orientation of the satellites in relation to each other will enable high-speed data connectivity.
Quantum sensors are particularly suitable for deployment in satellites since they provide reliably accurate measurement results and excellent performance in a compact, low-weight package. This solution can keep satellites correctly oriented in space over a period of years.
The DLR hopes to launch its first miniaturized satellites equipped with quantum technology in five years. “The goal of developing European quantum sensors is to achieve greater independence from the global market,” the press release said.
“This strategic partnership shows the tremendous potential that lies in the collaborative development of pioneering technologies,” said Michael Förtsch, CEO of Q.ANT. “The deployment of quantum technology in the aerospace industry is a huge opportunity for Germany as a major industrial hub.”
Max Planck Institute physicists develop new method to drive quantum entanglement of photons; technique could be boon for quantum computers
Max Planck physicists have discovered a way to drive the quantum entanglement of photons. Staff writer Michael Irving covered the research for New Atlas and pointed out that the technique, which entangled a record number of photons, could be a boon for quantum computers. Quantum News Briefs summarizes the research and its implications below.
To work best, large groups of particles need to be produced and entangled together, but this is tricky to do. So for the new study the Max Planck researchers investigated a more reliable method of quantum entanglement, and used it to successfully entangle 14 photons together – the largest group of photons entangled so far.
Their process is far more efficient than existing techniques, producing photons more than 43% of the time, or almost one photon for every two laser pulses. Fourteen entangled particles might not sound like a whole lot – scientists have managed to entangle literally trillions of atoms in a gas in previous experiments. But we’re not going to be able to harness a system like that for quantum communications or computers. Photons are far simpler to produce and use in everyday technology, and the efficiency of this new technique should be relatively simple to scale up for increased photon production.
To that end, the team says that the next step is to experiment using at least two atoms as sources.
Three research universities to collaborate with industry, government to develop quantum technologies with grant from NSF
Researchers from Indiana University (both Bloomington and IUPUI campuses), Purdue University and the University of Notre Dame will develop industry- and government-relevant quantum technologies as part of the Center for Quantum Technologies with a grant from the National Science Foundation. Purdue will serve as the lead site.
“The Center for Quantum Technologies is based on the collaboration between world experts whose collective mission is to deliver frontier research addressing the quantum technological challenges facing industry and government agencies,” said Gerardo Ortiz, Indiana University site director, scientific director of the IU Quantum Science and Engineering Center and professor of physics. “It represents a unique opportunity for the state of Indiana to become a national and international leader in technologies that can shape our future.”
The new Center for Quantum Technologies will team with member organizations from a variety of industries, including computing, defense, chemical, pharmaceutical, manufacturing and materials. The center’s researchers will develop foundational knowledge into industry-friendly quantum devices, systems and algorithms with enhanced functionality and performance.
Committed industry and government partners include Accenture, the Air Force Research Laboratory, BASF, Cummins, D-Wave, Eli Lilly, Entanglement Inc., General Atomics, Hewlett Packard Enterprise, IBM Quantum, Intel, Northrop Grumman, NSWC Crane, Quantum Computing Inc., Qrypt and Skywater Technology.
Additionally, the Center for Quantum Technologies will train future quantum scientists and engineers to fill the need for a robust quantum workforce. Students engaged with the center will take on many of the responsibilities of principal investigators, including drafting proposals, presenting research updates to members, and planning meetings and workshops.
Master equation to boost feedback control at quantum level
Physicists do not yet have an equivalent understanding of feedback control at the quantum level. Now, Foundational Questions Institute (FQXi)-funded physicists have developed a “master equation” that will help engineers understand feedback at the quantum scale. EurekAlert! covered the findings, summarized here by Quantum News Briefs.
“It is vital to investigate how feedback control can be used in quantum technologies in order to develop efficient and fast methods for controlling quantum systems, so that they can be steered in real time and with high precision,” says co-author Björn Annby-Andersson, a quantum physicist at Lund University, in Sweden.
An example of a crucial feedback-control process in quantum computing is quantum error correction. A quantum computer encodes information on physical qubits, which could be photons of light, or atoms, for instance. But the quantum properties of the qubits are fragile, so it is likely that the encoded information will be lost if the qubits are disturbed by vibrations or fluctuating electromagnetic fields. That means that physicists need to be able to detect and correct such errors, for instance by using feedback control. This error correction can be implemented by measuring the state of the qubits and, if a deviation from what is expected is detected, applying feedback to correct it.
Annby-Andersson and his colleagues have now developed a master equation, called a “Quantum Fokker-Planck equation,” that enables physicists to track the evolution of any quantum system with feedback control over time. The team tested their equation by applying it to a simple feedback model. This confirmed that the equation provides physically sensible results and also demonstrated how energy can be harvested in microscopic systems, using feedback control.
The analysis and related experiments are partially funded by a grant from the Foundational Questions Institute, FQXi. “It is a great example of a successful collaboration between two different teams based at the University of Maryland, College Park, and at Lund University,” says co-author and FQXi member Peter Samuelsson, a quantum physicist at Lund University.
Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona.
To build robust, high-performing computer vision models, you need data—and lots of it. That’s why synthetic data and augmented data tools have now been integrated into the Chooch AI Platform.
This means the AI lifecycle is now shorter, because you can train AI models faster and with less data. How? By automatically generating thousands of annotated images, and then using these synthetic images to train and deploy computer vision models.
Data augmentation takes your annotated images and transforms them: e.g. distorting them, cropping them, flipping and rotating them, adding noise, and pasting objects onto new backgrounds.
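A minimal sketch of these transformations with the Pillow imaging library follows. The input filename is hypothetical, and a production pipeline would also regenerate the matching annotations (for example, bounding boxes) for each transformed image:

```python
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

src = Image.open("annotated_sample.jpg")  # hypothetical annotated image

augmented = [
    ImageOps.mirror(src),                                 # horizontal flip
    src.rotate(15, expand=True),                          # rotation
    src.crop((20, 20, src.width - 20, src.height - 20)),  # crop
    src.filter(ImageFilter.GaussianBlur(radius=2)),       # blur/noise
    ImageEnhance.Contrast(src).enhance(1.5),              # contrast shift
]

for i, img in enumerate(augmented):
    img.save(f"augmented_{i}.jpg")
```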
Synthetic data generation creates training data for your AI models in the form of high-quality, realistic, and highly diverse computer-generated images. With just two files of the desired object—a 3D geometry file and its corresponding texture file—you can generate hundreds or thousands of images that will significantly boost the performance of your computer vision models.
The result? Computer vision with synthetic data.
The benefits of synthetic data generation and data augmentation include:
Any object that can be modeled in 3D can also be used for synthetic data generation.
There’s just one question: how do you generate synthetic data and augmented data? With the help of visual AI platforms like Chooch, synthetic data generation and data augmentation are as simple as a few clicks.
Data augmentation: Click on the “Augmentation” button in the Chooch dashboard to augment a given dataset. You can choose from multiple options: shifting, scaling, rotating, flipping, noise, blurring, contrast, and brightness. You can use the default parameters, or fine-tune them to your liking. Finally, select the number of augmented images to generate from each source image.
Synthetic data generation: First, upload your .OBJ 3D geometry file and the associated .MTL texture file to Chooch. Then, you can specify the image background and the number of images you want to create. The Chooch platform will automatically generate images, along with their corresponding bounding box annotations, in a matter of seconds.
With Chooch AI, you can quickly deploy models onto your devices so you can start testing the platform immediately. We can also create custom models for your particular use case. To get started, fill out the form to create an account on the Chooch AI platform now.
The vulnerability management lifecycle is a cybersecurity process that strengthens an organization’s capacity to foresee and react to cyberattacks.
What Is A Cybersecurity Vulnerability?
As far as IT security is concerned, a vulnerability is a weakness or a limitation that enables an attacker to access a system. Three elements must be present for a vulnerability to become a threat.
A system weakness. This is a deficiency within the network or an app. Through this weakness, a hacker is able to inflict harm on a system.
Access to the weakness. A hacker can launch the attack by using a technique or a tool.
The ability to exploit the weakness. The actual damage is inflicted when the cyberattack is conducted.
When all these three factors exist, there is an exploitable vulnerability within the system. When neglected, it is like a time bomb that can cause tremendous damage in the unfortunate event of an attack.
The Pillars Of The Vulnerability Assessment Lifecycle
Vulnerability management is a complex process that takes several steps to succeed. It typically evolves with the growth of the network.
Here are the stages of the process:
It is essential to inventory all existing assets within the network; this inventory will be used regularly in finding vulnerabilities.
After inventorying all the assets, rank their importance to the organization and determine who has access to these resources.
Locate the critical assets and double-check the standards and policies for information protection. To do so, assess the business processes, the applications and services, the network infrastructure map, the existing control systems, the information protection processes, etc. Update this consistently to get the full picture of vulnerabilities throughout your system.
Locate the critical assets and classify them to ensure the effectiveness of the prioritization. Prioritize the assets that can generate the most significant risks.
It is essential to categorize these assets according to business units or groups depending on how important they are to business operations.
Accomplish a proper assessment by creating a risk profile for each of your assets.
Vulnerability scans at operating system level, web server level, web application level, etc. must be performed at this phase. Prioritize the vulnerabilities, locate any wrong configuration, and pinpoint human error.
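As a toy illustration of what a scanner automates, the sketch below checks a software inventory against a feed of known-vulnerable versions. The hosts, versions, and advisory IDs are all hypothetical; real scanners probe live systems and query vulnerability databases such as the NVD:

```python
# Hypothetical inventory gathered from hosts on the network
inventory = {
    "web-01": {"nginx": "1.18.0", "openssl": "1.1.1k"},
    "app-02": {"openssl": "3.0.1", "python": "3.9.2"},
}

# Hypothetical feed mapping vulnerable versions to advisory IDs
known_vulnerable = {
    ("openssl", "1.1.1k"): "EXAMPLE-CVE-0001",
    ("python", "3.9.2"): "EXAMPLE-CVE-0002",
}

findings = [
    (host, pkg, ver, known_vulnerable[(pkg, ver)])
    for host, pkgs in inventory.items()
    for pkg, ver in pkgs.items()
    if (pkg, ver) in known_vulnerable
]

for host, pkg, ver, advisory in findings:
    print(f"{host}: {pkg} {ver} matches {advisory} -- prioritize for patching")
```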
All gathered data must be compiled in a custom report that outlines the prioritized vulnerabilities. It should include step-by-step instructions that must be followed to decrease the security risk that may emerge from these vulnerabilities.
This will serve as recommendation on how to have a prompt and adequate response to any eventual problems.
Start troubleshooting with the riskiest vulnerabilities. Begin by monitoring them, address the issues causing the vulnerabilities and oversee the situation.
Sometimes, patching your software is enough to address a known vulnerability.
All the network devices must be regularly monitored to keep up with the evolving threats.
Once vulnerabilities have been identified and resolved, there must be regular follow-up audits to ensure they won’t happen again. Also, the success of the process must be reassessed.
Verification is crucial as it limits the exposure of your system to threats, reduces the attack surface, and minimizes the impact of cyberattacks.
Eventually, the verification stage is useful to check if the previous phases have been successfully implemented.
The Importance of the Vulnerability Management Lifecycle
More than ever, organizations rely on their networks and systems for conducting their daily operations, financial transactions, and reputational stability.
A chain is as strong as its weakest link, so a robust vulnerability management program along with a strong cybersecurity plan can protect your organization when the next attack occurs. Therefore, risk mitigation should be prompt and timely to avoid unnecessary expenses and reputational damage.
Regular Patches and Updates
As expected, routine checks for vulnerabilities will lead to frequent updates and patches.
Compliance
Assessing vulnerabilities raises awareness of the relevant industry regulations that organizations must comply with. It also supports a proactive strategy for risk mitigation.
Defense Against Advanced Threats
A regular vulnerability management program can provide a solid defense against advanced attacks, sealing the vulnerabilities before any exploitation happens.
The Value of Continuity
Consistency and continuity are essential to stay updated on all emerging threats.
Acting proactively is always better than constant remediation, saving resources before they are wasted on late responses.
The Advantage of Prioritization
Prioritizing the assets that can generate the most significant risks is key. This can be achieved by studying the guidelines carefully and clearly understanding which vulnerabilities should be remediated first.
Trust the Experts
Unfortunately, threats are constantly evolving. It can be disastrous to leave it up to chance when cybersecurity is at stake.
Our team of experts can provide consistent intelligence towards data, software, applications, and networks to identify, investigate and respond to vulnerabilities.
StratusPointIT can provide expert assistance and recommendations in crafting policies, best practices, and specifications, helping your team create a solid vulnerability management program that can withstand the harshest of cybersecurity threats.
The blockchain is a disruptive technology, and its implications are not fully understood yet. Here is how the blockchain could change the audit world.
For better or worse, the blockchain is generally associated with Bitcoin. No surprise there as it was specifically developed to support Bitcoin. However, leaving the merits of cryptocurrencies aside for the moment, it is blockchain that is now being identified as a technology that will disrupt all industries, with global companies continuing to invest in new applications.
While IBM predicts that 66 percent of all banks will have commercial blockchain products by 2020, the potential applications are not limited to finance. In fact, according to a Market and Markets report, the blockchain technology market will be worth more than $2 billion by 2021.
What makes the technology so attractive to investors? How does the blockchain work?
“The blockchain is a distributed ledger that is comprised of “blocks” that each have data. This data can be currency information, protected healthcare information, or any sensitive information, and these blocks make up the blockchain. To make this distributed ledger, the data must be mined by a person who sets up a computer to identify transactions and convert them into a digital item called a hash. When the data is converted into a hash, it then provides assurance that it occurred, non-repudiation. The hash also can provide completeness and accuracy of the transaction from one organization to another organization, or person to person,” said Avani Desai, Principal Privacy Leader and EVP of Schellman & Company, Inc., an independent security, privacy, and standards compliance assessor.
It is these qualities that make auditing an ideal use case for the blockchain.
“When you look at those three things--non-repudiation, completeness, and accuracy—those are the core principles that auditors are focused on providing reasonable assurance. So, now with the blockchain, there is no more reasonable assurance—there is 100% assurance,” said Desai.
Audits come in many forms, whether financial audits, compliance and regulatory audits, or security audits, and blockchain tech can be used for all of them, claimed Schellman’s Desai.
She is certainly qualified to confirm this fact, given that she is a Certified Information Systems Security Professional (CISSP), Certified Information Systems Auditor (CISA), Certified Internal Auditor (CIA), Cloud Security Alliance (CSA) Certificate of Cloud Security Knowledge (CCSK), Certified Information Privacy Professional (CIPP), ISO 27001 Lead Auditor, and Project Management Professional (PMP).
“When you think of an audit, you are performing an examination of the financial statements—or, in the case of a compliance or regulatory audit, you’re examining a set of requirements or standards. Also, you’re typically selecting accounts and activities to confirm accuracy through supporting evidence. With the blockchain, that evidence is there through the transaction—then ultimately, the hash. Blockchain allows the auditor to make a judgement based on the transaction via the entire population—they don’t have to just make a judgement based on the evidence or the sample,” said Desai.
Sample testing will no longer be required; instead, every transaction is verified by analyzing the entire blockchain in a manner that is currently impossible, or at least labor-intensive.
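What makes full-population verification feasible is the chained hash structure of the ledger itself. A minimal sketch in Python, using only the standard library: each block’s hash covers its transaction data plus the previous block’s hash, so altering any earlier entry breaks every hash after it:

```python
import hashlib
import json

def block_hash(data: dict, prev_hash: str) -> str:
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

GENESIS = "0" * 64
chain, prev = [], GENESIS
for tx in ({"from": "A", "to": "B", "amount": 100},
           {"from": "B", "to": "C", "amount": 40}):
    h = block_hash(tx, prev)
    chain.append({"data": tx, "prev": prev, "hash": h})
    prev = h

def verify(chain) -> bool:
    """Audit the entire population: recompute and confirm every link."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))              # True
chain[0]["data"]["amount"] = 999  # tamper with an early transaction
print(verify(chain))              # False: the change is immediately detectable
```

This tamper evidence is what lets an auditor test every transaction rather than a sample.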
Data Analytics Ties It All Together
If blockchain technology is used, it will resolve problems inherent in current audit processes. Companies facing a pending audit will not be able to reverse-engineer documentation in bulk to satisfy compliance, as every action is time-stamped and shared with all members of the blockchain, and editing previous entries is not possible.
“As I mentioned with the financial audit, security and compliance audits will see the introduction through data analytics, which will be necessary because the data rendered available by blockchain will be immense. The use of data visualization will allow auditors to not only provide assurance over the systems, but it will also allow consulting firms to assist with planning and decision making,” said Desai.
In fact, auditors of the future will incorporate several new skill sets to verify audit integrity.
Shifting Roles for Auditors
“The work of the auditor will transform accordingly to become more of an analyst [role] that will read and interpret the data recorded on the blockchain, while providing assurance that the blockchain itself is secure. Auditors will also ensure that the technology around the blockchain, such as the data centers and point of sales system, are understood and secure,” said Desai.
In addition, identity verification and links between responsible blockchain members are confirmed.
“Compliance auditors will also assist with handling of identities—those of either individuals, organizations, or corporations—and how to link these identities with assets. Additionally, traceability, which is defined as the potential to ensure provenance of goods as they move through the global supply chain, or to locate information, money and digital assets at any given moment, will allow auditors to trace what happens with an asset over time while, at the same time, providing proof of transactions in real time,” added Desai.
100% Audit and Compliance Assurance
As Desai pointed out, there are several problems that are difficult to solve with traditional audit methods but blockchain has the potential to solve all of them.
“One big problem was that auditors have never been able to provide 100% assurance—that is why we always hear auditors talk about “reasonable” assurance instead. Sample selection methodology is common among most audit testing, which means that auditors typically just select a sample within the audit period and hope that their sample will show if there are any issues,” said Desai.
However, blockchain technology will now render more effective audits, she added.
“For example, the balances between accounts will be observable in real time with time stamping features—whatever transaction is recorded is tagged with the exact time it was carried out. By using a hash code, a unique alphanumeric signature that corresponds to one single transaction, the work of auditors will be made much simpler by solutions that will connect the payments done by one company to the corresponding entry in their suppliers' book-keeping. The same hash will be found, both on the side of the payer and the side of the receiver, proving that a specific transaction occurred between them. Blockchain technologies will also allow auditors to access proof for all transactions with all suppliers and clients at once. Therefore, the famous accounting principle of double entry will be broadened to irrefutable proof of the corresponding entries and exits of companies interacting with the one being audited,” said Desai.
When such a program is initiated, ‘fudging the numbers’ will no longer be possible and everyone involved in the blockchain is accountable for their entries. Unfortunately, despite investment in blockchain technology, audit programs of this nature are still at the theoretical stage.
Desai predicts that the next step will be the global collaboration of regulators, auditors, companies, and compliance officers, all exchanging experiences on the development and implementation of blockchain. She predicts a Blockchain Regulatory World Summit involving the global contributors to this technology.
In conclusion, when this technology is rolled out, there is no doubt that it will be more efficient and less prone to manipulation but what about the role of the auditor?
Impact on the Auditing Profession
“Blockchain technologies are still in their infancy and have not yet reached the maturity we see with other technologies being used in these processes. Because it’s so new, it’s difficult right now to implement this technology for operation on a larger scale,” said Desai.
She also recommends that auditing professionals take action.
“Right now is the ideal time for those auditors who want to be at the forefront of their profession to learn the details of how blockchain will change their audit testing and programs—not only that, but they should also begin contributing to blockchain’s development. Because this technology will also eventually change requisite job skills, this interdisciplinary new job description will be an opportunity for professionals to be both competent in auditing and programming,” said Desai.
Good advice for all who expect blockchain to impact their industry and bear in mind that the blockchain is considered disruptive technology that will impact all industries. It is NOT limited to those involved in cryptocurrencies or fintech. Consider possible applications for your company. Are you prepared to take on the challenge of a new methodology for multiple departments? | <urn:uuid:7b7f6a47-fb59-4840-9598-0eee75d4b78f> | CC-MAIN-2022-40 | https://www.ipswitch.com/blog/how-blockchain-technology-will-change-the-audit-world | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00106.warc.gz | en | 0.945908 | 1,749 | 2.75 | 3 |
Dee Bee What - Know the difference
dB should be used when comparing 2 different output powers or adding to a known power level. For example
"This antenna provides 2dB more gain than this other antenna"
It is a relative term which means that you use it when there is another output power to relate to.
You cannot for example state
"My radio outputs 20dB of Power"
To do so would lose you a very large number of "geek points"
dB is a logarithmic measurement which means that every 3dB you add to an output power doubles the total output power and every 3dB you subtract from an output power halves the total output power. Adding 10dB increases the total output power by 10 times and deducting 10dB decreases the total output power by the same
dBm is used when you want to reference the output power of a radio. It is a known scale (with 0dBm being equal to 1mW). For example
"This radio outputs a whopping 50dBm and thus Ofcom are likely to come round and beat you with a large stick"*
dBm also follows the same logarithmic scale as dB so knowing that 0dBm is 1mW we know that 3dBm is 2mW and 6dBm is 4mW etc. The "m"; indicates that it is a reference against a milliwatt reading.
dBi is most often used when referencing Antenna gain (note this is gain rather than output power as the antenna itself has no output power it just increases the output power of the radio). For example
"I have a 20dBm radio and I add a 10dBi Antenna to it, this gives me 30dBm of total output power"
dBi is referenced to an Isotopic source which is a theoretic perfect omnidirectional antenna (an example of this is the Sun). It is used as it is a constant value that can be used for comparisons. The "I" indicates that it is a reference against an isotropic antenna.
dBd is used when comparing the amount of gain an antenna has against a Diapole antenna. A diapole antenna has a gain 2.15 dBi. This means that a 0dBd gain antenna has 2.15dBi of gain
*Please note, as far as I am aware it is not common practice for Ofcom to beat people with sticks | <urn:uuid:3272f113-90d9-46c1-ae8f-2b96d66f17ca> | CC-MAIN-2022-40 | https://www.digitalairwireless.com/articles/blog/dee-bee-what-know-difference | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00106.warc.gz | en | 0.937202 | 517 | 3.28125 | 3 |
The island of Ta’u has gone green. And it’s not just because of the tropical foliage. Ta’u now boasts its own microgrid, thanks to technology from Tesla and SolarCity. The massive million-dollar project allows the island to run almost completely on solar power.
Ta’u is one of the 7 American Samoan Islands and one of 3 making up the National Park of American Samoa. Visitors enjoy lush greenery and breathtaking sea cliffs. Rustic trails wind through the volcanic remnant’s rocky terrain.
Before the microgrid was installed, Ta’u relied on diesel generators. However, the island’s isolated location northwest of Fiji made it vulnerable to shipping irregularities. Imported fossil fuel was a costly and unreliable energy source. When shipments fell behind schedule, authorities rationed power, sometimes restricting energy use to mornings and afternoons.
5,328 SolarCity panels now save the island millions of gallons of diesel fuel per year. Each generator burned 300 gallons of fuel per day; That’s 109,500 gallons of diesel fuel per year PER GENERATOR! The residents now enjoy clean energy, reliable power, and stable fuel costs. In addition, the entire array of panels (1.4 Megawatts) recharges in only 7 hours and a field of 60 Tesla Powerpacks stores the energy. Even with cloudy skies, the system can power the island for 3 days. | <urn:uuid:0f154b69-218b-4c96-9ca9-3f4fd1422fa3> | CC-MAIN-2022-40 | https://davidpapp.com/tag/solar/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00106.warc.gz | en | 0.91194 | 293 | 2.828125 | 3 |
Boeing and NASA are making progress on developing a spacecraft to taxi astronauts to and from the International Space Station, NASA announced Thursday.
NASA said Boeing tested the crew space transportation spacecraft‘s parachute and landing systems Wednesday.
Parachute demonstrations occurred by lifting the CST-100 crew capsule 14,000 feet above the Delamar Dry Lake Bed in Nevada with a helicopter.
CST-100 initiated a parachute deployment sequence and descended to the ground, while cushioned by six inflated air bags.
Capsules are intended to be reusable and seat up to seven people once complete.
Roger Krone, president of Boeing's network and space systems unit, said earlier this week the company expects the CST-100 to be operational by 2015.
Boeing’s air taxi is scheduled to undergo additional tests in the coming months that will provide data on the system’s design.
CST-100 will conduct initial test flights on the United Launch Alliance’s Atlas V rocket and NASA said the spacecraft is designed for compatibility with multiple launch vehicles. | <urn:uuid:76f748a8-0b7c-4766-bd40-c719a4859935> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2012/05/boeing-space-taxi-landing-system-tested/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00106.warc.gz | en | 0.917102 | 218 | 2.953125 | 3 |
The term demarcation means the setting or marking of boundaries or limits. Demarcation point is the service point where the electrical-utility power transmission/distribution grid with installation governed by the NESC (National Electrical Safety Code) stops, and the electrical power distribution system for a building or other structure (premise electrical-power distribution system) with installation governed by the NEC begins.
Dielectric / Dielectric Strength.
A dielectric is an insulator. Dielectric strength relates to the maximum voltage that a given insulation can withstand without breaking down.
Difference of Potential / Potential Difference.
The voltage created between two points in an electrical system separated by an open circuit or other impedance when supply power is applied. The voltage measured across a circuit or circuit component is often referred to as the potential difference as well.
Double-Insulated / Ungrounded Tool.
Any of several ungrounded electrical tools that are constructed so that the case, normally made of a nonconductive material, contains a second type of insulation from electrical energy.
An insulating material between the plates inside of a capacitor. | <urn:uuid:07a25a48-51bf-4390-a5da-82b3419f8e6b> | CC-MAIN-2022-40 | https://electricala2z.com/glossary/electrical-engineering-terms-d/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00106.warc.gz | en | 0.894872 | 241 | 3.59375 | 4 |
K-12 schools are all rapidly moving toward “one-for-one” programs, where every student has a computer, usually a laptop. Couple this with standardized, cloud-based testing services, and you have the potential for an Internet gridlock during the testing periods. Some of the common questions we hear are:
How will all of these students using the cloud affect our internet resource?
Will there be enough bandwidth for all of those students using on-line testing?
What type of QoS should we deploy, or should we buy more bandwidth?
The good news is that most cloud testing services are designed with a fairly modest bandwidth footprint.
For example, a student connection to a cloud testing application will average around 150kbs (kilo-bits per second).
In a perfect world, a 40 megabit link could handle about 400 students simultaneously doing on-line testing as long as there was no other major traffic.
On the other hand, a video stream may average 1500kbs or more.
A raw download, such as an iOS update, may take as much as 15,000kbs, that is 100 times more bandwidth than the student taking an on-line test.
A common belief when choosing a bandwidth controller to support on-line testing is to find a tool which will specifically identify the on-line testing service and the non-essential applications, thus allowing the IT staff at the school to make adjustments giving the testing a higher priority (QoS). Yes, this strategy seems logical but there are several drawbacks:
- It does require a fairly sophisticated form of bandwidth control and can be fairly labor intensive and expensive.
- Much of the public Internet traffic may be encrypted or tunneled, and hard to identify.
- Another complication trying to give Internet traffic traditional priority is that a typical router cannot give priority to incoming traffic, and most of the test traffic is incoming (from the outside in). We detailed this phenomenon in our post about QoS and the Internet.
The key is not to make the problem more complicated than it needs to be. If you just look at the footprint of the streams coming into the testing facility, you can assume, from our observation, that all streams of 150kbs are of a higher priority than the larger streams, and simply throttle the larger streams. Doing so will insure there is enough bandwidth for the testing service connections to the students. The easiest way to do this is with a heuristic-based bandwidth controller, a class of bandwidth shapers that dynamically give priority to smaller streams by slowing down larger streams.
The other option is to purchase more bandwidth, or in some cases a combination of more bandwidth and a heuristic-based bandwidth controller, to be safe.
Please contact us for a more in-depth discussion of options.
For more information on cloud usage in K-12 schools, check out these posts:
For more information on Bandwidth Usage by Cloud systems, check out this article: | <urn:uuid:4a7c4ccd-07ae-4559-8cb3-1e3990e97f52> | CC-MAIN-2022-40 | https://netequalizernews.com/2015/06/15/does-your-school-have-enough-bandwidth-for-on-line-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00106.warc.gz | en | 0.936665 | 608 | 2.671875 | 3 |
WASHINGTON —Money is a social and legal construct underpinned by trust. Conceptions of money have evolved and money has taken many forms over the years. In North America, pre-colonial trade was often conducted in wampum, corn, and fur pelts.
Today, for the United States, whatever specific objectives may arise for a central bank digital currency (CBDC), they should be consistent with the Federal Reserve’s longstanding objectives of the safety and efficiency of the nation’s payments system, as well as monetary and financial stability. A CBDC arrangement must be in keeping with these objectives, which have guided the central bank since its establishment in 1913.
A foundational element for introducing a CBDC is understanding its purpose: What can a CBDC be used for, how it can be used, and what potential value does it provide? A recent Bank for International Settlements report highlighted a number of potential benefits for a CBDC. These include enhancing payment system resiliency, increasing payments diversity, encouraging financial inclusion, and improving cross-border payments. Research papers and other reports have referenced the potential for a CBDC to support monetary policy. It is important to consider that a CBDC that is designed to support monetary policy transmission or economic stimulus payments, for example, would be quite different than a CBDC that is designed to be an alternative to cash. Without clear objectives, it would be difficult to establish business requirements for a CBDC.
Sources: This map was compiled using data from the March 2020 BIS Quarterly Review and a 2020 working paper from the IMF, “A survey of research on retail central bank digital currency,” and supplemented through additional secondary research. CBDC activity tracking sites from organizations such as the Atlantic Council were used. Motivations were broadly determined by the authors using the public statements attributed to sources within the central banks themselves or in some cases other news sources.
Central bank interest in CBDC research and experimentation varies significantly. However, these interests generally fall into two broad categories. One set of central banks is primarily looking to address present-day challenges, while for others it is exploring future capabilities. For some jurisdictions, a CBDC is intended to address a specific problem — inefficient payment systems, weak banking infrastructure, or declining cash use — or to promote national policy goals, such as supporting payments inclusion and protecting monetary sovereignty.
For many advanced economies, the primary motivations are centered on potential payments innovation and general preparedness for a potential future state. highlights some of the central banks’ primary motivations.
Source and Map: Federal Reserve by Jess Cheng, Angela N Lawson, and Paul Wong. Select here to visit their website for this full article. | <urn:uuid:7b2ed08d-6332-4509-b25e-7d34df01a712> | CC-MAIN-2022-40 | https://bdpatoday.com/tag/financial-inclusion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00106.warc.gz | en | 0.958036 | 544 | 2.984375 | 3 |
In March 2020, the White House announced its decision to develop the National Strategy to Secure 5G in hopes of expanding modern technological use in the federal government. However, the process has been difficult for companies to follow, as 5G introduces challenges like an increased surface area, software weaknesses, and decreased visibility. The Cybersecurity and Infrastructure Security Agency (CISA), a federal agency that aims to understand, manage, and secure cyber and physical infrastructure, has introduced a five-step 5G implementation process. With its partners from the Department of Homeland Security’s Science and Technology (S&T) Directorate, as well as the Department of Defense’s Office of the Under Secretary of Defense for Research and Engineering (OUSD R&E), CISA offers federal agencies a blueprint to initiating and navigating the risk management process for authorizing 5G systems.
5G has become a priority because it is a wireless technology that combines ubiquitous connectivity and computing. By using more bandwidth and higher frequencies, 5G succeeds in carrying more data quickly and efficiently. This type of network will be critical for projects in fields such as transportation, national defense and industrial production. The technology differs from most in that it allows users to specialize coverage to specific Internet of Things (IoT) or smart devices.
Hurdles to Implementation
The first hurdle agencies have faced is an increased attack surface after installing 5G. Since connecting to the IoT would mean working in a space where security is not built in, there would be a wider range of vulnerabilities for attackers to find. Secondly, connecting to a wider network would increase the surface of the network, supply chain and software weaknesses, opening it up to more areas of attack. Finally, adding 5G can decrease the network visibility, and companies who do not utilize additional network visibility software may not gain network traffic viewership to identify abnormalities.
Benefits of 5G
Despite potential setbacks, 5G has a variety of extremely useful features that will aid federal agencies’ technology use in innovative ways. Through the technology’s low, middle and high band radio spectrum, network slicing and edge computing, 5G will provide the public sector with new features, capabilities and services. Security enhancements include:
- Shield and Encryption identification that protect information from rogue devices
- Data routing through virtual hubs that cannot be easily changed or moved that allows device compatibility with intelligent software and virtual hardware
- A stronger encryption algorithm that disincentivizes hackers from decrypting your private information
- High band radio spectrum that creates a more secure connection at quicker speeds
- A consistent user experience across network alternatives
- Effortless network securement in remote locations, which can allow users to safely conduct business remotely
- Safe network access on mobile devices that impact productivity and quick remediation capabilities
To combat security risks, CISA has put forth a strategic initiative plan for federal agencies to follow.
There are five main steps in adapting 5G technology:
- Define the federal 5G use case
- Identify the assessment boundary
- Identify security requirements
- Outline security requirements to match federal guidance
- Assess security guidance gaps and alternatives
One of the main goals of this process is to help agencies fill potential security gaps if they had applied 5G on their own. Agencies should utilize this initiative to identify important threat frameworks, 5G system security considerations, industry security specifications, federal security guidance documents and relevant methodologies to conduct cybersecurity assessments of 5G systems. The standards provide a uniform and flexible approach to 5G and help federal agencies better evaluate, understand and address security and resilience to inspire future innovation.
A Positive Future
While every new technological advancement comes potential weaknesses, they provide greater potential for ensuring the security of the nation and the economy. As a result, CISA helps users add 5G to their networks in a uniform and flexible manner, and encourages agencies and organizations to provide feedback on the 5G Security Evaluation Process. These comments will be taken into consideration for adapting the strategy and guaranteeing that the guidelines will assist every government agency in their journey to safely adapt 5G.
Carahsoft and its vendors bring together a vast variety of security experts. With our aid and solutions, we can help you gain the knowledge and software needed to implement and evaluate 5G and keep your network secure. For more information regarding the evaluation process for 5G in federal government, visit our website.
“5G Security and Resilience,” Cybersecurity and Infrastructure Security Agency. https://www.cisa.gov/5g
“What is 5G Security? Explaining the Security Benefits and Vulnerabilities of 5G Architecture,” AT&T Cybersecurity. https://cybersecurity.att.com/blogs/security-essentials/what-is-5g-security
“5G Security Evaluation Process Investigation Version 1,” Cybersecurity and Infrastructure Security Agency. https://www.cisa.gov/sites/default/files/publications/5G_Security_Evaluation_Process_Investigation_508c.pdf
“What is 5G an Why Does it Matter,” Verizon. https://www.verizon.com/about/our-company/5g/what-5g | <urn:uuid:f8915593-e392-4826-8d38-8593288055c8> | CC-MAIN-2022-40 | https://www.carahsoft.com/community/carahsoft-cisa-5g-security-blog-2022 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00106.warc.gz | en | 0.917368 | 1,106 | 2.640625 | 3 |
Digital privacy is paramount to the global community, but it must be balanced against the proliferation of digital-first crimes, including child sexual abuse, human trafficking, hate crimes, government suppression, and identity theft. The more the world connects with each other, the greater the tension between maintaining privacy and protecting those who could be victimized.
Global digital privacy
Online communication can connect and enrich people’s lives, but it is also being leveraged for malicious purposes. Bad actors can now reach a broader audience of potential victims, coordinate with others, share the most effective practices, and expand their illegal activities while being protected by a shield of online anonymity. The ability to scale harmful activities is as efficient as scaling community-building practices. The Internet has provided an environment for predators to thrive.
The challenge is to respect the rights of individuals while still allowing systematic controls to protect, dissuade and, when necessary, investigate for prosecution those who are purposefully undermining the safety of global citizens. Just as in the physical world, law enforcement is tasked with protecting people from criminals.
They require the ability to investigate crimes in a timely manner and identify suspects for prosecution. The right to privacy and the risk of being victimized are in conflict. Users, companies, and governments are intertwined and struggling to effectively understand and deal with legacy and evolving threats.
As this landscape is evolving, we wanted to start the conversation on what is the right balance of privacy and safety online.
A zero-sum game of privacy and safety
Currently, there is a perception of a zero-sum game for privacy and safety in the digital world. Expectations, regulations, and enforcement are fragmented, confusing, and inadequate. In 2009, the Child Online Protection Act (COPA) was overturned by the Supreme Court, finding that it violated first amendment rights.
The practical implications of this legislative change, coupled with Section 230 of the Communications Decency Act (CDA) of 1996, which holds that platforms are not responsible for what third-party publishers post on them, is that children are no longer protected from adult content by websites – the responsibility was transferred to their parents.
The Children’s Online Privacy Protection Act of 1998 (COPPA) is the current law that protects child data privacy online. It mandates that any company that has users under that age of 13 on their platforms must prove that the parents gave their permission (often accomplished by entering credit card information to prove identity) and can’t retain data from children under 13.
Many platforms avoid addressing these restrictions by stating no one under the age of 13 is allowed on their platforms, but they do not have practices in place for proof of identity to enforce them in a meaningful way. They usually use a “check the box if you are over 13“ honor system, so many children online end up lacking the privacy or safety protections that COPPA was meant to provide them.
Parents who are raising this generation of digital natives are digital immigrants themselves. They were young enough to adjust to the trends of social, mobile, and cloud; but they were mostly in their 20s when they gained access to it. This has left a significant knowledge gap in what cyberbullying, grooming, and sextortion tweens and teens experience.
This generation of teens is exhibiting the highest rates of mental health issues and suicides we have seen to date. This teen suicide trend is even more alarming when you factor in that, according to the Center for Disease Control, deaths from youth suicide are only part of the problem, because more young people survive suicide attempts than actually die.
Contributing author: Matthew Rosenquist, CISO, Eclipz.io. | <urn:uuid:878332c1-dc7d-445d-9eee-9fe8b5af378a> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2020/07/21/global-digital-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00306.warc.gz | en | 0.958917 | 743 | 2.90625 | 3 |
Reducing Data Centers’ Carbon Footprint
Published on April 1, 2020,
Regulations for data centers continue to evolve as time passes. From strategies to improve business efficiency to DCIM discussions, governmental mandates, and virtualization, development is becoming not just optional but essential.
As Information Technology companies strive to be evermore environmentally friendly, attention turns to greenhouse gasses and the carbon footprint.
- Energy Conservation
Data centers can accumulate more energy expenditures than needed for the data center to fully function. Nlyte Software has means to enable data centers to determine which company resources are essential to the computing needs and which are not. This allows companies to replace or remove the inefficient or stagnant resources. Resultantly, some companies find as great as a 20% increase in power and cooling efficiencies, among other results.
Specifically looking at overall server workload, capacity, and use can decrease greenhouse gas emissions. Nlyte enables reduction by aligning various functions through such means as workload virtualization and capacity increase for underutilized servers. Not only does this reduce the company’s overall e-Waste, but it also decreases the manufacturing contribution to the carbon footprint.
Although one-time changes can decrease the carbon footprint, finding sustainable solutions and continually monitoring energy use is essential to make the greatest impact. Nlyte offers solutions that allow report generation at any time in order to get an overview of the data center’s energy use. This gives directional confirmation or redirection to help make positive changes in energy reduction.
Nlyte has been able to implement these changes in a variety of situations, including with Computacenter. With more than 10,000 employees and grossing over 3 billion pounds, Computacenter is Europe’s leading independent provider of IT infrastructure services. They specialize in advising on IT strategy, implementation, management, and complexity reduction.
The company decided they needed a DCIM solution in order to plan and forecast the most effective use of new and existing data centers to meet growing demand. Nlyte was able to meet this need through their DCIM offerings. Nlyte enabled time to be saved, data to be accurate, power and utilization management to be optimized, and more.
Sam Brickett, Head of Data Center Services at Computacenter, attests to Nlyte’s DCIM impact:
With Nlyte, and its advanced analytic capabilities, we have been able to gain control over our data centres. The ability to view, model and predict our data centre power, cooling and space requirements in near real-time enables us to make the most effective use of our distributed estate” said Simon. “In addition, we can deliver our customers with services tailored to meet their exact requirements and help them to minimise their data centre energy usage and carbon footprint.”
To find out more about implementing a DCIM solution, refer to Getting Started with DCIM, DCIM Benefits versus Direct, Indirect, and Hidden Costs, or Data Center Efficiency: Strategies for Improvement. | <urn:uuid:64b4c732-8e23-46b2-9bbc-4e72c7e0cb2f> | CC-MAIN-2022-40 | https://www.nlyte.com/blog/reducing-data-centers-carbon-footprint/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00306.warc.gz | en | 0.920641 | 627 | 2.578125 | 3 |
What is a Data Center?
All data centers are essentially buildings that provides space, power and cooling for network infrastructure. They centralize a business’s IT operations or equipment, as well as store, share and manage data.
Businesses depend on the reliability of a data center to ensure that their daily IT operations are always functioning. As a result, security and reliability are often a data centers top priority.
In this technology explainer we look at the different classifications of a data center; Hyperscale, Colocation, Wholesale Colocation, Enterprise, and Telecom, and explore what they do and who they are for.
Hyperscale Data Center
- A Hyperscale (or Enterprise Hyperscale) data center is a facility owned and operated by the company it supports. This includes companies such as AWS, Microsoft, Google, and Apple.
- They offer robust, scalable applications and storage portfolio of services to individuals or businesses.
- Hyperscale computing is necessary for cloud and big data storage.
- Has anywhere from 500 Cabinets upwards, and at least 10,000sq ft. in size.
- Usually have a minimum of 5,000 servers linked with an ultra-high speed, high fiber count network.
- May use external companies on initial fit outs before maintaining internally.
- Noticeable difference from Enterprise to Hyperscale is the High Fiber Count utilized across the network.
Colocation Data Center
- Colocation Data Centers consist of one data center owner selling space, power and cooling to multiple enterprise and hyperscale customers in a specific location.
- Interconnection is a large driver for businesses. Colocation data centers offer interconnection to Software as a Service (SaaS) such as Salesforce, or Platform as a service (PaaS) like Azure. This enables businesses to scale and grow their business with minimum complexity at a low cost.
- Colocation companies offer technical guidance for companies that don’t know what they require, or want the hassle to source and deliver it.
- Other Colocation facilities have a slightly different model where chosen integrators provide the technical design, guidance and specification for migrating customers.
- Depending on the size of your network requirement, you can rent 1 Cabinet to 100 Cabinets, in some cases ¼ or ½ a cabinet is available.
- A colocation data center can house 100s if not 1000s of individual customers.
Wholesale Colocation Data Center
- Wholesale colocation data centers consist of one owner selling space, power and cooling to enterprise and hyperscale like standard colocation.
- In these instances Interconnection is not really a requirement. These facilities are used by hyperscale or large companies to hold their IT infrastructure.
- In most cases wholesale colocation provide the space, power and cooling.
- A number of wholesale colocation companies are adding standard colocation into their portfolio on the same sites where possible.
- Wholesale colocation tend to support less customers, depending on the data center size, this can be typically under 100 tenants.
- Typically the cabinet numbers range from 100 cabinets to 1000+ Cabinets.
Enterprise Data Center
- An enterprise data center is a facility owned and operated by the company it supports and is often built on site but can be off site in certain cases also.
- May have certain sections of the data center caged off to separate different sections of the business.
- Commonly outsources maintenance for the M&E but runs the white space themselves via the IT team.
- May use external companies on initial fit-outs and network installation before being maintained internally.
- Has anywhere from 10 Cabinets upwards and can be as large as 40MW+.
Telecom Data Center
- A telecom data center is a facility owned and operated by a Telecommunications or Service Provider company such as BT, AT&T or Verizon.
- These types of data centers require very high connectivity and are mainly responsible for driving content delivery, mobile services, and cloud services.
- Typically the telecom data center uses 2 post or 4 post racks, to house IT infrastructure, however cabinets are becoming more prevalent.
- Use their own staff to install and manage the sites, initial install and continual routine. A lot become lights out sites.
- Some Telco companies run the data center within a Data Centre, for example a Colocation data center.
- Telco Data Centres are now utilising space within their facilities to add additional services such as Colocation
Soon there will be another classification of data center. The Edge data center. Early indications show Edge data centers will support IoT, autonomous vehicles and move content closer to users, with 5G networks supporting much higher data transport requirements. It is expected Hyperscale and Telecom companies will largely push or compete for the emerging business. It is too early to predict the detailed shape and scale of Edge computing but we do know that some form of Edge computing will evolve and that there will be lots of fiber involved.
With different data centers come very different needs and network architecture types. The varying network architectures are all united in the want for higher speed, performance, efficiency and scalability. What is certain is that our want for greater technologies, whether it be IOT, automation, or AI, alongside our consumption of social media, and streaming services, will continually put pressure on data centers to innovate and grow as we continue to move into a more connected world. | <urn:uuid:ea8eb02d-7e14-4df4-8341-f397b71a86bc> | CC-MAIN-2022-40 | https://www.aflhyperscale.com/articles/techsplainers/understanding-different-types-of-data-center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00306.warc.gz | en | 0.921854 | 1,119 | 2.6875 | 3 |
Proactive vs Reactive Routing Protocols
Routing protocols are the routes that help to learn dynamic routes. These protocols are organized on routers in regards with exchanging the information related with routing. Using the routing protocols in your network has many benefits like router has the ability to advertise the failing of router. Also you did not need to configure manually every route in each router in the network.
Further these routing protocols can be categorized in six various forms but we are going to talk about only two of them – reactive and proactive protocols. These both protocols are utilized in mobile Ad hoc networks for sending data to the destination from the host. This information is sent through multiple ways from source to destination that are mobile and can be located on car, bus, ship or aeroplane.
Generally, this type of network is utilized in a military field, a disaster hit area or on in area where infrastructure is demolished or does not exist. The network’s node work as the routers and transmit data from one node to another until it reaches the destination. As the data has to covered the various nodes so the routing protocol in important to deliver the data at correct location.
Comparison: Proactive vs Reactive Routing Protocols
Reactive protocol is divides in two types – Ad hoc On-Demand Distance Vector (AODV) and Temporary Ordering Routing Algorithms (TORA). In AODV routing protocol, the work of node is independent and does not carry the information of other nodes or adjacent node in the network. The process only when the data is transferred to them to maintain the route with the destination. These nodes comprise of the information of the route from which the data has to be transferred so the passing of information packet is followed by predetermined route. TORA is a very adaptive and efficient process as it works with all the shortest possible routes from source to destination. In this type of protocol, each and every node carries the information of its neighbouring nodes. It also has the ability to ensure the journey of the data, creation of route and erase the route if there is any partition within the network.
Related – AODV Routing Protocol
Destination Sequence Vector or DSDV router is utilized in this type of protocol that was designed with the algorithm of Bellmann-Ford. All the information regarding with next node is maintained in this protocol. All the nodes that are mobile have to relay its entries with the adjacent nodes. The nodes that lies in the route deliver the data packet from one node to another after the mutual agreement. So, for this purpose all the nodes have to constantly update their position in DSDV protocol to avoid the interruption in the route.
- Reactive protocol is a on demand process that means determine routes whenever needed while the proactive protocols traditional process but provides the shortest path.
- The packet data is delivered in more efficiently in the reactive protocol than in proactive protocol.
- Proactive protocols are much slower than the reactive protocols in terms of performance.
- For the different topographies, reactive protocol is more efficient and adaptive than the proactive protocols.
- For the reactive protocol, the time taken or average end to end delay by the data to reach the destination from the source is quite variable while in proactive it is constant for the a given Ad hoc network. | <urn:uuid:5f91ded6-2bb7-43aa-a869-a3a9d4fb8ec9> | CC-MAIN-2022-40 | https://networkinterview.com/proactive-vs-reactive-routing-protocols/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00306.warc.gz | en | 0.938388 | 682 | 3.328125 | 3 |
Managing the performance of database systems and applications is a significant job responsibility for DBAs. From a database perspective, there are three basic performance components that must be performed:
- Monitoring the database management system and the applications accessing it to find problems as they arise. This is typically referred to as performance monitoring.
- Analyzing performance data (logs, trace records, reports, etc.) from the system to determine the root cause of problems.
- Assembling a corrective action to implement a fix to the problems.
There are database performance software products that can aid with all three of these components. But you must be careful to fully understand the capabilities of any database performance management solution, as some simply monitor, others just analyze data or provide fixes for problems, and others deliver functionality combining all of these tasks.
You can also break down database performance management software by the category of performance issues it addresses. Database performance problems can occur in any of the following three areas:
- The DBMS itself, which must interact with other system software and hardware, requiring proper configuration to ensure it functions accurately and performs satisfactorily. Additionally, there are many database system parameters used to configure the behavior of the DBMS and the resources it has available to it. This includes criteria such as memory capacity, I/O throughput, and locking of data pages.
- The database design and schema, including database parameters, table designs, and indexing, can all impact database performance. How the data is organized must also be managed; as data is modified in the database, its efficiency will degrade. Reorganization and defragmentation are required to periodically remedy disorganized data.
- Finally, the SQL and application code itself can cause performance issues. Coding efficient SQL statements can be complicated because there are many different ways to write SQL that return the same results. But the efficiency and performance of each formulation can vary significantly. DBAs need tools that can monitor the SQL code that’s being run, show the access paths it uses, and provide guidance on how to improve the code.
Database performance tools can identify bottlenecks and points of contention, monitor workload and throughput, review SQL performance and optimization, monitor storage space and fragmentation, and view and manage your system and DBMS resource usage. Of course, a single tool is unlikely to perform all of these tasks, so you likely will need multiple tools (perhaps integrated into a functional suite) to perform all of your required database performance management tasks.
Without proactive tools that can identify problems as they occur, database performance problems are most commonly brought to the attention of the DBA by end users. The phone rings and the DBA hears a complaint that is usually vague and a bit difficult to interpret, such as, “My system is slow today” or, “My screen isn’t as fast as it used to be.” To resolve such issues, DBAs need tools that can help uncover the exact problem and identify a solution. Database performance management tools can be used to find the root cause of such problems as well as to deploy a solution to fix the problem.
Furthermore, many organizations use multiple DBMS products in production, and the same DBA team (and sometimes even the same exact DBA) will have to ensure the performance of more than one DBMS (such as Oracle and SQL Server or Db2 and PostgreSQL). But each DBMS has different interfaces, parameters, and settings that affect how it performs. Database performance tools can mitigate this complexity with intelligent interfaces that make disparate components and settings look and feel similar from DBMS to DBMS.
There are many providers of database performance management tools, including the DBMS vendors (such as IBM, Microsoft, and Oracle), large ISVs (such as BMC and CA), and a wide array of niche vendors that focus on DBA and database performance software (for example, Quest, IDERA, and Navicat).
The exact database performance management solutions you should use depend upon the database systems you utilize, the size of your organization, the amount of data managed, your service level agreements, and your budget. But managing production databases without performance tools is a recipe for failure. | <urn:uuid:4fb1ab97-e1a3-46b9-9e9b-863b0f2343ba> | CC-MAIN-2022-40 | https://www.dbta.com/Columns/DBA-Corner/Managing-Database-Performance-128940.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00306.warc.gz | en | 0.927085 | 856 | 2.515625 | 3 |
Software development involves creating, designing, documenting, and debugging code to build an application, framework, or digital utility. Software development is done primarily by software developers…..
DevOps is a software development method that emphasizes collaboration and communication between software developers and information technology (IT) professionals. What is DevOps? DevOps is a…..
As the progressive movement of the world makes evolving DevOps too. It all started a decade ago. As every development has a few stages so…..
Software development has grown to be a monstrous industry, reigning in almost $430 billion in 2021, with an estimated annual CAGR of 11.7% for the…..
Well, in a typical organization or a company, the development team is responsible for creating products, while the operations team is responsible for managing and…..
DevOps is among the most leveraged practices to improve software development and delivery. By uniting developers and operations teams, enterprises are able to achieve faster…..
Developers use many tools and techniques to manage an IT project and for its tracking. Among them, Azure DevOps and Jira are the most popular…..
DevOps is a popular term in the IT world, but what does it actually mean? In simple terms, DevOps is a set of practices that…..
DevOps to accelerate the Agile Process In today’s fast-paced world, organizations need to be effectively agile to match the ever-changing market needs and keep it…..
What is DevOps? DevOps can be described as a combination of practices, tools, and cultural philosophies, which helps foster the organization or a business’s ability…..
SAN RAMON, CA, UNITED STATES, June 21, 2022. Kovair Software, one of the leaders in software development tools and integrations, has now achieved Red Hat…..
DevOps combines development and operations to improve workflows. Traditionally, developers write programming scripts from scratch. But doing so means you risk the chance of human…..
When discussing AI in software development, we often talk about machine learning. But is this the same thing? Can machine learning replace DevOps? And can…..
The cloud has taken the business world by storm. It’s no wonder, really – the cloud offers a wide range of benefits that businesses can…..
What is DevOps? DevOps is the result of combining the tools and practices of software development and the cultural philosophies of IT operations. When done…..
DevOps helps to ensure that new features are of a consistent quality, which leads to happier customers. It’s no secret that customer satisfaction is key…..
That one aim every organization dreams of is high-profit margins and enhanced team productivity. While the fact is, every business indeed dreams about it, very…..
Changes in the economy have prompted 40% of surveyed business leaders to implement changes in their physical security strategy. With the increasing concern over physical…..
The IT sector is quickly adapting to artificial intelligence (AI) and machine learning has a significant role in this rapid improvement. Nowadays, we hear words…..
The deployment pipelines of Power BI are a (relatively) new Power BI functionality for report testing, change management, and managing collaboration. NOTE: If you want…..
Initially launched as visual studio team services, Azure DevOps is extensively built, maintained, and managed by Microsoft. Hence Azure DevOps is a fast, secure, reliable,…..
In this article, we are going to explore concepts that drive successful DevOps with test automation. We are going to comprehend DevOps and its benefits,…..
Introduction Imagine pushing a button and immediately getting value from your work. If you’ve ever been the victim of software or process debt, or simply…..
According to a 2020 data breach investigation by Verizon, there were 3,950 data breaches, and that 41% of data breaches were a result of software…..
As we are moving towards the end of 2021, the demand for constant and rapid improvements in digital solutions has gone up. To keep pace…..
Imprecise implementation or Cultural misinterpretation may have you missing the true benefits of DevOps DevOps is a huge buzzword in software engineering. Often seen as…..
Security is one of the major concerns for any enterprise. As technology advances and businesses incorporate them for scalability, they must also spend time to…..
As the working population moves to more remote places, connections become more dispersed as the corner moves further away. Simultaneously, the number of products and…..
DevOps is becoming more and more important for the IT industry. Because automation is speeding up processes, similarly small, mid, and large-sized businesses are shifting…..
As the IT industry grows, so as the need or demand for DevOps operations! It is gaining momentum in recent times and businesses are trying….. | <urn:uuid:7757e5a6-d61a-4ab7-9b02-007219bde7f5> | CC-MAIN-2022-40 | https://www.kovair.com/blog/tag/devops-implementation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00306.warc.gz | en | 0.939299 | 976 | 2.5625 | 3 |
Mutual authentication, also known as two-way authentication, is a security process in which entities authenticate each other before actual communication occurs. In a network environment, this requires that both the client and the server must provide digital certificates to prove their identities. In a mutual authentication process, a connection can occur only if the client and the server exchange, verify, and trust each other's certificates. The certificate exchange occurs by means of the Transport Layer Security (TLS) protocol. The core of this process is to make sure that clients communicate with legitimate servers, and servers cooperate only with clients who attempt access for legitimate purposes.
The mutual authentication process involves the following certificates:
Root CA certificate
Used to identify a certificate authority (CA) that signed a client's certificate. It is a self-signed certificate that meets the X.509 standard, defining the format of public key certificates. In IoT products, clients upload a root CA certificate or a certificate chain to verify that the certificates that client devices send to edge servers can be trusted.
Server SSL certificate
Used to identify edge servers to client devices over TLS and to establish a secure connection during the TLS handshake. It is the enhanced TLS certificate that you provide in your property configuration.
Client SSL certificate
Used to identify client devices to edge servers over TLS. This certificate must meet the X.509 standard, defining the format of public key certificates.
Updated 9 months ago | <urn:uuid:4d6cb47e-a25d-45cc-b371-cc0c956db369> | CC-MAIN-2022-40 | https://techdocs.akamai.com/iot-edge-connect-msg-store/docs/mutual-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00506.warc.gz | en | 0.904752 | 294 | 3.703125 | 4 |
The computer chip industry over the last couple of decades has seen its innovation stem from just a few top players like Intel, AMD, NVIDIA, and Qualcomm. In this same time span, the VC industry has shown waning interest in start-up companies that made computer chips.
The risk was just too great; how could a start-up compete with a behemoth like Intel which made the CPUs that operated more than 80% of the world’s PCs? In areas that that Intel wasn’t the dominate force, companies like Qualcomm and NVIDIA were a force for the smartphone and gaming markets.
The recent resurgence of the field of artificial intelligence (AI) has upended this status quo. It turns out that AI benefits from specific types of processors that perform operations in parallel, and this fact opens up tremendous opportunities for newcomers. The question is – are we seeing the start of a Cambrian Explosion of start-up companies designing specialized AI chips? If so, the explosion would be akin to the sudden proliferation of PC and hard-drive makers in the 1980s. While many of these companies are small, and not all will survive, they may have the power to fuel a period of rapid technological change.
There is a growing class of start-ups looking to attack the problem of making AI operations faster and more efficient by reconsidering the actual substrate where computation takes place. The graphics processing unit (GPU) has become increasingly popular among developers for its ability to handle the kinds of mathematics used in deep learning algorithms (like linear algebra) in a very speedy fashion. Some start-ups look to create a new platform from scratch, all the way down to the hardware, that is optimized specifically for AI operations. The hope is that by doing that, it will be able to outclass a GPU in terms of speed, power usage, and even potentially the actual size of the chip.
Accelerated Evolution of AI Chip Start-ups
One of the start-ups that is well-positioned to enter the battlefield with the giant chip makers is Cerebras Systems. Not much is known publicly about that nature of the chip the Los Altos-based startup is building. But the company has quietly amassed a large war chest to help it fund the capital-intensive business of building chips. In three rounds of funding, Cerebras has raised $112 million, and its valuation has soared to a whopping $860 million. Founded in 2016, with the help of 5 Nervana engineers, Cerebras is chock full of pedigreed chip industry veterans. Cerebras co-founder and CEO Andrew Feldman previously founded SeaMicro, a maker of low-power servers that AMD acquired for $334 million in 2012. After the acquisition, Feldman spent two and a half years as a corporate vice president for AMD. He then started up Cerebras along with other former colleagues from his SeaMicro and AMD days. Cerebras is still in stealth mode and has yet to release a product. Those familiar with the company say its hardware will be tailor-made for “training” deep learning algorithms. Training typically analyzes giant data sets and requires massive amounts of compute resources. […] | <urn:uuid:176c008c-13bc-41f9-b1a4-063b755c80df> | CC-MAIN-2022-40 | https://swisscognitive.ch/2018/03/27/will-artificial-intelligence-spark-a-chip-cambrian-explosion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00506.warc.gz | en | 0.961997 | 646 | 2.734375 | 3 |
Bias is preventable
Not surprisingly, consumers continue to be concerned about bias. Indeed, Genpact research found that 78 percent of consumers say it's important that companies take active measures to prevent it, and 67 percent are apprehensive about potential discrimination when robots make decisions.
And the reality is that an AI algorithm is only as good as the data it uses, and teams' unconscious biases can creep in when there's little diversity among their members.
But banks can help mitigate this issue by building the necessary oversight into the solution from the start. They can develop a series of rules that prevent the machine from repeating the mistakes a human would make, such as issuing a loan inappropriately or denying mortgages to people within a certain demographic segment.
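As a minimal sketch of what one such rule might look like, the snippet below flags approval rates that diverge across segments. It assumes a hypothetical decision log of (segment, approved) pairs; real fairness audits go far beyond this single demographic-parity check.

```python
# Hypothetical guardrail: flag a lending model whose approval rates
# diverge sharply across demographic segments (a demographic-parity
# check). Illustrative only, not a complete bias audit.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (segment, approved) pairs -> rate per segment."""
    totals, approved = defaultdict(int), defaultdict(int)
    for segment, ok in decisions:
        totals[segment] += 1
        approved[segment] += int(ok)
    return {s: approved[s] / totals[s] for s in totals}

def parity_alert(decisions, max_gap=0.10):
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

flagged, rates = parity_alert([
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
])
print(flagged, rates)  # True, rates approx {'A': 0.67, 'B': 0.33}
```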
And this is good news for banks and customers alike. For example, in the past, a loan officer might overlook creditworthiness if he or she knew the customer. But when trained effectively, AI can't ignore such binary decisions; it applies the same criteria to every applicant.
There's little denying AI's benefits to both banks and their customers. To make sure the technology can deliver its expected value, banks must develop an ethical framework of transparency and education to gain and retain their customers' trust.
This article was authored by Mark Sullivan, Global Business Leader, Banking & Capital Markets, Genpact and was first published in Money Control. | <urn:uuid:682d9005-ab8b-49b4-9177-7f1d4f0ea722> | CC-MAIN-2022-40 | https://www.genpact.com/insight/banking-customers-will-embrace-ai-if-they-trust-whats-behind-the-curtain | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00506.warc.gz | en | 0.957713 | 271 | 2.9375 | 3 |
As technology continues its blistering pace of advancement towards the next big thing, companies scramble to keep up with the demands of modern society. The arms race to provide customers with the services and products they are willing to spend their hard-earned money on is never-ending. DevOps is one of the newer advancements in the IT side of business operations, and it has everyone abuzz.
While it may seem like a lot of hype, DevOps has proven itself to be an invaluable tool in maximizing the potential of software enterprises to build, test, and deploy their products at blazing speeds and with more success than previously found using traditional methods. DevOps has earned its place among the greats as a system for empowering software teams to reach new heights, but what exactly is DevOps?
(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)
The Basics of DevOps
DevOps is more of a cultural shift in how software development is approached than it is any singular system or collection of tools. Having said that, the core pursuit of DevOps comes down to one thing: collaboration. Collaboration is at the core of everything DevOps wants to achieve; even the name itself is a blend of the words development and operations. As such, DevOps is a mentality of working together and finding new ways to collaborate better than before.
In pursuit of this, DevOps emphasizes the need for effective communication and extols the merits of transparency. Best practices, tools, and technology that enhance communication are cornerstones of an effective DevOps system. Things such as cloud computing, version control, and automation play a large role in the success of DevOps, but these tools are only a means to an end. The true secret sauce is the culture of collaboration and the leveraging of goal-oriented and customer-centric ideologies like Agile.
Agile’s Impact on Software Development
DevOps owes much of its history and success directly to Agile. In the early days of software development, development lifecycles were orders of magnitude longer than they are today. The entire process took place in siloes cut off from the users and customer-facing teams. The method that was first adopted for software development was called the “waterfall” approach, which held the idea that developers would define a customer need and then develop a single product that met that need successfully before releasing it to the public. This process caused long development cycles in which the developers had no interaction with customers, resulting in the complete lack of feedback and the creation of software that often fell short of user expectations.
The developers would work in their own silo while customer service and operations worked in their respective areas, and information passed seldom, and often inaccurately, between these disparate teams. The result was expensive projects that would often fail to adequately serve the needs of the customers. Without feedback and communication, products struggle to improve.
During the earlier days of development, when waterfall methods were commonplace, there was less competition due to the various barriers to entry in the software development sector. Those barriers began to rapidly fall by the wayside as technology leaped and bounded past even the most optimistic person’s expectations, and competition began to sprout from every corner of the world. The empire known as Google today famously began its life in a rented garage. As more competition appeared on the scene, practices for developing better software arose and Agile proved itself an invaluable development system.
With Agile, the development lifecycle was dramatically sped up and the process involved customer interactions throughout. The focus shifted from comprehensive documentation to creating operational software as quickly as possible so it could be tested and iterated upon rapidly. Agile also brought about the idea of software as a service (SaaS), where there is no truly “finished product” as the software is constantly being improved upon. Agile turned enormous projects into smaller, bite-sized deliverables that would be churned out on a regular basis with constant feedback.
And Then Came DevOps
As Agile is an approach to development focused on interaction, so too is DevOps. With DevOps, collaboration with teammates, customers, and executives should become second nature. A main application of Agile methodologies is the use of automation for building and testing, embracing the ideals of continuous integration and continuous delivery. DevOps also embraces the power of automation but takes it a step further by emphasizing the importance of creating cross-discipline teams and using automation to empower collaboration between team members and other teams.
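For a rough sense of what that build-test-deploy automation does, here is a toy Python sketch of a pipeline runner (the stage commands and the deploy.sh script are placeholders; real pipelines live in CI systems such as Jenkins, GitLab CI, or GitHub Actions):

```python
# Minimal continuous-integration sketch: run each stage in order and
# fail fast, so a broken build or failing test never reaches deployment.
import subprocess
import sys

PIPELINE = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "-q"]),
    ("deploy", ["./deploy.sh", "staging"]),  # hypothetical deploy step
]

def run_pipeline():
    for stage, cmd in PIPELINE:
        print(f"--- {stage} ---")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"{stage} failed; aborting pipeline")
    print("pipeline green: change delivered")

if __name__ == "__main__":
    run_pipeline()
```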
DevOps can be seen as an extension of Agile where communication is used as the tool to pull everything together into a fully functioning development unit. DevOps looks to fill the gaps within enterprises by promoting transparency and understanding for everyone – not just developers. Having a deeper understanding of each stage of the process empowers people with the knowledge of how to best perform their own role as well as how to aid others in performing theirs.
While Agile focuses on creating and testing new software, DevOps takes it a step further and looks to successfully create, test, and deploy the software. This distinction is important because of the role operations plays in the whole scheme. Agile looks to put developers into teams to speed up development, while DevOps creates teams of cross-discipline members where the whole is greater than the sum of its parts and everyone can approach the project with their unique perspective.
Agile’s main pursuit is speed, while DevOps’s is accuracy. Combining both successfully results in the best of both worlds, where teams rapidly build, test, and deploy updates that are stable and marked improvements on past builds. DevOps achieves this through the combination of cultural shifts in mentality and tools that empower the new ideology. Creating a DevOps team isn’t an overnight process, but the potential rewards are more than worth the effort.
DevOps: Solutions for You
If DevOps sounds like a good fit for your organization’s needs but you want to make sure you get it right the first time, BMC is the IT solution partner you need. Read more about how automation and DevOps systems can help increase the rate at which you deploy products with BMC’s free eBook: Automate Cloud and DevOps Initiatives.
BMC offers IT Cost Management solutions that help you optimize your spend on IT resources such as cloud services. Check out BMC’s free eBook on determining the true cost of clouds to get a better idea of how much implementation will cost your enterprise. DevOps leverages the power of various tools to create the best environment for teamwork and collaboration.
In addition to IT management resources, BMC offers consulting and deployment services. BMC expert consultants are available to work with you to bring their knowledge and expertise to your organization. BMC also provides custom-tailored Deployment Services for your organization to tackle the unique challenges you face. When partnering with BMC, you get:
- Faster service delivery: Agile releases that keep up with rapid demand
- Visibility across data: Ensure compliance and data accuracy
- Cost-effective service: Increased productivity and performance
- Experienced DevOps professionals: Equip you with the tools you need for success
- Conversion or upgrade: Seamless modernization or total replacement
All of these services are tailored to the specific needs of your organization.
Download or view the Solution Implementation Overview online to learn more about how BMC Consulting Services can help you. Learn more about how DevOps and Application Deployment best practices can enable your teams to create better software faster than ever before. Then contact the experts at BMC to learn more about how Agile and DevOps practices work together for enhanced building, testing, and deployment success. | <urn:uuid:c682e3f1-fa36-4ccf-95c2-303e7822b3c9> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/devops-agile/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00506.warc.gz | en | 0.957419 | 1,567 | 2.640625 | 3 |
Current monitoring methods either don’t have the capacity to scale globally, or simply don’t have the required resolutions––and fine-scale data is often not within reach.
Copyright by Mark Minevich, www.forbes.com
The world’s biodiversity status is in crisis mode––and Covid-19 has only exacerbated this reality. Covid has served as a stark reminder that negative interactions with species can directly impact our lives. Since 1970, the world has seen a significant 68% average decline of birds, amphibians, mammals, fish, and reptiles. In the Americas alone, natural ecosystems provide humans an estimated $24 trillion worth of economic value every year, equivalent to the entire gross domestic product. As wildlife changes occur, all ecosystems become less resilient and are more at risk. Without resilient ecosystems, agriculture, water and wildlife-based tourism are left significantly vulnerable.
Governments and administrations have been slow to install measures; meanwhile, private sector employers desire the competitive advantages that come with ‘green’ credentials, but don’t always know how to contribute effectively, leading to green-washing and wasted resources. In addition, employees prefer to work for companies with a good environmental record and welcome a chance to volunteer and participate, but engagement is often symbolic or short-term.
Traditionally, researchers have worked through tedious manual tasks, from identifying specific animals in photos for population studies to classifying the camera-trap images gathered by field workers. We need to pool together a smarter global effort, led by the UN and the public and private sectors, to produce accurate, data-driven, current global maps and hotspots of species numbers and distributions, and to develop prescriptive global conservation strategies. If we are to save our world’s biodiversity, now more than ever it is time to mobilize fully-integrated AI and machine learning solutions for wildlife conservation, and to ensure that these solutions are sustainable for decades to come.
Unlike the domains of finance, science, healthcare and the like, wildlife conservation is often left in the dark when it comes to advanced AI solutions. Nevertheless, there are global pioneering organizations and startups that are working towards real use cases in AI for Good applications to bring about resilient biodiversity. For example, the World Wildlife Fund (WWF) is working with Intel to apply AI to monitoring and protecting Siberian tigers in northeastern China. According to the International Union for Conservation of Nature (IUCN), the South China Tiger is a “critically endangered” species. Intel’s Movidius visual device, combined with the company’s back-end analysis and recognition platform, are leveraged by the WWF to track the habits of tigers, collect data on them, and use this information to help restore their wildlife resilience. On the topic of visual recognition, as per Synced, “Although image recognition is the most widely applied AI tech in wildlife conservation, researchers and startups have also leveraged other tech to create devices and systems to protect animals in more proactive ways. PAWS (protection assistance for wildlife securities) is an AI tool designed to help rangers in the fight against poachers. It collects historical data of poaching activities and suggests patrol routes according to where poaching is most likely to occur.”
In addition to Intel corporation, companies like WildTrack are also pioneering data driven biodiversity solutions. According to WildTrack, the organization’s “AI-enabled Footprint Identification Technology (FIT) offers a cost-effective and non-invasive tool to collect, analyze and distribute data on species numbers and distribution at the scale and resolution required.” Moreover, WildTrack champions the approach of data democratization. According to the British Ecological Society, democratizing data collection to include environmental supporters is a huge unexploited opportunity. Ecotourists, local communities (especially those with expert indigenous trackers, e.g. current partners in Botswana, Germany, Israel, & Namibia), outdoor enthusiasts, schools and universities could collect data across borders. Because of WildTrack FIT’s interactive interface, the company has the ability to encourage direct engagement in conservation principles. This addresses the importance of interactive digital assets and tools in the creation of more robust ecological frameworks.
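To illustrate the footprint-identification idea in the simplest possible terms, here is a toy nearest-neighbor matcher (the species, individuals, measurements, and threshold are all invented; WildTrack’s actual FIT pipeline is proprietary and far more sophisticated):

```python
# Toy footprint-based individual ID: match a new print's measurements
# (e.g., length, width, toe spread, heel depth, in cm) to the nearest
# known individual in a reference library.
import numpy as np

library = {  # hypothetical reference individuals -> feature vectors
    "cheetah_01": np.array([8.9, 7.4, 3.1, 1.2]),
    "cheetah_02": np.array([9.6, 8.0, 3.4, 1.5]),
    "cheetah_03": np.array([8.2, 6.9, 2.8, 1.0]),
}

def identify(print_features, max_distance=1.0):
    """Return the nearest known individual, or 'unknown' if none is close."""
    best, best_dist = None, float("inf")
    for name, ref in library.items():
        dist = np.linalg.norm(print_features - ref)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= max_distance else "unknown"

print(identify(np.array([9.5, 7.9, 3.3, 1.4])))  # cheetah_02
```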
Lastly, it is important to outline that all applied AI solutions for conservation must stem from the core premise of sustainability. Sustainable solutions include the following: […] | <urn:uuid:60d693a0-f991-4dad-b37d-06ff10209d56> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/10/29/ai-for-good-is-saving-the-planet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00506.warc.gz | en | 0.926676 | 933 | 2.890625 | 3 |
Many talk about and plan for an Internet of Things (IoT) future, but few truly understand the power of its capabilities. Predictions suggest that IoT will have as much impact on human lives, governments, businesses and institutions as the harnessing of water for steam power, the discovery of electricity and the computer age had on the generations before us. In other words, IoT will, and is already beginning to, transform the world around us in a way that will impact our daily lives.
We plan for a future where a worldwide network of sensors will one day give enterprises across the globe a new way of operating. Sensors will be connected via wireless technology to computers that then analyse the data, offering more insight and visibility into how people, devices, and systems are working together. We call this the Intelligent Enterprise.
Consumers are poised and ready for such IoT innovation, eagerly buying smart, connected products like washing machines and thermostats. However, while many tactically plan for an IoT future, they’ve failed to realise it’s already arrived. IoT is already reshaping the enterprise landscape, and to stay on top of the present and prepare for the future, enterprises must figure out how to connect and manage these devices and networks. To leverage their power, organisations need to identify the best tactics to benefit business processes and create new experiences for their employees, customers, partners and all stakeholders.
The new reality is that access to real-time data – from people, processes and devices through sensors connected to the internet – is revolutionising the way we interact with each other and our world. This visibility into what is going on, right now, is game-changing, and we are already seeing innovation across all of the core sectors.
The Internet of Things (IoT) is happening now
The idea of connecting the physical and digital worlds to drive innovation creates a more intelligent enterprise landscape. It requires businesses across industry sectors and market sizes to convene and partner with vendors, government, customers and academic institutions to come up with agreed-upon standard practices and guidelines so that all enterprises can connect with each other and become more intelligent.
Many industries have already begun the move to a more intelligent enterprise, for instance:
In the trucking industry, 30 percent of the average truck on U.S. roads is filled – with air. When humans load trucks, they can be inefficient. But if sensors are placed inside semi-tractor trailer trucks, workers monitoring the packing will learn how efficiently the truck has been loaded. If the amount of air transported were cut to zero and trucks were fully packed, the number of trucks on U.S. roads would drop by 10 percent, an incredible cost saving for shipping companies and a substantial reduction in carbon emissions.
The global Industrial Internet of Things (IIoT) market is predicted to reach $933.62 billion by 2025—up from $109 billion in 2016, per Grand View Research. The adoption of IIoT models has grown worldwide, stemming from the technology's ability to reduce costs and increase productivity, process automation, and time-to-market.
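As a quick sanity check on that forecast, the implied compound annual growth rate works out to roughly 27 percent, a back-of-envelope figure derived purely from the two numbers cited above:

```python
# Implied CAGR from the cited forecast: $109B (2016) -> $933.62B (2025).
start, end, years = 109.0, 933.62, 2025 - 2016
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~27.0% per year
```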
In the retail industry, four percent of potential revenue is lost every year because stores can’t satisfy customer demand for specific products in their inventory. What’s popular sells out, and empty or low-stocked shelves are lost opportunities to make a sale. But, if every item were tagged and tracked and restock orders were transferred in real-time to retail employees and warehouses, retailers would be able to capture a significant amount of that lost revenue. With average profit margins of two percent for most retailers, even a marginal improvement can result in major growth.
However, per the 2017 Retail Vision Study by Zebra Technologies, 70 percent of retail decision makers are ready to make changes to adopt IoT, while 72 percent of retailers plan to reinvent their supply chains with real-time visibility enabled by automation, sensors and analytics.
In the healthcare industry, the complex network of individual players – doctors, nurses, hospitals, insurance companies – makes consolidating, sharing, and analysing medical data extremely challenging. But, projects across the globe are attempting to use sensors and data analysis to improve information gathering and processing and ensure better care for patients.
According to the American Heart Association, heart disease is the No. 1 cause of death globally, leading to 17.3 million fatalities annually. That number is expected to rise to more than 23.6 million by 2030. New technologies can change those outcomes for patients.
We may also see exciting steps toward wearables and sensors in healthcare, which may soon be able to automatically transmit health data directly to doctors and therefore eliminate the need to manually enter the information into a system. This will decrease the number of face-to-face doctor visits and free up healthcare physicians to spend more quality time with each patient.
Even entertainment is changing. If you want to see what IoT can do, just look at an NFL player today. Two nickel-sized sensors embedded in each player’s shoulder pads communicate more than 15 times a second with 20 radio receivers placed throughout each NFL stadium.
Data on every player is reported live, in real-time. The sensors track how fast players are running, as well as the distance they travel and their acceleration and deceleration, among other things. Computers analyse and display the information live over game video. Football fans watching a game on screen can see the NFL’s Next Gen Stats, giving fans new visibility into the game with details that have never before been tracked.
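A simplified sketch of the underlying computation (the sample positions below are invented, and the real Next Gen Stats pipeline is of course proprietary): given timestamped (x, y) positions arriving roughly 15 times a second, speed and acceleration fall out of successive differences.

```python
# Derive speed and acceleration from timestamped (x, y) position samples,
# the kind of data the shoulder-pad sensors report ~15 times per second.
samples = [  # (t seconds, x meters, y meters) -- invented data
    (0.000, 10.0, 5.0),
    (0.067, 10.5, 5.1),
    (0.133, 11.1, 5.3),
    (0.200, 11.8, 5.6),
]

def speeds(samples):
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        out.append(dist / (t1 - t0))  # meters per second
    return out

v = speeds(samples)
accels = [(b - a) / 0.067 for a, b in zip(v, v[1:])]  # m/s^2, ~fixed dt
print([round(s, 1) for s in v], [round(a, 1) for a in accels])
```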
These are just a few of the ways in which the Intelligent Enterprise is already and will continue to make an impact on nearly every industry. A sensor may seem like a small device, but married with troves of data and the ability to understand and act on it brings a new wave of technological innovation and creativity to our world and improves the lives of people who live and work in it.
For more information please visit our website.
Richard Hudson, Vice President and General Manager EMEA, Zebra
Image Credit: Chesky / Shutterstock | <urn:uuid:03572670-7404-4b25-a458-224e3c0b50b7> | CC-MAIN-2022-40 | https://www.itproportal.com/features/how-iot-is-driving-innovation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00506.warc.gz | en | 0.948629 | 1,261 | 2.71875 | 3 |
The promotion of electric cars has dramatically increased the demand for lithium-ion batteries. However, cobalt and nickel, the main cathode materials for the batteries, are not abundant. If the consumption continues, it will inevitably elevate the costs in the long run, so scientists have been actively developing alternative materials.
A joint research team co-led by a scientist from City University of Hong Kong (CityU) has developed a much more stable, manganese-based cathode material. The new material has higher capacity and is more durable than the existing cobalt and nickel cathode materials – 90% of capacity is retained even after the number of charge-discharge cycles is doubled.
The research team was co-led by Dr. Liu Qi, Assistant Professor in the Department of Physics (PHY) at CityU, together with scientists from Nanjing University of Science and Technology (NUST), and the Institute of Physics, Chinese Academy of Sciences (IOPCAS). Their findings have been published in the scientific journal Nature Sustainability, titled “LiMnO2 cathode stabilized by interfacial orbital ordering for sustainable lithium-ion batteries.”
Lithium-ion batteries are now widely used in cell phones and electric cars. Most of the cathode materials contain cobalt and nickel, neither of which is abundant, and both of which pollute the environment during extraction. Therefore, scientists are searching for alternative cathode materials, such as manganese (Mn).
Among the leading manganese-based candidates, LiMnO2 is cost-effective, more environmentally friendly with larger theoretical capacity. However, it suffers from poor stability during the charging-recharging cycle.
Grains can break, the structure can degrade rapidly, and manganese can dissolve seriously. The result is severe capacity decay upon cycling, which shortens durability and has hindered the application of LiMnO2 in commercialized lithium-ion batteries.
Jahn-Teller distortion needs to be suppressed
Dr. Liu, an expert in developing cathode materials for lithium-ion batteries, pointed out that the structural instability of manganese-based materials is mainly caused by the Jahn-Teller distortion in their atomic structure. Upon discharging, the Mn-O bond in LiMnO2 will be elongated, which is called Jahn-Teller distortion.
Because the long-range collinear ordering of the Mn3+ ions’ electron orbitals is undisturbed, a strong cooperative Jahn–Teller distortion results, and the atomic structures are easily distorted.
Dr. Liu and his team tackled the problem by applying interfacial engineering in the atomic structure, which disturbs the long-range collinear orbital ordering and suppresses a large scale of Jahn–Teller distortion.
Structural stability enhanced by interfacial engineering
The team prepared the spinel–layered (heterostructured) LiMnO2 via in situ electrochemical conversion from spinel Mn3O4 nanowall arrays. It was found that the electron orbitals are oriented almost perpendicular to each other across the spinel and layered boundaries, resulting in interfacial orbital ordering. “This has caused a disturbance of the long-range collinear orbital ordering, therefore Jahn–Teller distortion is suppressed,” explained Dr. Liu.
Their experiment results showed that Jahn–Teller distortion was effectively suppressed with this heterostructure design. The degrees of distortion of the layered and spinel phases were only 2.5% and 5.5% respectively, while layered LiMnO2 and spinel LiMnO2 showed much greater degrees of distortion of 18% and 16% respectively.
This implies that the heterostructured LiMnO2 exhibited much higher structural stability. The team also found that the volume changes of the spinel and layered phases counteract each other, leading to a minimal total volume change for the material. As a result, the material exhibited superior structural stability.
Long cycle life
“The capacity of the LiCoO2 cathode material currently applied in electronic products like smartphones is about 165mAh/g, while our LiMnO2 cathode material has already achieved a capacity as high as 254.3mAh/g, which is much higher,” Dr. Liu elaborated. “It is difficult for commercial LiCoO2 to maintain 90% capacity even at 1,000 cycles. And our material has achieved high capacity retention of 90.4% after 2,000 cycles, demonstrating a long cycle life,” he added.
They are the first team to deploy interfacial orbital ordering to suppress the Jahn–Teller distortion. This novel method facilitated the development of sustainable Mn-rich cathode materials, in the hope of applying them in sustainable and commercialized energy storage devices.
“We look forward to cost reduction in energy storage technology which can promote the energy structure in moving towards sustainability. Our material can potentially replace the currently commercialized cobalt materials for applications such as electronics and electric cars,” concluded Dr. Liu.
Dr. Liu, Dr. Gu Lin, a researcher from IOPCAS, and Professor Xia Hui from NUST are the corresponding authors of the paper. The co-first authors are postdoc Zhu Xiaohui from NUST and Dr. Meng Fanqi and Dr. Zhang Qinghua from IOPCAS. Other team members included Dr. Zhu He, Postdoctoral Fellow from PHY at CityU, as well as collaborating researchers from NUST, Sun Yat-Sen University, and Argonne National Laboratory in the U.S.
BU-205: Types of Lithium-ion
Become familiar with the many different types of lithium-ion batteries.
Lithium-ion is named for its active materials; the words are either written in full or shortened to their chemical symbols. Because a series of letters and numbers strung together can be hard to remember and even harder to pronounce, battery chemistries are also identified by abbreviated letters.
For example, lithium cobalt oxide, one of the most common Li-ions, has the chemical symbols LiCoO2 and the abbreviation LCO. For reasons of simplicity, the short form Li-cobalt can also be used for this battery. Cobalt is the main active material that gives this battery character. Other Li-ion chemistries are given similar short-form names. This section lists six of the most common Li-ions. All readings are average estimates at time of writing.
Lithium-ion batteries can be designed for optimal capacity with the drawback of limited loading, slow charging and reduced longevity. An industrial battery may have a moderate Ah rating, but the focus is on durability. Specific energy only provides part of battery performance. See also BU-501a: Discharge Characteristics of Li-ion, which compares energy cells with power cells.
Lithium Cobalt Oxide (LiCoO2) — LCO
Its high specific energy makes Li-cobalt the popular choice for mobile phones, laptops and digital cameras. The battery consists of a cobalt oxide cathode and a graphite carbon anode. The cathode has a layered structure and during discharge, lithium ions move from the anode to the cathode. The flow reverses on charge. The drawback of Li-cobalt is a relatively short life span, low thermal stability and limited load capabilities (specific power). Figure 1 illustrates the structure.
Like other cobalt-blended Li-ion, Li-cobalt has a graphite anode that limits the cycle life through a changing solid electrolyte interface (SEI) that thickens on the anode, and through lithium plating during fast charging and charging at low temperature. Newer systems include nickel, manganese and/or aluminum to improve longevity, loading capabilities and cost.
Li-cobalt should not be charged and discharged at a current higher than its C-rating. This means that an 18650 cell with 2,400mAh can only be charged and discharged at 2,400mA. Forcing a fast charge or applying a load higher than 2,400mA causes overheating and undue stress. For optimal fast charge, the manufacturer recommends a C-rate of 0.8C or about 2,000mA. (See BU-402: What is C-rate). The mandatory battery protection circuit limits the charge and discharge rate to a safe level of about 1C for the Energy Cell.
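The C-rate arithmetic in that paragraph is simple enough to spell out (a small illustrative helper, using the cell figures quoted above):

```python
# Current (mA) = C-rate x rated capacity (mAh), per BU-402.
def current_ma(capacity_mah, c_rate):
    return capacity_mah * c_rate

cell_mah = 2400  # the 18650 Energy Cell discussed above
print(current_ma(cell_mah, 1.0))  # 2400.0 mA: the 1C charge/discharge limit
print(current_ma(cell_mah, 0.8))  # 1920.0 mA: the "about 2,000mA" fast charge
```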
The hexagonal spider graphic (Figure 2) summarizes the performance of Li-cobalt in terms of specific energy or capacity that relates to runtime; specific power or the ability to deliver high current; safety; performance at hot and cold temperatures; life span reflecting cycle life and longevity; and cost. Other characteristics of interest not shown in the spider webs are toxicity, fast-charge capabilities, self-discharge and shelf life. (See BU-104c: The Octagon Battery – What makes a Battery a Battery).
Li-cobalt is losing favor to Li-manganese, and especially to NMC and NCA, because of the high cost of cobalt and the improved performance achieved by blending with other active cathode materials. (See the descriptions of NMC and NCA below.)
|Chemistry|Lithium Cobalt Oxide: LiCoO2 cathode (~60% Co), graphite anode. Short form: LCO or Li-cobalt. Since 1991|
|Voltages|3.60V nominal; typical operating range 3.0–4.2V/cell|
|Specific energy (capacity)|150–200Wh/kg. Specialty cells provide up to 240Wh/kg.|
|Charge (C-rate)|0.7–1C, charges to 4.20V (most cells); 3h charge typical. Charge current above 1C shortens battery life.|
|Discharge (C-rate)|1C; 2.50V cut-off. Discharge current above 1C shortens battery life.|
|Cycle life|500–1000, related to depth of discharge, load, temperature|
|Thermal runaway|150°C (302°F). Full charge promotes thermal runaway|
|Applications|Mobile phones, tablets, laptops, cameras|
|Comments|Very high specific energy, limited specific power. Cobalt is expensive. Serves as Energy Cell. Market share has stabilized. Early version; no longer relevant.|

Table 3: Characteristics of lithium cobalt oxide.
Lithium Manganese Oxide (LiMn2O4) — LMO
Li-ion with manganese spinel was first published in the Materials Research Bulletin in 1983. In 1996, Moli Energy commercialized a Li-ion cell with lithium manganese oxide as cathode material. The architecture forms a three-dimensional spinel structure that improves ion flow on the electrode, which results in lower internal resistance and improved current handling. A further advantage of spinel is high thermal stability and enhanced safety, but the cycle and calendar life are limited.
Low internal cell resistance enables fast charging and high-current discharging. In an 18650 package, Li-manganese can be discharged at currents of 20–30A with moderate heat buildup. It is also possible to apply one-second load pulses of up to 50A. A continuous high load at this current would cause heat buildup, and the cell temperature must not exceed 80°C (176°F). Li-manganese is used for power tools, medical instruments, as well as hybrid and electric vehicles.
Figure 4 illustrates the formation of a three-dimensional crystalline framework on the cathode of a Li-manganese battery. This spinel structure, which is usually composed of diamond shapes connected into a lattice, appears after initial formation.
Li-manganese has a capacity that is roughly one-third lower than Li-cobalt. Design flexibility allows engineers to maximize the battery for either optimal longevity (life span), maximum load current (specific power) or high capacity (specific energy). For example, the long-life version in the 18650 cell has a moderate capacity of only 1,100mAh; the high-capacity version is 1,500mAh.
Figure 5 shows the spider web of a typical Li-manganese battery. The characteristics appear marginal but newer designs have improved in terms of specific power, safety and life span. Pure Li-manganese batteries are no longer common today; they may only be used for special applications.
Most Li-manganese batteries blend with lithium nickel manganese cobalt oxide (NMC) to improve the specific energy and prolong the life span. This combination brings out the best in each system, and the LMO (NMC) is chosen for most electric vehicles, such as the Nissan Leaf, Chevy Volt and BMW i3. The LMO part of the battery, which can be about 30 percent, provides high current boost on acceleration; the NMC part gives the long driving range.
Li-ion research gravitates heavily towards combining Li-manganese with cobalt, nickel, manganese and/or aluminum as active cathode material. In some architecture, a small amount of silicon is added to the anode. This provides a 25 percent capacity boost; however, the gain is commonly connected with a shorter cycle life as silicon grows and shrinks with charge and discharge, causing mechanical stress.
These three active metals, as well as the silicon enhancement can conveniently be chosen to enhance the specific energy (capacity), specific power (load capability) or longevity. While consumer batteries go for high capacity, industrial applications require battery systems that have good loading capabilities, deliver a long life and provide safe and dependable service.
|Chemistry|Lithium Manganese Oxide: LiMn2O4 cathode, graphite anode. Short form: LMO or Li-manganese (spinel structure). Since 1996|
|Voltages|3.70V (3.80V) nominal; typical operating range 3.0–4.2V/cell|
|Specific energy (capacity)|100–150Wh/kg|
|Charge (C-rate)|0.7–1C typical, 3C maximum, charges to 4.20V (most cells)|
|Discharge (C-rate)|1C; 10C possible with some cells, 30C pulse (5s), 2.50V cut-off|
|Cycle life|300–700 (related to depth of discharge, temperature)|
|Thermal runaway|250°C (482°F) typical. High charge promotes thermal runaway|
|Applications|Power tools, medical devices, electric powertrains|
|Comments|High power but less capacity; safer than Li-cobalt; commonly mixed with NMC to improve performance. Less relevant now; limited growth potential.|

Table 6: Characteristics of lithium manganese oxide.
Lithium Nickel Manganese Cobalt Oxide (LiNiMnCoO2) — NMC
One of the most successful Li-ion systems is a cathode combination of nickel-manganese-cobalt (NMC). Similar to Li-manganese, these systems can be tailored to serve as Energy Cells or Power Cells. For example, NMC in an 18650 cell for moderate load condition has a capacity of about 2,800mAh and can deliver 4A to 5A; NMC in the same cell optimized for specific power has a capacity of only about 2,000mAh but delivers a continuous discharge current of 20A. A silicon-based anode will go to 4,000mAh and higher but at reduced loading capability and shorter cycle life. Silicon added to graphite has the drawback that the anode grows and shrinks with charge and discharge, making the cell mechanically unstable.
The secret of NMC lies in combining nickel and manganese. An analogy is table salt, in which the main ingredients, sodium and chloride, are toxic on their own but together serve as seasoning salt and food preserver. Nickel is known for its high specific energy but poor stability; manganese has the benefit of forming a spinel structure to achieve low internal resistance but offers a low specific energy. Combining the metals brings out each other’s strengths.
NMC is the battery of choice for power tools, e-bikes and other electric powertrains. The cathode combination is typically one-third nickel, one-third manganese and one-third cobalt, also known as 1-1-1. Cobalt is expensive and in limited supply, so battery manufacturers are reducing the cobalt content with some compromise in performance. A successful combination is NCM532 with 5 parts nickel, 3 parts cobalt and 2 parts manganese. Other combinations are NMC622 and NMC811. Cobalt stabilizes nickel, a high-energy active material.
New electrolytes and additives enable charging to 4.4V/cell and higher to boost capacity. Figure 7 demonstrates the characteristics of the NMC.
There is a move towards NMC-blended Li-ion as the system can be built economically and it achieves a good performance. The three active materials of nickel, manganese and cobalt can easily be blended to suit a wide range of applications for automotive and energy storage systems (ESS) that need frequent cycling. The NMC family is growing in its diversity.
|Chemistry|Lithium Nickel Manganese Cobalt Oxide: LiNiMnCoO2 cathode, graphite anode. Short form: NMC (NCM, CMN, CNM, MNC, MCN are similar with different metal combinations). Since 2008|
|Voltages|3.60V, 3.70V nominal; typical operating range 3.0–4.2V/cell, or higher|
|Specific energy (capacity)|150–220Wh/kg|
|Charge (C-rate)|0.7–1C, charges to 4.20V, some go to 4.30V; 3h charge typical. Charge current above 1C shortens battery life.|
|Discharge (C-rate)|1C; 2C possible on some cells; 2.50V cut-off|
|Cycle life|1000–2000 (related to depth of discharge, temperature)|
|Thermal runaway|210°C (410°F) typical. High charge promotes thermal runaway|
|Cost|~$420 per kWh (Source: RWTH, Aachen)|
|Applications|E-bikes, medical devices, EVs, industrial|
|Comments|Provides high capacity and high power. Serves as Hybrid Cell. Favorite chemistry for many uses; market share is increasing. Leading system; dominant cathode chemistry.|

Table 8: Characteristics of lithium nickel manganese cobalt oxide (NMC).
Lithium Iron Phosphate (LiFePO4) — LFP
In 1996, the University of Texas (and other contributors) discovered phosphate as cathode material for rechargeable lithium batteries. Li-phosphate offers good electrochemical performance with low resistance. This is made possible with nano-scale phosphate cathode material. The key benefits are high current rating and long cycle life, besides good thermal stability, enhanced safety and tolerance if abused.
Li-phosphate is more tolerant of full charge conditions and is less stressed than other lithium-ion systems if kept at high voltage for a prolonged time. (See BU-808: How to Prolong Lithium-based Batteries). As a trade-off, its lower nominal voltage of 3.2V/cell reduces the specific energy below that of cobalt-blended lithium-ion. As with most batteries, cold temperature reduces performance and elevated storage temperature shortens the service life, and Li-phosphate is no exception. Li-phosphate has a higher self-discharge than other Li-ion batteries, which can cause balancing issues with aging. This can be mitigated by buying high-quality cells and/or using sophisticated control electronics, both of which increase the cost of the pack. Cleanliness in manufacturing is important for longevity: there is no tolerance for moisture, or the battery may deliver only 50 cycles. Figure 9 summarizes the attributes of Li-phosphate.
Li-phosphate is often used to replace the lead acid starter battery. Four cells in series produce 12.80V, a similar voltage to six 2V lead acid cells in series. Vehicles charge lead acid to 14.40V (2.40V/cell) and maintain a topping charge. Topping charge is applied to maintain full charge level and prevent sulfation on lead acid batteries.
With four Li-phosphate cells in series, each cell tops at 3.60V, which is the correct full-charge voltage. At this point, the charge should be disconnected but the topping charge continues while driving. Li-phosphate is tolerant to some overcharge; however, keeping the voltage at 14.40V for a prolonged time, as most vehicles do on a long road trip, could stress Li-phosphate. Time will tell how durable Li-Phosphate will be as a lead acid replacement with a regular vehicle charging system. Cold temperature also reduces performance of Li-ion and this could affect the cranking ability in extreme cases.
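The series-string arithmetic behind those figures, spelled out (all values taken directly from the passage above):

```python
# Pack voltage = per-cell voltage x number of cells in series.
def pack_v(cell_v, cells):
    return cell_v * cells

for label, v, n in [
    ("LFP nominal (4 cells)", 3.20, 4),
    ("lead acid nominal (6 cells)", 2.00, 6),
    ("lead acid charge (6 cells)", 2.40, 6),
    ("LFP full charge (4 cells)", 3.60, 4),
]:
    print(f"{label}: {pack_v(v, n):.1f}V")
```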
|Chemistry|Lithium Iron Phosphate: LiFePO4 cathode, graphite anode. Short form: LFP or Li-phosphate (LIP is also common). Since 1996|
|Voltages|3.20, 3.30V nominal; typical operating range 2.5–3.65V/cell|
|Specific energy (capacity)|90–120Wh/kg|
|Charge (C-rate)|1C typical, charges to 3.65V; 3h charge time typical|
|Discharge (C-rate)|1C, 25C on some cells; 40A pulse (2s); 2.50V cut-off (lower than 2V causes damage)|
|Cycle life|2000 and higher (related to depth of discharge, temperature)|
|Thermal runaway|270°C (518°F). Very safe battery even if fully charged|
|Cost|~$580 per kWh (Source: RWTH, Aachen)|
|Applications|Portable and stationary applications needing high load currents and endurance|
|Comments|Very flat voltage discharge curve but low capacity. One of the safest Li-ions. Used for special markets. Elevated self-discharge. Used primarily for energy storage; moderate growth.|

Table 10: Characteristics of lithium iron phosphate.
See Lithium Manganese Iron Phosphate (LMFP) for manganese-enhanced Li-phosphate.
Lithium Nickel Cobalt Aluminum Oxide (LiNiCoAlO2) — NCA
Lithium nickel cobalt aluminum oxide battery, or NCA, has been around since 1999 for special applications. It shares similarities with NMC by offering high specific energy, reasonably good specific power and a long life span. Less flattering are safety and cost. Figure 11 summarizes the six key characteristics. NCA is a further development of lithium nickel oxide; adding aluminum gives the chemistry greater stability.
|Chemistry|Lithium Nickel Cobalt Aluminum Oxide: LiNiCoAlO2 cathode (~9% Co), graphite anode. Short form: NCA or Li-aluminum. Since 1999|
|Voltages|3.60V nominal; typical operating range 3.0–4.2V/cell|
|Specific energy (capacity)|200–260Wh/kg; 300Wh/kg predictable|
|Charge (C-rate)|0.7C, charges to 4.20V (most cells), 3h charge typical, fast charge possible with some cells|
|Discharge (C-rate)|1C typical; 3.00V cut-off; high discharge rate shortens battery life|
|Cycle life|500 (related to depth of discharge, temperature)|
|Thermal runaway|150°C (302°F) typical. High charge promotes thermal runaway|
|Cost|~$350 per kWh (Source: RWTH, Aachen)|
|Applications|Medical devices, industrial, electric powertrain (Tesla)|
|Comments|Shares similarities with Li-cobalt. Serves as Energy Cell. Mainly used by Panasonic and Tesla; growth potential.|

Table 12: Characteristics of lithium nickel cobalt aluminum oxide.
Lithium Titanate (Li2TiO3) — LTO
Batteries with lithium titanate anodes have been known since the 1980s. Li-titanate replaces the graphite in the anode of a typical lithium-ion battery and the material forms into a spinel structure. The cathode can be lithium manganese oxide or NMC. Li-titanate has a nominal cell voltage of 2.40V, can be fast charged and delivers a high discharge current of 10C, or 10 times the rated capacity. The cycle count is said to be higher than that of a regular Li-ion. Li-titanate is safe, has excellent low-temperature discharge characteristics and obtains a capacity of 80 percent at –30°C (–22°F).
LTO (commonly Li4Ti5O12) has advantages over the conventional cobalt-blended Li-ion with graphite anode by attaining zero-strain property, no SEI film formation and no lithium plating when fast charging and charging at low temperature. Thermal stability under high temperature is also better than other Li-ion systems; however, the battery is expensive. At only 65Wh/kg, the specific energy is low, rivalling that of NiCd. Li-titanate charges to 2.80V/cell, and the end of discharge is 1.80V/cell. Figure 13 illustrates the characteristics of the Li-titanate battery. Typical uses are electric powertrains, UPS and solar-powered street lighting.
|Chemistry|Lithium Titanate: cathode can be lithium manganese oxide or NMC; Li2TiO3 (titanate) anode. Short form: LTO or Li-titanate. Commercially available since about 2008|
|Voltages|2.40V nominal; typical operating range 1.8–2.85V/cell|
|Specific energy (capacity)|50–80Wh/kg|
|Charge (C-rate)|1C typical; 5C maximum, charges to 2.85V|
|Discharge (C-rate)|10C possible, 30C 5s pulse; 1.80V cut-off on LCO/LTO|
|Thermal runaway|One of the safest Li-ion batteries|
|Cost|~$1,005 per kWh (Source: RWTH, Aachen)|
|Applications|UPS, electric powertrain (Mitsubishi i-MiEV, Honda Fit EV), solar-powered street lighting|
|Comments|Long life, fast charge, wide temperature range but low specific energy and expensive. Among the safest Li-ion batteries. Ability to ultra-fast charge; high cost limits it to special applications.|

Table 14: Characteristics of lithium titanate.
Other experimental lithium systems include:
- Solid-state Li-ion: high specific energy but poor loading and safety.
- Lithium-sulfur: high specific energy but poor cycle life and poor loading.
- Lithium-air: high specific energy but poor loading; needs clean air to breathe and has a short life.
Figure 15 compares the specific energy of lead-, nickel- and lithium-based systems. While Li-aluminum (NCA) is the clear winner by storing more capacity than other systems, this only applies to specific energy. In terms of specific power and thermal stability, Li-manganese (LMO) and Li-phosphate (LFP) are superior. Li-titanate (LTO) may have low capacity but this chemistry outlives most other batteries in terms of life span and also has the best cold temperature performance. Moving towards the electric powertrain, safety and cycle life will gain dominance over capacity. (LCO stands for Li-cobalt, the original Li-ion.)
Figure 15: Typical specific energy of lead-, nickel- and lithium-based batteries.
NCA enjoys the highest specific energy; however, manganese and phosphate are superior in terms of specific power and thermal stability. Li-titanate has the best life span.
Courtesy of Cadex
Last updated: 2021-02-11
More information: Xiaohui Zhu et al. LiMnO2 cathode stabilized by interfacial orbital ordering for sustainable lithium-ion batteries, Nature Sustainability (2020). DOI: 10.1038/s41893-020-00660-9 | <urn:uuid:4ad001ad-80d5-4e64-9daf-1b41d8e8f9b8> | CC-MAIN-2022-40 | https://debuglies.com/2021/04/03/a-new-limno2-cathode-can-doubling-the-charging-recharging-cycle-of-lithium-batteries/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00706.warc.gz | en | 0.892033 | 6,228 | 3.28125 | 3 |
Achieving Cutting Edge Production – A Guide to Machine Learning In Manufacturing | SPONSORED
Machine learning (ML) is becoming even more prevalent as manufacturers continue their journey towards Industry 4.0 and digitally transform their factories. It has the capacity to make production more efficient by increasing output while maintaining quality standards.
ML and industrial automation in manufacturing
ML and industrial automation in manufacturing promise to overcome many of the industry’s most pressing challenges—including diminishing contribution margins and an expected skilled labor shortage. With continued advances in algorithms, computing power, and data availability, machine learning use cases in manufacturing are quickly emerging.
The Role of Machine Learning in Manufacturing
As industrial automation plays an ever larger role in manufacturing, the deep insights machine learning can offer are crucial for production optimization. But before manufacturers can introduce a machine learning platform, they must first understand how these solutions operate in a production environment, and how to choose the right one for their needs.
Download this e-book sponsored by Oden Technologies to find out more about:
- Manufacturing before ML
- What is ML in manufacturing?
- How ML is revolutionizing the manufacturing industry
- How ML works
- Evaluating ML solutions
Willem Sundblad is the CEO and Co-founder of Oden Technologies, a company that empowers manufacturers to make more, waste less and innovate faster through machine learning and applied analytics. Willem is a leading voice in Industrial IoT and is pioneering the use of real-time and predictive analytic tools that uncover untapped value. Recently named one of Forbes 30 Under 30, Sundblad is working to transform the manufacturing industry by digitizing, analyzing and perfecting peak factory performance. | <urn:uuid:645e0c0f-85b3-47c9-8773-ebadf494f149> | CC-MAIN-2022-40 | https://www.iiot-world.com/industrial-iot/connected-industry/achieving-cutting-edge-production-a-guide-to-machine-learning-in-manufacturing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00706.warc.gz | en | 0.925442 | 339 | 2.71875 | 3 |
It’s worth briefly considering the differences between a standard and a framework to set the context for this topic because each has different implications for use.
‘Standards’ are broadly defined as rules or characteristics established by a recognized body that provide for common and repeated use. The use of standards is typically mandated by policy. In many cases where an organization is required to adhere to (or ‘implement’) a standard, the requirement generally falls on the organization to consider the standard in its entirety, rather than just portions of the standard.
On the other hand, a ‘framework’ can be defined as a conceptual model consisting of defined components and clear relationships between those components. A framework should be (a) flexible, (b) allow for the addition of new content within the scope of the model, and (c) support the integration of related standards, frameworks, and regulations (this will be addressed in a later paper in this series).
So, at one level the differences are clear – a framework is optional while a standard will usually be mandated by policy. Importantly, a framework can accommodate certain changes in its basic ‘shape’ (or components), and will also allow for changes to the content of these components. There are also ways in which standards and frameworks are similar, such as the industry they are relevant to and the subject matter to which they apply.
To provide a little more context, the following are standards: ISO27001 and ISO31000. These next examples, on the other hand, are frameworks: SABSA, ITIL, COBIT 2019, TOGAF (now referred to as a standard by the Open Group), and the NIST Cybersecurity Framework.
To end this outline of standards and frameworks usage, it’s worth considering what a ‘requirement’ is under the law. For example, the GDPR (Article 32) requires in certain cases that organizations ‘… shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk …’. An organization’s Data Protection policy is likely to define specific behavioral requirements or rules that must be met, and it will typically reference standards, procedures, and guidelines. Here we can see the difference between following mandated organizational requirements and choosing which framework to implement.
As we’re focussing on the use of COBIT 2019, we’ll be looking at different ways in which it can be used to suit a particular organization. For the avoidance of doubt, it’s worth repeating how COBIT 2019 defines itself as ‘a framework for the governance and management of enterprise information and technology aimed at the whole enterprise’.
COBIT 2019 has been constructed so that it can be used to understand the whole organization, not just as per our earlier definition, but also in terms of providing a best-practice approach in which relevant elements can be considered for use.
Importantly, we should recognize that it is very unlikely that organizations are completely devoid of any governance structures and processes regarding their information and technology. This brings us to a basic and recognized rule of thumb for COBIT 2019: that we should recognize what is already in place and adopt what has been, or currently is, successful, while at the same time identifying where we can improve areas by referencing and using COBIT 2019.
It’s worth saying that trying to implement everything that is available from the COBIT 2019 product family is probably not the best approach to start with, simply because you’ll end up focussing on the framework rather than the benefits that it will bring to your organization and stakeholders. In fact, COBIT 2019 itself recognizes this and suggests that, when looking at implementation, we should not consider too prescriptive an approach and instead use it as ‘a guide to avoid pitfalls, leverage the latest good practices, and assist in the creation of successful governance and management outcomes over time’.
Firstly, you should recognize that you’ll most likely need to design and implement a custom governance solution (based on COBIT 2019) simply because there is no one-size-fits-all approach. Equally, you must take into account your organization’s processes, structures, information flows, behaviors, culture, technologies, other frameworks, and so on. This will provide a unique view of your organization while still recognizing certain common requirements like threat management, changes in regulations or industries, etc. COBIT 2019 will help you understand these factors in terms of specific constructs called ‘design factors’ (not discussed in this paper).
The next step requires that you consider the COBIT 2019 Core Model, which contains some 40 generic governance and management objectives, related processes, and other very useful information including sample metrics and maturity targets. The Core Model should provide the basis for your understanding of where COBIT 2019 can help, as well as how its structure can help those just starting out to recognize and integrate other elements, guidance, and references to separate standards and frameworks.
Once you’ve understood the various elements of COBIT 2019, in particular the Core Model and the 11 Design Factors, and their impact (very useful for customizing COBIT 2019), you’ll then need to consider the four-phase governance system design workflow, which requires that you:
· Understand the enterprise context and strategy
· Determine the initial scope of the governance system
· Refine the scope of the governance system
· Finish the governance system design
This workflow is designed with 17 sub-steps that provide recommendations to help you prioritize what governance and management objectives you need to achieve, the target capability level that you should aim for in each area, and the customization or variants of each component that will need to be taken into account.
Once you’ve completed the design and prioritization stages, you would then move to the more detailed COBIT 2019 Implementation Guide. This provides insight into improvement initiatives from three perspectives:
- Program management
- Change enablement
- Continual improvement across seven phases
We are reminded in the framework that, to establish and use COBIT 2019 effectively, we should always consider the purpose of doing so, which is to establish normal business practices and a sustainable approach to governing and managing your information and technology.
To support all your efforts in establishing such a framework, you will also have available to you a COBIT 2019 Governance System Design Toolkit from ISACA, which helps during the four-step workflow process. This detailed toolkit is used to help understand the amount of detail required for the design of your governance system.
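As a toy illustration of what prioritizing objectives from design factors can mean in practice (this is not ISACA’s toolkit logic; the factor ratings and sensitivity weights below are invented, though the objective names are real COBIT 2019 objectives):

```python
# Crude weighted-scoring sketch: rate how strongly each design factor
# applies, weight each governance/management objective by its
# sensitivity to those factors, and rank the results.
design_factor_ratings = {  # how strongly each factor applies (0-1)
    "risk profile": 0.9,
    "compliance requirements": 0.7,
    "role of IT": 0.5,
}

objective_sensitivity = {  # invented sensitivities per objective
    "EDM03 Ensured Risk Optimization": {"risk profile": 1.0, "compliance requirements": 0.6},
    "APO12 Managed Risk": {"risk profile": 0.9, "compliance requirements": 0.4},
    "BAI06 Managed IT Changes": {"role of IT": 0.8},
}

def priority(objective):
    weights = objective_sensitivity[objective]
    return sum(design_factor_ratings[f] * w for f, w in weights.items())

for obj in sorted(objective_sensitivity, key=priority, reverse=True):
    print(f"{priority(obj):.2f}  {obj}")
```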
One last gem that COBIT 2019 provides is the Goals Cascade, which outlines a simple yet powerful technique for understanding how stakeholder needs are transformed into actionable strategies. Generic Enterprise Goals and Alignment Goals are provided (with mappings and a set of sample metrics). These two types of goals can be used to help prepare your organization for using COBIT 2019 effectively (remember that, as with most things in the framework, the goals are extensible and customizable to reflect the needs and requirements of your organization).
As with all governance and management, this will be specific to your organization. Considering the existing activities and structures that may be working well in your organization you should use COBIT 2019 to help you identify the most effective route through without trying to be 100% compliant with the framework itself. COBIT 2019 is a framework that should be tailored for your organization to address information and technology governance and management strategy. Just as importantly, COBIT 2019 will also help you sustain these over time.
Please note that we’re focussing on standards and framework to set the context for using COBIT 2019 and that a more complete discussion would include related concepts such as policies, guidelines, procedures, principles, regulations, etc.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
We’re using the term ‘areas’ here to mean a portion of or part of, because COBIT 2019 defines and uses the term ‘components’ in a specific way
See Chapter 4, COBIT 2019 Design Guide | <urn:uuid:f695f378-ed6f-46c8-a476-a474e0bbc181> | CC-MAIN-2022-40 | https://blog.goodelearning.com/subject-areas/cobit/how-do-i-use-cobit-2019-in-a-way-that-suits-my-company/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00706.warc.gz | en | 0.931714 | 1,747 | 2.609375 | 3 |
Here we provide a brief overview and introduction to the balanced scorecard concept.
Overview of the Balanced Scorecard
The Balanced Scorecard (BSC) is a strategic performance management framework that allows organisations to identify, manage and measure its strategic objectives. The concept was initially introduced by Robert Kaplan and David Norton in a Harvard Business Review Article in 1992 and has since then been voted as one of the most influential business ideas of the past 75 years.
Like most good ideas, the concept of the Balanced Scorecard is very simple. Kaplan and Norton identified four generic perspectives that cover the main strategic focus areas of a company.
The four Balanced Scorecard Perspectives
The idea was to use the Balanced Scorecard as a template for designing objectives and measures in each of the following perspectives:
- The Financial Perspective covers the financial objectives of an organisation and allows managers to track financial success and shareholder value
- The Internal Process Perspective covers internal operational goals and outlines the key processes necessary to deliver the customer objectives
- Auditing and benchmarking your performance management approaches
- The Learning and Growth Perspective covers the intangible drivers of future success such as human capital, organisational capital and information capital including skills, training, organisational culture, leadership, systems and databases
Balanced Scorecard moves from four boxes to strategy map
Initially it was suggested to visualise the Balanced Scorecard in a four-box model Many organisations have created management dashboards with these four perspectives to provide comprehensive at a glance views of performance. However, this classic four box model is now outdated and has been replaced by a Strategy Map view.
A Strategy Map places the four Balanced Scorecard perspectives into a causal hierarchy to show that the objectives support each other and that delivering the right performance in the lower perspectives will help to achieve the objectives in the upper perspectives. For example the objectives in the Learning and Growth Perspective underpin the objectives in the Internal Process Perspective, which in turn underpin the objectives in the Customer Perspectives. Delivering the customer objectives should then lead to the achievement of the financial objectives in the Financial Perspective. This causal logic is one of the most important elements of modern Balanced Scorecards. It allows companies to create a truly integrated set of strategic objectives. The danger with the initial four-box model was that companies design a number of objectives for each perspective without ever linking them. This can lead to silo activities as well as a strategy that is not cohesive or integrated. | <urn:uuid:7dac9d6a-3b71-476f-90bf-e0434f0eea18> | CC-MAIN-2022-40 | https://bernardmarr.com/balanced-scorecard-an-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00706.warc.gz | en | 0.929937 | 489 | 2.90625 | 3 |
What is Data Loss Prevention?
The loss of sensitive or valuable data is something any organization, regardless of size, industry, or geography, must avoid. Data privacy and data protection laws such as CCPA, GDPR, HIPAA or SOX, among others, require organizations to maintain secure environments and always apply the appropriate level of protection to data, no matter where it is located or how it is shared. Failure to keep data secure can result in a fine for non-compliance, which negatively impacts an organization’s bottom line and, when the data breach makes headline news, its brand reputation too.
When you consider the large volumes and different types of data an organization generates, stores, sends, and receives daily, and the complexity of today’s hybrid IT environments, the probability of a data breach is very high. Add to this the many different threats to data, and data loss prevention becomes even more of a challenge. Sensitive or valuable data can be leaked accidentally or targeted by malicious actors looking to exfiltrate it for monetary gain. Threats can come from within the organization (the insider threat) or from the outside in the form of ransomware and other cyber-attacks.
To avoid data leakage or data exfiltration, organizations apply Data Loss Prevention (DLP) practices and tools to safeguard their business-critical data. DLP focuses on minimizing the risk to the organization by detecting and preventing unauthorized disclosure before the data breach occurs.
Best Practices for Implementing DLP
Whether it’s to protect sensitive data or safeguard intellectual property, putting DLP best practices in place helps organizations maintain visibility and control of their data. People, processes, and technology all play a key role in how data loss prevention activities are applied across the organization.
To minimize the risk of a data breach, everyone – from board members down to individual employees – has a responsibility to protect data within an organization. With clearly defined processes in place, data is protected while in use, in motion and at rest. While DLP software solutions monitor and consistently enforce policies across the network, at endpoints, and in the cloud.
How do DLP Software Solutions Work?
When looking at how to prevent data loss, technology is often the last line of defense. Its role is to apply the organization’s data security policies consistently over all egress points, identify possible violations, and take the appropriate remedial actions. Traditional DLP solutions are inflexible in the way they operate, making them difficult to configure and implement. Typically, the solutions “stop and block” any action deemed to have risk implications, often incorrectly mistaking legitimate daily actions as an exfiltration or data loss threat. This generates large numbers of “false positives” that can easily overwhelm the IT security staff who need to action the alerts and frustrate users who can’t work productively.
How does Clearswift DLP Differ from Other Solutions?
UNIQUE ADAPTIVE FEATURES
MINIMIZE FALSE POSITIVES
Explore Clearswift's Adaptive Data Loss Prevention Solutions
More than Stop and Block
The DLP solution from Clearswift provides much more than just stop and block functionality. It minimizes the risk of accidental data loss, data exfiltration, and cyber-attacks, to keep sensitive and valuable data safe, while reducing impact on day-to-day operations. It does this by intelligently inspecting structured and unstructured data within email messages, files transferred to and from the web or cloud, and at endpoints, making sure the appropriate security policy is always automatically applied.
The solution understands both content and context and adapts its behavior accordingly. Policies can be set so that certain individuals, teams, or departments have more flexibility than others. For example:
- The CEO is authorized to send sensitive data to the CFO, so the data is automatically encrypted to protect it while in motion.
- When the HR team sends sensitive data to an unknown third party, the solution recognizes that this could be an unauthorized transfer. But rather than block the communication, it automatically removes the sensitive data from the message, allowing a safe version to continue unhindered.
- The user is alerted to the fact that a policy violation occurred, but business is not interrupted. This significantly reduces the numbers of false positives that occur and removes any risk.
This automated process is made possible by a unique technology called adaptive redaction.
What is Adaptive Redaction?
Adaptive Redaction technology sets Clearwift apart from other vendors. It occurs during the content inspection process, when in real time, a Deep Content Inspection engine deconstructs files down to their constituent parts. If it identifies sensitive or valuable information or any cyber threats, it automatically removes, deletes, or sanitizes the files as per the rules set by the organization. The solution then reconstructs the files, allowing them to continue without delay. The inspection capability is not limited by zip/encryption, file size, analysis timing delays or multiple embedded document layers.
The three main options for Adaptive Redaction
To keep organizations compliant, sensitive and valuable data is automatically removed from messages and documents before they are transferred, sent, or received. Optical Character Recognition (OCR) functionality extracts text from image-based files.
To prevent data harvesting, hidden metadata such as comments and revision history is automatically removed from documents, along with author, user, and server names. Anti-steganography technology wipes images clean too.
To stop ransomware and other Advanced Persistent Threats from infecting the network, files are sanitized of active malicious content such as embedded macros and scripts, that would trigger when a document is opened.
Building an Effective Data Loss Prevention Strategy
There are steps organizations can take to build and implement an effective DLP strategy. First, identify the types of data that need protecting. This might be data based on regulation (GDPR, HIPAA), personal data (PII or PCI), or other valuable, business-critical data. Consider whether data needs to be labelled according to its classification, where it is stored (on-premise or in the cloud), how it is shared (email, web or managed file transfer) and who needs access to it. These considerations help determine which DLP solution is right for your organization.
Next, design policies that keep the data secure. In monitor mode, the Clearswift solution allows organizations to measure the effectiveness of DLP policies before they are implemented, refined, and finally deployed. Default policies configured for industry regulations and support for SIEM solutions, make deployment and compliance a quick and easy process. Finally, even with risks minimized, it is still important to ensure that everyone knows what to do in the event of a data breach.
Enhancing Data Loss Prevention in Office 365
Microsoft 365 (formerly Office 365) is fast becoming the collaboration tool of choice for many corporations. Leveraging the cloud, it allows professionals to create and communicate with ease. Microsoft 365 offers multiple tiers of capability, including provisions for data loss prevention – but are these features comprehensive enough to secure data to satisfy the strictest regulatory requirements?
Adaptive DLP from Clearswift working alongside Microsoft 365 deployments, makes the most of the cloud-centric infrastructure, but with zero compromise on security. Benefit from greater DLP controls, protection from incoming cyber threats, and more flexibility when implementing policies.
Using DLP Solutions Alongside Data Classification and MFT
To provide seamless protection for data from the time it is created until the time it reaches its destination, DLP solutions can be deployed alongside data classification tools and software for managed file transfers (MFT).
- During the content inspection process, adaptive DLP recognizes the different data classification labels and automatically enforces the appropriate policy.
- It also ensures data classification labelling remains in place as the data moves throughout the network or leaves the organization.
- Files being sent or received securely through managed transfer benefit from an additional layer of data loss prevention and protection from cybersecurity threats
Adaptive DLP Solutions from Clearswift
Covering data in use, in motion and at rest, the Clearswift solutions have in-built DLP capabilities to help secure and protect structured and unstructured data. This integrated DLP functionality allows us to offer protection against unwanted data loss and acquisition through all our Secure Email and Web Gateway and Endpoint products.
Request a Live Demo
Incorporating data loss prevention into your cybersecurity portfolio is crucial. Talk to one of our experts to discover the DLP solution that's right for your organization. | <urn:uuid:9b3252ba-699c-44f2-b102-8fd0b5e4b4b9> | CC-MAIN-2022-40 | https://www.clearswift.com/solutions/adaptive-data-loss-prevention | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00106.warc.gz | en | 0.913046 | 1,791 | 2.71875 | 3 |
Authoritative name servers store DNS record information –usually a DNS hosting provider or domain registrar. Recursive name servers are the “middlemen” between authoritative servers and end-users because they have to recurse up the DNS tree to reach the name servers authoritative for storing the domain’s records.
Recursive name servers are commonly referred to as resolving servers, and usually are your ISP (Internet Service Provider)or specialty resolving DNS providers. For example, Google runs their own public recursive DNS servers.
These name servers can also store caches (pronounced like cash) of DNS record information, so most queries for popular domains never end up reaching the authoritative name servers.
If the domain’s records are not cached, then the resolving name server will recurse up the DNS tree to find the server that is authoritative for the domain’s record.
The DNS Tree
Name servers store DNS records which are files that say “this domain” maps to “this IP address”. So is there a room somewhere that has all the nameservers and DNS records for every site on the Internet? No… that would be ridiculous.
They are actually distributed all around the world. These nameservers are called the root nameservers and instead of storing every domain ever, they store the locations of the TLD (top level domains).
TLD’s are the three characters like .com that end a domain name. Each TLD has their own set of nameservers that store the information that says who is authoritative for storing the DNS records for that domain.
The authoritative nameserver is typically the DNS provider or the DNS registrar (like GoDaddy that offers both DNS registration and hosting). And here we can find the DNS record that maps example.com to the IP address 127.66.122.88.
Also published on Medium. | <urn:uuid:62f4dc28-3430-411a-9398-f99ff201f55c> | CC-MAIN-2022-40 | https://social.dnsmadeeasy.com/blog/authoritative-vs-recursive-dns-servers-whats-the-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00106.warc.gz | en | 0.906354 | 386 | 2.890625 | 3 |
Session Initiation Protocol (SIP)
September 17, 2018
Session Initiation Protocol (SIP) is used at the application-layer in telecommunication networks to control, establish, modify and end multimedia sessions. Sessions are exchanges of data between a group of participants. With SIP, participants and media can be added and removed to and from existing sessions.
SIP is used across an array of multimedia services including gaming and video and particularly alongside VoIP services as it provides signalling functions to it.
How Session Initiation Protocol works
SIP works by allowing the communicating devices to send and receive messages. The messages carry a wide range of information which help to identify the session, control timing and describe the media being used. A typical message contains:
- Protocol information (such as the version)
- Session information (name etc)
- Participant information (email, phone etc)
- Bandwidth information
- Encryption information
- Time description (active and repeat time)
- Media description (media name, title, address ec)
- Media bandwidth information
- Media encryption key
Functions of SIP
There are a number of key features, including:
Name Translation and User Location: By translating the address of users to names, SIP can reach the called party at any location.
Participation Management: Participants are able to make or cancel connections to other users during a call, as well as being transferred or placed on hold.
Feature Changes: SIP allows users to change the characteristics of a call during the call. For example, video can be enabled and disabled.
Why use SIP?
SIP allows users to communicate using computers and/or mobile devices over the internet. It allows users to benefit from the use of VoIP (voice over IP) services, which offers a rich communication experience.
Costs are also slashed as calls between SIP users and VoIP users are free, worldwide.
SIP as a protocol is also very powerful and efficient in many ways. Many organizations use SIP for their internal and external communication, centred around a PBX (private branch exchange).
Get all of our latest news sent to your inbox each month. | <urn:uuid:8abe39a7-3e27-41a9-820d-e8c0618a93d6> | CC-MAIN-2022-40 | https://www.carritech.com/news/session-initiation-protocol-sip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00106.warc.gz | en | 0.90749 | 454 | 3.171875 | 3 |
What is NB-IoT?
NB-IoT is a new, cellular Internet of Things (IoT) technology standardized by 3GPP in Release 13. It builds upon existing LTE networks to enable low-power, wide-area (LPWA) connectivity for a variety of IoT devices and applications.
NB-IoT is designed to support a wide range of IoT devices and applications with low power consumption and low cost. It uses a slotted mode that allows devices to sleep most of the time, and only wake up to receive or transmit data. This makes it ideal for applications that require long battery life, such as smart meters, parking meters, and streetlights.NB-IoT also uses a narrowband spectrum, which makes it more resistant to interference than other cellular technologies. This is important for applications that need to operate in crowded environments, such as cities.
NB-IoT is already being deployed by mobile operators around the world. In the United States, AT&T and Verizon have both launched NB-IoT networks. In Europe, Vodafone, Deutsche Telekom, and Orange have all launched NB-IoT networks. | <urn:uuid:dd02a970-a05a-46d8-9896-c3e95c2e2b9a> | CC-MAIN-2022-40 | https://inseego.com/resources/5g-glossary/what-is-nb-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00106.warc.gz | en | 0.947345 | 245 | 2.84375 | 3 |
User authentication is the foundation of ensuring digital identities and information are protected. Your digital identity is who you are and what you do and store online. Therefore, to be able to access that information and those sites, you will need to validate – authenticate you are who you are.
User Authentication Definition
“An act, process, or method of showing something (such as an identity, a piece of art, or a financial transaction) to be real, true, or genuine.”
This applies to any number of things or individuals. It answers the questions, is this real, is this legitimate, are you who you say you are.
Identity theft is on the rise and is something you must guard against, whether digital or not. Your identity allows access to any number of privileges and resources, from driving to working to accessing your bank accounts. Authentication is how your identity is confirmed or validated.
When you open a bank account, you are asked to provide identification. This is often a birth certificate, a driver’s license, Social Security number, address, and more. The financial institution has an authentication process by which they validate that you are who you say you are. From there, they have you sign documents, making that token, your signature, a line of validation moving forward. That is the authentication step.
You can also protect your identity by adding additional protection and requirements for authentication. By adding alerts or locks on your credit, you increase the required steps for authentication, which can stop someone from opening accounts in your name, as this would require additional steps to validate the identity.
User authentication is not new, nor is different levels of user authentication. Whether it’s showing a government ID, providing a Social Security number, providing a short-form birth certificate or a long-form one, the level of authentication required is typically proportional to the sensitivity of the information or access involved.
Digital authentication, or user authentication in the digital world, is the process of verifying the user is who they say they are. This is done at the basic level of matching a username and password to what was entered upon registration. If you meet those two basic authentication criteria, you are authorized for access.
This is also one of the most vulnerable points in security. When entering user ID and passwords you could be inadvertently offering up your information to potential hackers. This can happen any number of ways:
- Using public networks
- Clicking on a phishing email
- Someone could be watching over your shoulder
- Malware on your computer could be tracking keystrokes
It’s also likely that at some point you’ve forgotten a username or password and have to click on the reset link, further making you vulnerable, as you now likely have to access your email or mobile device.
To further protect against cyber threats, many sites, companies, and platforms have set up multi-factor authentication (MFA). This can be a simple two-factor authentication (2FA) or a more robust and adaptive process for authentication.
A few examples of MFA include:
- Validating the image/token you’ve selected at registration is correct
- Entering a pin number
- Answering a question
- A biometric scan (face, fingerprint, retina)
- Response to a push notification (authorize on a mobile device, enter a sent code)
Each of these offers more opportunities to authenticate that you are the user, which serves as additional protection and security for your identity and your information. It also serves to protect access to the networks and services you are using. | <urn:uuid:b6d71dc8-7464-4560-b0a1-47e8f4a8cc7f> | CC-MAIN-2022-40 | https://www.lastpass.com/nl/resources/learning/basics-of-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00106.warc.gz | en | 0.927669 | 751 | 3.921875 | 4 |
Where do our computers live? We tend to live in places that provide some comfort to our psyche for various reasons – but when we plug into the cyberworld – what does THAT neighborhood look like? How comfortable should you be when connected to the Internet?
I am trying to visually represent the “lay of the land” when you connect to the Internet.
I could list all of the Countries, but I will stick to a list of languages that are on the Internet from Internetworldstats.com¹.
How do we decide on the neighborhoods on the worldwide Internet? it must be by languages.
The reason I have chosen to segregate the Internet neighborhoods is that as human Internet users we are creatures of comfort and ease of use. Why would an English speaker go to a Chinese site? If they did it would be for a specific reason. Not sure if many other Chinese sites would be frequented.
You can see that languages can be a valid form of segmentation.
So the top10 languages add up to 2.6 Billion
I placed all English(873mil) in USA region because the Internet originated in the US and that has a weight of it s own.
Chinese(704mil), Arabic(168mil), Japanese(115mil), and Malaysian(99mil) are in Asia(1087mil)
Europe(673mil) consists of Spanish(257mil), Russian(103mil), French(97mil), Portuguese(132mil), and German(84mil).
Of course, Brazil has Portuguese and could be placed in America as well.
Mexico is Spanish as well as many South American nations which also could be placed in America. But I wanted to not overweight the USA as it received several million from England and Australia as it is.
This is representation is not perfect as many countries and languages may have multiple languages, But if that was the case like in Universities, the USA would just siphon more from everywhere due to its initial setup and momentum.
A very large country just beginning to fully connect is India which has English as primary language but also has Hindu and other regional dialects.
The other item I left off is Africa, as it consists of many European languages, the African native languages are very few in number.
To keep things simple I kept only the top10 languages and made some decisions for now.
So even imperfectly we can figure out the Cyber neighborhoods.
Why perform this exercise? Because we need to know the lay of the land to figure out our risk.
The big attack angle are all the US companies in English as they can be attacked by Eastern Europe and Asia and any other light in law area. Why is that an attack angle? Because the English websites are at least 26% of all sites, with a likely number higher (since the Internet was developed in the English speaking countries first gave them a leg up on the number of websites, as well as using old software.
Arpanet was the earliest network back in 1969 (pictures taken at slsc.org Science Center in Saint Louis)
NSFNet in 1986.
Notice the picture above shows the Internet and how it grew by 1991You can see the NAP’s (Network Access Points in the major data centers at east and west coast, as well as Chicago, Denver, and Texas.
US businesses seem to be obsessed with running their businesses instead of spending a little bit of time on cybersecurity.
In either case the major cyber breaches have overwhelmingly been based in the US for whatever reason.
So the point is that when you connect your computer to the Internet you are in a neighborhood with a lot of unsavory characters (criminals, nation state actors, anonymous hackers and more)
(focusing on the DarkNet and criminal element)
This fact of life does not mean you spend 50% of revenue on Cyber security, but maybe you need to spend a _little_ more than you have been with risk analysis and disaster recovery plans and more.
contact Us to Discuss | <urn:uuid:50f2550e-c7dc-41b2-b361-a1a9fc9a9b5f> | CC-MAIN-2022-40 | https://fixvirus.com/defensive-cybersecurity-services/the-cyber-neighborhood-is-it-safe/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00307.warc.gz | en | 0.96951 | 825 | 2.59375 | 3 |
Secure Socket Shell (SSH), also called Secure Shell, is a special network protocol leveraging public-key cryptography to enable authorized users to remotely access a computer or other device via access credentials called SSH keys. Because they are used to access sensitive resources and perform critical, highly privileged activities, it’s vital to properly manage SSH keys as you would other sensitive credentials.
While SSH keys are standard, and more frequently used, in Unix and Linux environments, they are also used in Windows systems.
Read on for an overview of SSH key management that will cover SSH security and authentication, how SSH keys work, the risks and benefits to consider with SSH keys, and strategies for improving SSH security and key management.
Overview of SSH Key Security Authentication
The Secure Shell, and the public-key cryptography (an encryption schema using two keys: one public, one private) that SSH keys use, is designed to provide strong, encrypted verification and communication between the user and a remote computer.
SSH technology is based on the client-server model and provides an ideal way to access remote devices over unsecured networks, like the internet. The technology is typically used by administrators for several functions including:
- Logging into remote computers/servers for support and maintenance
- Transferring of files from computer to computer
- Remote execution of commands
- Offering support and updates
Today, Telnet, one of the Internet’s first remote login protocols and in use since the 1960’s, has largely been supplanted by SSH, owing to the latter protocol’s enhanced security features.
Benefits of SSH Key Authentication
IT teams routinely use SSH keys to automate secure access to servers, bypassing the need to manually enter log-in credentials. The SSH network protocol encrypts all traffic between the client and the server while it is in transit. This means that anyone eavesdropping on the traffic, such as by packet sniffing, would not be able to improperly access and decrypt transmitted data.
SSH is also resistant to brute force attacks and protects against certain attack vectors being used to gain access to remote machines. Public key encryption ensures that passwords need not be sent over the network, providing an additional layer of security. SSH keys are an excellent way to stay secure and compliant with various regulations and mandates, provided that you use best practice to generate, store, manage, and remove them.
Due to the massive number of SSH keys that may be in use or exist across an enterprise at any time, SSH key management software can significantly lower the overhead and risk of manually managing and updating keys.
Generating SSH Keys
SSH keys are always generated in pairs. These pairs consist of one “public” SSH key, and one “private” SSH key. These keys are paired using extremely strong algorithms, making it infeasible to guess or “fake” a private key, even if you know the public key. While private keys should be kept secret by the authorized person wishing to gain access to a system, public keys may be freely shared.
SSH keys are usually generated by a user entering a passphrase or other information. Typically, public and private keys will be generated from phrases of a few words.
SSH Key Access
A remote computer identifies itself to a user using its public key. When a user attempts to connect, the remote computer issues a “challenge” derived from the public key, for which only someone possessing the paired private key could correctly decrypt and respond. Once the challenge is correctly answered, the remote computer provides access.
In almost all cases, generating keys, sharing public keys, issuing challenges, answering them, and gaining access is managed by SSH software, so the process is largely transparent to the end user.
SSH Key Sprawl Poses Security & Operational Risk
SSH key sprawl exposes organizations to considerable cyber risk, especially considering that they can provide such a high level of privileged access, such as root. In a study of over 400 IT security professionals conducted by Dimensional Research, over 90% of respondents reported that they lacked a complete and accurate inventory of SSH keys. Additionally, nearly two out of three of the cybersecurity professionals stated that they do not actively rotate SSH keys.
With typically 50 – 200 SSH keys per server, organizations may have upwards of a million SSH keys. While many of these SSH keys are long dormant and forgotten, they can provide a backdoor for hackers to infiltrate critical servers. And once one server and SSH key is cracked, an attacker could move laterally and find more hidden keys.
As with other types of privileged credentials (or passwords in general), when organizations rely on manual processes, there is a proclivity to reuse a passphrase across many SSH keys or to reuse the same public SSH key. This means that one compromised key can then be harnessed to infiltrate multiple servers.
6 SSH Key Security Best Practices
As with any other security protocols, it’s imperative to maintain strong standards and best practice around SSH network protocols and keys. NIST IR 7966 offers guidance for government organizations, businesses, and auditors on proper security controls for SSH implementations. The NIST recommendations emphasize SSH key discovery, rotation, usage, and monitoring.
In even modestly complex environments, manual SSH Key rotation is infeasible. For instance, you could identify accounts set up to use SSH keys, you could manually scan through authorized keys file in the hidden .SSH user folder, but this falls short of helping you identify who has the private key matching any of the public keys in the file.
Organizations who recognize the risks posed by SSH Key sprawl risk and take a proactive cybersecurity posture typically use a dedicated SSH key management or automated privileged password management (PPM) solution to generate unique key pairs for each system, and perform frequent rotation. Automated solutions dramatically simplify the process of creating and rotating SSH keys, eliminating SSH key sprawl, and ensuring SSH keys enable productivity without compromising security.
To tighten security controls around SSH Keys, you should also apply the following six best practices:
1. Discover all SSH Keys and Bring Under Active Management
A first step to eliminating SSH key sprawl and properly assessing SSH security risk is to discover and inventory all SSH keys, and then to reign in centralized control of all keys. This is also an appropriate juncture to determine who is using various keys and how the keys are being used.
2. Ensure SSH Keys Are Associated With a Single Individual
Tie SSH keys back to an individual, rather than just to an account that can be accessed by multiple users. This will provide an effective SSH audit trail and more direct oversight.
3. Enforce Minimal Levels of User Rights Through PoLP
Apply the principle of least privilege (PoLP), such as in tying SSH keys to granular areas of remote devices, so users can only access certain, necessary systems. This limits the potential fallout from misuse of SSH keys.
4. Stay Attentive to SSH Key Rotation
Implement diligent SSH Key rotation -- force users to generate keys on a regular basis and disallow use of the same passphrases across multiple accounts or iterations. These actions help protect the organization from password re-use attacks. In organizations with a large SSH key estate, this can only be feasibly performed via an automated solution.
5. Eliminate Hardcoded SSH Keys
SSH Keys are one of the many types of credentials that can be embedded within code, such as in applications and files. This practice creates dangerous backdoors for malware and hackers to exploit. Embedded keys that use simple or default passphrases may be vulnerable to password-guessing and other attacks. Therefore, an important piece of SSH security is to uncover and eliminate embedded SSH keys, and bring them under centralized management.
6. Audit All Privileged Session Activity
Any privileged session started via a SSH Key authentication (or other means) should be recorded and audited to meet both cybersecurity and regulatory needs. Privileged session management activities can entail capturing keystrokes and screens (allowing for live view and playback). Ideally, you also layer on the ability to control (pause or terminate) privileged sessions in real-time to maintain strong oversight and a short leash over privileged activity.
SSH Key Management Resources
- Secure and Manage SSH Keys (data sheet)
- Simplified SSH Key Management (web page)
- Privileged Password Management Explained (white paper)
For more security terms explained, check out the BeyondTrust glossary.
Matt Miller, Director, Content Marketing & SEO
Matt Miller is Director, Content Marketing at BeyondTrust. Prior to BeyondTrust, he developed and executed marketing strategies on cybersecurity, cloud technologies, and data governance in roles at Accelerite (a business unit of Persistent Systems), WatchGuard Technologies, and Microsoft. Earlier in his career Matt held various roles in IR, marketing, and corporate communications in the biotech / biopharmaceutical industry. His experience and interests traverse cybersecurity, cloud / virtualization, IoT, economics, information governance, and risk management. He is also an avid homebrewer (working toward his Black Belt in beer) and writer. | <urn:uuid:c0165f4f-4bf9-4dc8-bacc-e0cf2e120785> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/ssh-key-management-overview-6-best-practices | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00307.warc.gz | en | 0.911996 | 1,876 | 3.421875 | 3 |
This site is operated by a business or businesses owned by Informa PLC and all copyright resides with them. Informa PLC's registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. Number 8860726.
Network Configuration: Using CI/CD Methodology
The process of making and integrating minor changes to software projects is known as continuous integration. Smaller changes have a lower impact and are easier to test than larger changes. Continuous deployment/delivery pushes those changes to the production environment in a controlled manner, helping avoid human error due to manual processes. The combination is called DevOps, shorthand for software development operations.
The software development world has employed DevOps methodology for years, but few organizations use it for their networks. The problem is that networks and their configurations are fundamentally different than that of business software. With business software, it’s relatively easy to create software-only test environments and employ various tools to run automated tests to verify the operational correctness of changes. Similar tools for networking are in their infancy. As a result, most organizations are not yet applying CI/CD techniques to network configurations—something that must change.
Requirements for Change
To apply CI/CD to networking, our network change process must model the hardware and software of the existing network infrastructure. We can then create tests to validate network configuration changes before deploying them to the production network.
To build the model, we need discovery systems that identify the network infrastructure components, interconnections, and configurations. The use of network virtualization techniques like virtual routing and forwarding (VRF) instances and Ethernet VPN (EVPN) complicate the task by layering virtual networks on top of the physical infrastructure.
Ideally, the discovery system would also determine the network flows requiring support, potentially identifying each application and its interconnection requirements. This task is problematic in large networks that support hundreds of applications. It would be inappropriate to model and replicate the connectivity used by an unknown malware package in a network model.
We also need tools that use the discovered topology and configuration to create a “network digital twin” and use it for pre-deployment testing. This digital twin must be an “act-alike” twin that can accurately model configuration and topology changes. To verify the operation, it will need to replicate enough of the traffic flows to validate whether the desired connectivity exists (or doesn’t exist) after a proposed change.
Lastly, we need methods for evaluating the correctness of network operation—in both the test network and eventually in the operational network. The simple checks would verify neighboring network device reachability or routing table entries. This area is where the networking industry’s capabilities are today. In the future, I expect to see greater fidelity regarding network digital twins where synthetic network traffic can potentially allow verification of security and quality of service (QoS) functionality.
Development, Continuous Integration, and Continuous Delivery
The network design and development process must be the same as we are using for software development. That is, develop and test the change in a development environment that mirrors the real environment. When it passes all tests, the change gets submitted to the CI/CD process.
The CI phase builds the virtual test network, applies the proposed network change, and evaluates the results. Did the change achieve the desired result? Test failures could be because the change was incorrect, incomplete, or that the tests were insufficient. A test failure could also be due to a conflict between two changes. Regardless, the network engineer receives the test results for evaluation and correction.
However, if the CI process indicates a successful change, the CD process can apply this change to the production network. Undoubtedly, many organizations will want to verify the results of successful tests before applying them to the production network as part of their normal change control process.
The goal of the CI/CD process is to avoid the human errors that are prevalent in today’s network operational processes.
Adopting CI/CD Processes
Adopting networking change CI/CD processes is a major operational shift and is best done with a slow and easy approach. Begin with simple automation tasks that have little impact on network operations when you make a mistake. Yes, you will make mistakes, so strive to make them small and low impact.
Keep your changes small by breaking larger changes into small, independent pieces. For example, a QoS implementation might start with classification and marking, while the next step would implement queueing and forwarding logic.
You should adopt a CI/CD process for sub-sets of the overall network configuration. For example, start with managing the configuration of all the global static configuration items like network time protocol (NTP), simple network management protocol (SNMP), system logging protocol (Syslog) , dynamic host configuration protocol (DHCP), and login authentication. Then apply what you learn to more advanced parts of the configuration like interfaces and routing protocols. As you get started, make the network’s design and configuration consistent across similar constructs. Branch sites of the same general size should use one design and a configuration that differs only in things like IP addresses. This consistency greatly simplifies the creation of test environments and makes it easier to automate configuration changes.
Your CI/CD process should become the only way to deploy new changes to the covered parts of the configuration. Otherwise, you create inconsistencies across the network that adds unnecessary complexity to your CI/CD automation process.
If you’re beginning to embark on the networking CI/CD journey, the next step is reading and research. I recommend How Do You Set Up a Network CICD Pipeline?, an article by Brunner Toywa, a telecommunications engineer. In addition, generic articles on CI/CD, abound, such as An Introduction to CI/CD Best Practices. Then work on setting up a development and test environment for change development and the CI part of the process. You’ll also need to learn about code repositories and how to automate your build cycle using tools like Jenkins or Drone.
The whole process may seem like a lot of work, but its objective is to reduce network outages by adopting best practices from the software development industry. | <urn:uuid:a42ca55a-2f76-471a-8724-bb88493fbdba> | CC-MAIN-2022-40 | https://www.nojitter.com/enterprise-networking/network-configuration-using-cicd-methodology | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00307.warc.gz | en | 0.916999 | 1,271 | 2.609375 | 3 |
The Abbey of Our Lady of Fontevraud is a former Benedictine abbey founded in 1101. This is one of the largest monastic complexes in Europe.
This was originally a mixed monastery with both men and women in the same buildings. It was later expanded into multiple monasteries with the sexes segregated. The Counts of Anjou protected it from the beginning as it was in their territory. Then the Plantagenet dynasty that ruled England and half of France supported it, with prominent members buried in the main abbey church.
The French Revolution shut down religious establishments including this one. It was converted into a prison, and operated as such until 1963.
The complex is built from the soft limestone underlying the region. The stone was quarried nearby.
Pope Urban II visited Angers in 1096 and appointed Robert of Arbrissel, a monk and hermit, to a preaching mission in this area. Robert traveled from settlement to settlement, establishing a large following of men and women from various social classes. He founded the Abbey of Fontevraud in 1099-1101, mixing men and women in violation of the usual rules.
There was a hermitic practice of συνείσακτοι or syneisaktisme, an ascetic practice of chaste cohabitation intended to overcome carnal desire. It came from the Desert Fathers and was more common in Celtic monasticism than in continental Europe, with Robert's group the most prominent European example. They even slept together. Robert made a point of only sleeping with the former prostitutes, not the noble women, at least most of the time.
Pope Gregory VII had initiated a series of reforms during his 1073-1085 papacy. Those reforms reached Fontevraud's co-ed abbey around 1100 and put an end to their syneisaktisme practice. In 1101 the mixed house was split into a double order: the Monastery of Saint-Jean-de-l'Habit for the men and the Monastery of the Grand Moûtier for the women. They soon created two more organizations: the Monastery of the Madeleine for repentant sinners and the Saint-Lazare Convent for lepers. Since we're all sinners and we should all be repentant, I suspect the Monastery of the Madeleine specialized in Robert's former sleeping partners.
The changes led to the bishop of Poitiers and Pope Pascal II recognizing the order of Fontevraud in 1106.
What is the difference between an Abbey and a Monastery?
Abbeys and monasteries are similar religious organizations. A simple explanation is that "monastery" is the general term, and an abbey is a monastery of higher ecclesiastic rank.
A monastery is a building or cluster of buildings containing the workplaces and housing of people following a religious way of life we call monasticism. That term comes from the Greek μοναχός, derived from μόνος, meaning alone. These could be men or women, monks or nuns, who have removed themselves from the secular world, and they might be Christian, Buddhist, Hindu, or Jain. They might live in a tight-knit community, or they might be isolated hermits.
A monastery, at least in Christianity, might be an abbey under the rule of an abbot or abbess, or a priory under the rule of a prior or prioress (a lower rank than the abbot or abbess of an abbey), or a hermitage, which is the dwelling of an individual hermit or anchorite or anchoress. Those last terms come from the Greek ἀναχωρητής, in turn derived from ἀναχωρέω, meaning to withdraw.
By 1200 Fontevraud supported a hundred priories throughout France, and later expanded into Spain and England.
Fontevraud and the Plantagenets
Eleanor of Aquitaine
Eleanor of Aquitaine inherited the Duchy of Aquitaine, almost the southwestern quarter of today's France, from her father in 1137. Three months later, she married King Louis VII of France.
The two soon left on the Second Crusade, the first to be led by actual Kings. Louis VII led the French forces, and Conrad III led the Germans. Eleanor's uncle Raymond was Prince of Antioch, one of the Crusader states, and he was looking for protection. Eleanor went to Vézelay, one of the purported locations of Mary Magdalene's grave, to organize supporters. She recruited some of her royal ladies-in-waiting and 300 non-noble Aquitainian vassals, and insisted on accompanying the army.
The French and German armies took separate paths through central and eastern Europe. "Let's split up, we can do more damage that way." Manuel I Komnenos, the Byzantine Emperor, hindered the progress of the western European armies through Byzantine-controlled eastern Europe.
Louis and Eleanor had a grand time in Constantinople. They stayed for three weeks, attending feasts and seeing the many sights. Meanwhile the German army continued across Asia Minor toward Antioch.
Eventually it was time to get the French army back on the march. The Byzantine Emperor told them that Conrad's German army had achieved a great victory against the Turkish forces. He may have told them that in order to get them out of town quicker. Louis and Eleanor had just reached Nicaea, not far from Constantinople at the eastern end of the Sea of Marmara, when Conrad and the battered survivors of the German army staggered into camp with a gruesome story of their defeat.
The French, joined by some surviving German forces, continued into Asia Minor. They got as far as Ephesus before being attacked by a small Turkish detachment on what happened to be Christmas Eve. The French slaughtered those Turks and took over their camp.
Louis decided to cross the Phrygian Mountains heading roughly toward Olimpos in order to take a more direct route to Antioch. Eleanor marched in the vanguard, the point group led by her Aquitainian vassal Geoffrey de Rancon. Louis decided to take charge of the rear of the column, with the unarmed pilgrims and baggage trains.
This was as far as the Germans had reached, and the French army marched past the unburied bodies of the recently massacred German army. Eleanor and the lead groups crossed the highest point at Κάδμος or Mount Cadmus, known to the Turks as Baba Dağı. Then the Turks attacked the rear of the column: the armed rearguard, the unarmed pilgrims with the baggage train, and Louis himself at the very rear.
The Turkish attack took the French by surprise. The body count was high. Louis had dressed in a pilgrim's tunic instead of royal garb, so while his bodyguards' limbs were severed and skulls smashed, Louis managed to escape notice and survive. He reportedly "nimbly and bravely climbed a rock by making use of some tree roots which God had provided for his safety."
Eleanor was indirectly blamed for the disaster, since her vassal Geoffrey de Rancon had made the decision to continue up and over the divide. As a non-noble, he was a convenient fall guy. Eleanor was further blamed for the size of the baggage train, and for the fact that her Aquitainian forces were marching at the front of the army and not toward the rear where the attack happened.
The army split. Royalty and upper nobility retreated to the coast and sailed to Antioch while the soldiers and commoners continued marching across Asia Minor.
Things became tense once they arrived in Antioch. Eleanor's uncle Raymond, Prince of Antioch, pressured Louis to attack the Muslim army in Aleppo. That made the most military sense and it supported the Pope's objective of retaking Edessa, but Louis was much more interested in making a pilgrimage to Jerusalem.
Eleanor was fed up with Louis and wanted to stay in Antioch. Sure, they had had some good times — marching across Europe, being fêted in Constantinople, the excitement in Asia Minor — but she wanted out of the marriage. She brought up the issue of the consanguinity between her and Louis. They were too closely related. Third cousins once removed, both descended from Robert II of France. That invalidated a marriage in medieval Europe.
Now Louis was the one who had had too much. He forcibly dragged Eleanor on to Jerusalem along with what remained of his army.
Louis got his pilgrimage to Jerusalem. He later organized some military support from Conrad and King Baldwin III of Jerusalem and tried an attack on Damascus. This violated a truce between the rulers of Jerusalem and Damascus and ended in failure. Louis and Eleanor sailed for France in separate ships.
Their ships were attacked by Byzantine ships acting under orders of the Byzantine Emperor. Maybe they had left some bills unpaid in Constantinople. The attack and following storms separated the ships, with Eleanor's driven south to the Barbary Coast. When she arrived in Sicily two months later, she found that she and Louis had both been presumed dead. King Roger II of Sicily gave her food and board, and then Louis's ship eventually arrived. This may have come as a disappointment.
They stopped in Italy on their way to Marseille to ask Pope Eugene III for a divorce, but he proclaimed that their marriage was legal. He had a special bed prepared for them, and told them to get into it and get down to business. Eleanor became pregnant, the Pope said "Problem solved!", and they sailed for Marseille and France.
To Louis' frustration, the result was only a second daughter. That was the end of it for Louis. He rounded up the Archbishops of Sens, Bordeaux, Rouens, and Reims and got the approval of the Pope. "Oh, you meant third cousin? Here's an annulment."
Eight weeks later, in May 1152, Eleanor married again.
Henry of Anjou becomes Henry II
Anjou was a county, a region ruled by a man ranked as Count in the hierarchy of European nobility. It was centered around the city of Angers in the lower Loire river valley, south of Normandy and lying between Paris and Brittany. Henry of Anjou was the Count of Anjou.
Henry married Eleanor almost as soon as her marriage to Louis was annulled. They were even more closely related, being third cousins and both descended from Robert II, but neither was looking to get out of this marriage. At least not yet.
With Anjou and Aquitaine, the couple now controlled the southwestern quarter to third of what today is France. They controlled more of France than any ruler since the Carolingians, Charlemagne and his dynasty. Meanwhile the King of France only controlled the Île-de-France region surrounding Paris, and his "control" of that depended on the cooperation of the various nobility really running things at the local level.
The next year, 1154, Henry, Count of Anjou and husband of Eleanor of Aquitaine, also inherited control of the Duchy of Normandy and the throne of England. He became the English King Henry II. About half of the territory of today's France was now controlled by the newly-crowned King of England.
Henry and Eleanor first visited Fontevraud in 1154. They later placed their two children Jeanne and John, the future king of England, in the care of the monastery.
The rapid rise to power is the good news. But otherwise, things never went well for Henry II. Squabbling within his family led to multiple armed rebellions. He fathered eight children by Eleanor and several more by mistresses, and he and Eleanor argued frequently about more than just the mistresses. Chosen heirs died of dysentery and tournament accidents. Henry appointed his friend Thomas Becket as Archbishop of Canterbury, but then Thomas didn't remain a compliant puppet. Henry muttered something about "Who will rid me of this troublesome priest" and then four knights promptly killed Thomas. Or at least Henry said something along those lines. The awkward part was that he was muttering about Thomas and four knights took that as an assassination tasking. Finally, he had Eleanor subjected to what we would call "house arrest" today, imprisoned in a series of royal castles for the last sixteen years of Henry's life.
Henry II died in Chinon in 1189, physically and morally exhausted by fighting one of his sons and the King of France. Not just arguing, but actual military action. He had asked to be buried in Grandmont in the Limousin, but he died in the summer and no one wanted to try to transport his body that distance through the summer heat. Fontevraud made a convenient burial place.
Henry's third son Richard, known as Richard Cœur de Lion or Richard Lionheart, was crowned as his successor. As a third son he hadn't been expected to rule, but those rebellions and tournament accidents had bumped him up in the queue. Like the ultimate American country music figure, his first act was to release his mother Eleanor from prison. Once crowned as King of England, he probably spent no more than six months in England. He started by riding off on the Third Crusade.
The point was to recapture the Holy Land from the forces of Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb, known as Saladin in the west, but Richard found more to do. He massacred 2,600 prisoners at Acre, deposed the King of Cyprus and sold the entire island, insulted a number of other royal leaders, and apparently arranged for the assassination in 1192 of Conrad of Montferrat, the King of Jerusalem and ruler of the Crusader States.
Richard had planned to become the King of Jerusalem once Conrad was out of the way, but that didn't work out. Then, traveling in disguise on his way home from the Crusades, Richard was captured and imprisoned by Leopold V of Austria. Leopold was Conrad's cousin, and Richard had already insulted Leopold and refused to share his spoils from the Crusade. That might have been why Richard was traveling in disguise.
Leopold handed Richard Lionheart over to Henry the Lion, the Duke of Saxony and Duke of Bavaria. Leopold probably figured that two guys named Lion-something deserved each other's company. Henry the Lion got in touch with England. Richard's brother John had taken over Richard's unattended English holdings and let Normandy fall to Phillip II of France. Richard's ransom would cost 150,000 Marks. John imposed a 25% tax on goods and income to raise the money. This made him even less popular in England than he already was. John's pettiness and cruelty was the basis of the legend of Robin Hood.
John raised the ransom, Henry the Lion released Richard Lionheart, and Richard briefly returned to England. He got back in charge of affairs in England, forgave John, and soon left England in 1194 for what would be the last time.
Over the next five years, 1194-1199, King Richard Lionheart of England fought King Phillip II of France for control of Normandy. Château Gaillard on the Seine above Les Andely was Richard's largest military architecture project and his favorite residence. It's thought that Richard applied design ideas he had picked up from castles he had seen in Syria during the Crusades.
Then Richard Lionheart died in 1199 at Chalus-Chabrol. Eleanor had his body brought here to Fontevraud for burial. Well, most of it, anyway. His heart is buried in Rouen and his lower digestive tract in Châlus, where he died.
Then Richard's sister Jeanne died here later that year, and Eleanor had her buried in the abbey church.
At this point there began to be talk that the Plantagenet family was trying to create a "dynastic necropolis" in Fontevraud to exert a sense of possession over their ancestral lands including Poitou and Aquitaine.
Eleanor moved about 80 kilometers south to Poitou. When she died in 1204, she was also buried here at Fontevraud.
Eleanor and Henry's son John was crowned King of England in 1199. He married Isabella of Angoulême in 1200. He was not loved, with the barons forcing him to sign the Magna Carta in 1215 and civil war breaking out soon afterward. Supporters of his son Henry III proclaimed Henry the legitimate King, making him the only king since the Norman Invasion to be crowned while his father was still alive. John died of dysentery while on a military campaign in western England in late 1216, and Henry's supporters were victorious.
Historians can come up with positive attributes such as "hard-working administrator", but John was petty and cruel and had reason to be the villain of the Robin Hood legends.
In 1254, Henry III organized the transfer of the remains of his mother, Isabella of Angoulême, from Angoumois to Fontevraud for burial. Meanwhile, John was buried in Worcester Cathedral.
The end of the Plantagenet dynasty hurt the abbey. Then the Hundred Years War led to even further declines. It lost about 60% of its land rents. The abbey itself wasn't raided, but many surrounding areas normally supporting the abbey were repeatedly devastated in 1357, 1369, and 1380.
Things began to improve through the late 1400s. Then there were ups and downs, nothing like the prosperous times under the Plantagenets, but further construction projects continued.
Revolution and Conversion into a Prison
The French Revolution changed everything. On 2 November 1789 all church property was declared national property. There were about a hundred religious and lay members living here, and the community continued for several months. In April 1790 most of the remaining religious community left.Mont
In 1804 Napoleon signed a decree converting several former religious institutions including this abbey and Mont Saint-Michel into prisons. The Revolution marked the real starting point of the French penitentiary system, with the conversion of the château of Saumur, Fontevraud and Mont Saint-Michel and many more monasteries, and other historic structures. Prison population tripled under Napoleon's rule. By the early 21st century the incarceration rate in France had increased to 103 per 100,000, one-seventh the 693 per 100,000 rate in the United States.
The main structures of most of the abbey buildings were preserved, although there were heavy modifications. The nave of the main church had two floor levels added in order to house prisoners. The many windows and doors were blocked to make escape more difficult. The first prisoners arrived in 1812. The prison was designed to accommodate 1,000 inmates, but by 1830 up to 2,000 were confined here.
Prisoners built workshops and factories. These, along with guard positions, provided work for the local people.
In the 1950s and early 1960s the prison population here was down to the 600s, back up to 818 near the end. Most of the prisoners were transferred out in 1963, except for 40 of the more trusted ones who were kept there to maintain the green areas and work on the demolition of the added prison-specific structures. In 1985 the last prisoners were finally transferred out.
The cloister, now restored, is about 60 meters on a side.
The main kitchen has a distinctive roof with several chimneys.
Now we'll continue to Chinon,
where Joan of Arc met the French Dauphin.
Next ❯ Chinon | <urn:uuid:b7d9e218-515e-4fc3-89be-adbadaafc9db> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/france/loire-valley/fontevraud-abbey.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00307.warc.gz | en | 0.97768 | 4,174 | 3.046875 | 3 |
Server virtualization has its fair share of benefits for a business – maximizing the IT capabilities, saving physical spaces, and cutting costs on energy and new equipment. But for a company that’s just starting to explore the realm of server virtualization, choosing one from the three types of server virtualization can be daunting.
So what are the three types of server virtualization and how do companies utilize them? Most companies use either full virtualization, para-virtualization, and OS-level virtualization. The difference lies in the OS modification and hypervisor each type employs.
Understanding Server Virtualization: What it is and How it Works
Server computers are powerful – they manage computer networks, store files, and host applications. But most of the time, these powerful processing units are not utilized to their full potential because businesses tend to purchase more computers and other hardware instead, which is not always the wise decision because it occupies more physical space and consumes more energy.
Server virtualization offers one solution to these two problems by creating multiple virtual servers in one physical server. This method ensures that each processing unit is maximized to its full capacity, preventing the need for more computer units in a data center. The adoption of different virtualization technologies, including server virtualization is expected to rise up to 56% by 2021.
Currently, there are three types of virtualization used for sharing resources, memory, and processing.
This type of virtualization is widely utilized in the IT community because it only involves simple virtualization. It makes use of hypervisors to emulate an artificial hardware device along with everything it needs to host operating systems.
In full virtualization, separate hardware emulations are created to cater to individual guest operating systems. This makes each guest server fully functional and isolated from the other servers in a single physical hardware unit.
What’s great about this type is that you can run different operating systems in one server, since they are independent of each other. Modification of each OS also isn’t necessary for the full virtualization to be effective.
Currently, enterprises make use of two types of full virtualization:
- Software Assisted Full Virtualization
Software-assisted full virtualization uses binary translation when trapping and virtualizing the execution of instruction sets. The binary translation also emulates the hardware by utilizing software instruction sets. Here’s a list of software under this type:
- VMware workstation (32-bit guests)
- Virtual PC
- VirtualBox (32-bit guests)
- VMWare Server
- Hardware-Assisted Full Virtualization
On the other hand, hardware-assisted virtualization eliminates the need for binary translation. Instead, the original hardware is directly interrupted by the virtualization technology found on the X86 Processors (Intel VT-x and AMD-V). Depending on the guest OS’s instructions, privileged instructions can be executed directly on the processor.
This type of full virtualization can use either of the two hypervisor types:
- Type 1 Hypervisor – also known as the bare-metal hypervisor type, lays directly on top of the physical server and its hardware. Since there is no software of the operating system between the two, Type 1 can provide excellent stability and performance.
Since Type 1 hypervisors are relatively simple, there isn’t much functionality to them. Moreover, once this hypervisor is installed on the hardware, the latter cannot be utilized for anything else except virtualization. Type 1 hypervisors include:
- VMware vSphere with ESX/ESXi
- Kernel-based Virtual Machine (KVM)
- Microsoft Hyper-V
- Oracle VM
- Citrix Hypervisor
- Type 2 Hypervisor – also known as the hosted hypervisor type, is installed inside the operating system of the host machine. Unlike the Type 1 hypervisor, this one has one software layer underneath.
The Type 2 hypervisor is typically used in data centers that only have a small number of physical servers. What makes it convenient to use is that it isn’t much different from the applications in the current operating system. It’s easy to set up and manage multiple virtual machines once the hypervisor has been installed. Here are some of the type 2 hypervisors in the market:
- Oracle VM Virtualbox
- VMWare Workstation Pro/VMWare Fusion
- Windows Virtual PC
- Parallels Desktop
Para-virtualization is a type similar to full virtualization because it also uses the host-guest paradigm. The only main difference is that the guest systems are aware of each other’s presence and they all work as one entire unit.
This type is also time efficient and less intrusive since the virtual machines do not trap on privileged instructions. The operating systems acknowledge the hypervisor used in the hardware, sending the comments – known as hypercalls – in a more direct way.
To exchange the hypercalls between hypervisors and operating systems, both of them must be modified through implementing an application programming interface (API).
Sine paravirtualization utilizes a slightly different hypervisor than full virtualization, here are some of the more common products that support it:
- IBM LPAR
- Oracle VM for SPARC (LDOM)
- Oracle VM for X86 (OVM)
Unlike the first two types of server virtualization, OS-level virtualization doesn’t use a hypervisor and doesn’t apply a host-guest paradigm. Instead, it utilizes a process called “containerization” which creates multiple user-space instances (containers or virtual environments) through a kernel in the OS.
A specific container can only utilize the amount of resources allocated for them, not the available resources for the primary OS. Programs can also run in the container but the access to content is only limited to everything associated with that container and the devices assigned to it.
In this virtualization, kernels and operating systems can have different versions of OS from the host – for example, if the host server runs on Linux, the kernels and OS can only use different versions of Linux and not Windows. Otherwise, the OS-level won’t work.
Here are some of the commonly used containers in the market:
- Oracle Solaris
- Linux LCX
- AIX WPAR
What to Consider for Server Virtualization
Server virtualization is a promising method that can maximize the use of IT resources – that’s why tech giants like Microsoft, Dell, and IBM are continuously developing this technology. However, before picking the optimal virtualization for a business, it’s important to determine their benefits and disadvantages first.
|Full Virtualization||It can support different and unmodified operating systems on one physical server.||To prevent the slowing down of applications, you will need to allocate a big part of the physical server’s processor for the hypervisor.|
|Para-virtualization||Para-Virtualized servers don’t need as much space for processing in the physical server.||The operating system of guest servers needs modification to be able to communicate hypercalls with the host.|
|OS-Level Virtualization||It does not need a hypervisor, therefore no additional space requirement needed for processing.||To build a homogenous environment, you are required to install the same operating system on all guest servers.|
Aside from the virtualization method, you should also consider the following factors before settling on a specific type:
- OS Rebooting – Operating system rebooting is typically overlooked because OS are expected to work all the time. However, there is still a small risk of OS crashes. If this happens, an independent OS reboot must be possible.
- Deployment Work – While the type 2 hypervisor is easy to implement, it’s not the same case for type 1 hypervisor. The bare-metal hypervisor is much more difficult to handle than the former, so a thorough integration process is needed – especially for large deployments.
- Multiprocessing – Before selecting the virtualization solution, check first it includes symmetric multiprocessing support (SMP) for multiple processors of the same type or asymmetric multiprocessing support (AMP) for multiple processors of a different type. Some virtualization infrastructures also come with both SMP and AMP combined.
Adopt Server Virtualization for Your Company with Abacus
Thousands of companies are turning to server virtualization as a cost-effective IT strategy, so why not do the same?
With our team of IT experts at Abacus, we will help you maintain optimal efficiency in your office. Enhancing business productivity starts with the right technological solution. Take the first step with us in reaching the top. Contact (856) 505 6860 or email firstname.lastname@example.org now for your first consultation. | <urn:uuid:78937b2e-1b34-4766-ac9f-9dfdd936c36a> | CC-MAIN-2022-40 | https://goabacus.com/three-types-of-server-virtualization-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00307.warc.gz | en | 0.897411 | 1,852 | 2.65625 | 3 |
Monday, September 26, 2022
Published 2 Years Ago on Thursday, Nov 26 2020 By Karim Husami
In our hyper-connected world, advancing technology in IoT is bringing promise to many systems across industry sectors.
The Internet of Medical Things or IoMT which is a subset of the Internet of Things is one of the many emerging technologies that has impacted the healthcare system and our lives.
Hospitals and medical centers depend on smart devices for doctors to monitor their patients and their medical situations quickly and efficiently. In addition, these devices offer more precise analysis and earlier recognition of medical issues with the help of information flow.
According to a report published by Deloitte, “Hospitals in the U.S. have an average of 15 smart medical devices per bed, while the IoMT market is expected to reach $52 billion by 2022.”
IoMT, like any other technological device, is also subject to security risks such as cyberattacks. Malicious activities have increased in number in the last few years targeting medical institutions and being the cause of major disruption in the healthcare system, financial losses, which has lowered patient’s confidence in healthcare.
For example, hackers disabled computer systems at Düsseldorf University Hospital in Germany last September and led to the death of a patient while doctors attempted to transfer her to another hospital. The ransomware attack scrambles data, making computer systems inoperable.
The hospital’s President Arne Schönbohm said hackers took advantage of a well-known vulnerability in a piece of VPN (virtual private network) software developed by Citrix and warned other organizations to protect themselves from the flaw.
The need to implement robust IoMT security solutions in the medical industry has never been more important. Encryptions and conducting a secure boot – making sure that when a device is turned on, none of its configurations have been modified – are some of the basic yet fundamental security measures providers and manufacturers of IoT devices can take.
Cyberattacks will never simply vanish. No matter the level of precautions we take, there will always be a degree of risk but making sure devices are secure and teams are vigilant and prepared, may help reduce overall disruption caused by cybercrime.
Web 3 simply refers to the next version of the internet, which supports decentralized protocols and promises to lessen reliance on major tech firms, and grants ownership to the very people that use it. Having said that, how central is individuality in Web3? Web3, while still a vague concept, represents the read/write/own version of the […]
Stay tuned with our weekly newsletter on all telecom and tech related news.
© Copyright 2022, All Rights Reserved | <urn:uuid:bead8b1e-8087-4575-82d6-e60f7a944e7b> | CC-MAIN-2022-40 | https://insidetelecom.com/the-importance-of-iomt-security-across-the-healthcare-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00307.warc.gz | en | 0.953743 | 579 | 2.9375 | 3 |
Data is an increasingly integral part of the enterprise. With the advent of data analtyics, more companies are leveraging software tools to gather information regarding business operations, market trends and consumer behaviors to reduce inefficiencies and increase overall operational effectiveness. In a very short of amount of time, data has become an extremely valuable commodity.
A team of researchers recently found that businesses are storing more information than ever before. These files include everything from financial transactions and payment records to customer information. The total amount of data stored by the world's businesses has eclipsed two zettabytes. SMBs and large enterprises alike are gathering massive amounts of information and are not expected to slow down anytime soon. The average enterprise has 100,000 terabytes of files stored in its data warehouses and is expected to increase that capacity by 67 percent in the next year. While the average SMB can be expected to store much less information – 563 terabytes – than an enterprise-class operation, it shows greater growth potential. Across the board, SMBs are prediced to increase their data collection by 178 percent in 2013.
Furthermore, that information presents a great deal of worth to small, medium-sized and enterprise-class businesses. Worldwide, information costs businesses approximately $1.1 trillion each year. The average enterprise spends $38 million each year on its data acquisition, storage and leveraging campaigns. Although SMBs spend, on average, $332,000 per year on data, the cost per employee is higher at $3,670.
The high cost of data loss
Losing or otherwise compromising this information could have a devastating effect on business operations and continuity. This can be especially true when the data in question relates to personal information belonging to clientele. In that circumstance, a company could be faced with diminishing public trust and support, resulting in fewer customers, decreased revenue streams and a falling stock price. It could take years to undo the damage incurred from such an incident. The loss of mission-critical information may have even greater lasting effects, as the functionality of the organization itself could be affected.
The study found that even with the high cost of data loss, companies still struggle to keep their informaiton secure. Sixty-nine percent of survey respondents reported some form of data loss in the past year. There was no one cause for these incidents. Researchers found that hardware failure, cybersecurity breaches, human error and lost or stolen mobile devices all caused data loss.
Securing data with system recovery
Cybersecurity expert Michael Krutikov recently recommended that organizations concerned about maintaining reliable access to their mission-critical information should deploy data recovery solutions to ensure that it does not become lost or compromised. Krutikov noted that although some resource-strapped business owners may not consider system restore technology an essential investment, this viewpoint is quickly becoming incongruent with reality.
"In some cases, smaller manufacturers often have few or no dedicated IT staff members, and they rarely have extra time to pursue any initiative that is not focused on running the day-to-day business," Krutikov wrote. "When there is no immediate visible risk that a data loss incident will occur, it's easy for them to de-emphasize and even neglect regular backups – especially when backup has a historical reputation for being difficult to manage. These incidents happen all too easily and far too often, and manufacturers can no longer run under the premise that backup is not a real need."
With the right system restore application in place, businesses can be sure that even if critical data is lost or deleted, administrators can always roll the system's configurations back to an earlier setting. This way, enterprises and SMBs alike can maintain the valuable information they rely on for day-to-day operations. | <urn:uuid:c01f8b46-261a-4358-b8a4-d1426c0c2a7e> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/safeguarding-valuable-data-with-system-restore | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00307.warc.gz | en | 0.955697 | 762 | 2.625 | 3 |
Phishing continues to be a large and growing problem for organizations of all sizes. As pioneers in the use of simulated phishing attacks, Wombat Security, strongly recommends organizations make anti-phishing education the foundation of their security awareness and training programs. However, it’s also recommended to think beyond the phish to assess and educate end users about the many cybersecurity threats that are prevalent (and emerging) in today’s marketplace. Risky behaviors like lax data protection, oversharing on social media and improper use of WiFi are all dangers in their own right – and could be considering contributing factors to the ever-growing phishing problem.
Wombat Security’s Beyond the Phish Report looks at answers to nearly 20 million questions around nine different topics in Wombat’s Security Education Platform over the past two years to understand what areas end users still struggle with and what areas they are doing better it.
“Clearly, phishing is a focus area across the industry, but the efforts can’t stop there,” said Joe Ferrara, President and CEO of Wombat. “To reduce cyber risk in organizations, security education programs must teach and assess end users across many topic areas, like oversharing on social media and proper data handling. Many of these risky behaviors exacerbate the phishing problem.”
Key findings from the report that show room for improvement include:
- The No. 1 problem area for end users, with 31% of questions missed, is safe social media use; yet only 55% of security professionals assess employee knowledge on this topic.
- Professional services and healthcare employees performed the lowest on the nearly 1 million questions asked about safe passwords.
- Additionally, end users missed 30% of questions about protecting and disposing of data securely, second only to safe social media use.
- While healthcare was the industry that had the highest assessment percentage on end users’ ability to protect confidential information, 31% of questions on the topic were missed by those in the industry.
With the rise in remote working and end users who value the ability to work outside of the office, organizations need to educate their employees on how to stay safe while they are outside the office. However, only 50% of companies are assessing employees about their telecommuting habits. Improper use of free WiFi, inattention to physical security, lax data protections, and the lack of security guidelines during travel led to 26% of questions missed by end users on this important topic.
“This is a topic that should be a part of every security awareness training program, particularly for today’s mobile workforce. Many employees are accessing corporate email and internal systems from mobile devices or remote locations. Do employees understand the risks of connecting to free WiFi networks? Do they know what a rogue hotspot is? Are they using strong passcodes or other locking mechanisms? Do they use VPNs? Do they understand the implications of malicious applications and over-reaching permissions,” Ferrara added.
While there is room for improvement in all risk areas, the report also highlights categories where employees have answered the highest percentage of questions correctly:
- 90% of questions were answered correctly about building safe passwords.
- 85% of questions were answered correctly on how to best protect against physical risks, such as ensuring no one follows you into a secure area or not leaving sensitive files on your desk.
- 79% of organizations assess end users on internet safety, and 84% of the questions in this category were answered correctly.
Wombat Security also surveyed hundreds of security professionals — customers and non-customers — about what security topics they assess on, and their confidence levels in their end users’ abilities to make good security decisions. Of the organizations that participated, 20% were in financial industries, 13% in technology, 11% in healthcare, and others in verticals including manufacturing, professional services, education, insurance, retail, energy, government, telecommunications, and consumer goods. You can download the full report here.
[su_box title=”About Wombat Security Technologies” style=”noise” box_color=”#336588″][short_info id=’61162′ desc=”true” all=”false”][/su_box] | <urn:uuid:a83023f6-ab47-46b4-87ed-02a24d1ccad9> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/beyond-phish-report-2016/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00307.warc.gz | en | 0.948534 | 885 | 2.59375 | 3 |
In VoIP Carrier networks, as in other complex systems, there are two types of problems. I'll call these "Preppy" problems, and "Gambling" problems.
Preppy problems occur when you're at the limits of achievable quality within the tolerable costs.
-- A physical device fails.
-- A generally good algorithm has a memory leak.
-- A hacker finds a way to exploit a defect in your firewall.
-- VoIP through a link with guaranteed prioritization for voice packets occasionally drops voice packets when the link isn't saturated.
Gambling problems happen when you have large risks, but you're typically just lucky.
-- VoIP across the Internet has occasional audio quality.
-- You use the word "password" as your password, and don't usually get attacked.
It's important to make a distinction, because solving them is the responsibility of two different groups:
The Preppy problems are the domain of scientist, researchers, and product developers. Their goal is to push the boundaries of quality, reliability, and robustness.
The Gambler problems are the domain of every engineer or business owner. They are the risks you're taking when you run a business and assemble someone else's components into a product. For example, occasional bad voice quality due to packet drops across the Internet are Gambler problems, because there are engineering ways to prevent that problem. | <urn:uuid:ec08b525-8273-45d1-b900-a44c5c62bcee> | CC-MAIN-2022-40 | https://www.ecg.co/blog/95-gamblers-and-preppies | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00307.warc.gz | en | 0.950734 | 283 | 2.515625 | 3 |
The course introduces students to exploit development in MIPS processor architecture. Exploit development on MIPS processor hasn’t seen the attention that other architectures such as x86 and ARM got. With the growing IoT devices, we have been seeing many embedded devices with MIPS architecture along side ARM. Exploit development is getting harder and harder with exploit mitigation techniques in place. But, the good news is that it is not impossible to write working exploits as exploit mitigation techniques do not fix the underlying problem in the vulnerable source code. This practical training starts with the basics of MIPS Architecture and slowly moves towards writing own shell code and creating working exploits using Return Oriented Programming for a given target binary. To give a sense of real exploitation, real world examples will be discussed with proof of concept exploits. By the end of this training, students will be able to write Memory corruption exploits for MIPS architecture, understand how Return Oriented Programming can be used in MIPS for modern day exploit development and bypass some of the most common exploit mitigation techniques such as ASLR.
Who Should Attend This Training:
- Red and Blue Team members, pentesters
- Anyone interested in MIPS exploitation
- Anyone interested in IoT and embedded device security
- Anyone with knowledge in x86 and/or ARM to take it to the next level
Prerequisites and Requirements:
- Familiarity with debuggers (gdb, WinDBG, OllyDBG or equivalent) is recommended to have but not a must
- Familiarity with command line tools
- Working knowledge of Python or Perl
- A laptop with VMWare Player/Workstation/Fusion installed
- 8GB RAM required at a minimum
- Wireless network card
- 40GB free Hard Disk space
This two day course is divided into 4 high level parts.
- Fundamentals of MIPS covering its assembly language, calling conventions etc.
- Writing MIPS shell code, which can be used later in the training. We will write several different shell code (exit, write, execve, reverse tcp etc) and fine tune them to avoid bad characters such as null bytes.
- Basics of Stack Based Buffer overflows and exploiting them in MIPS.
- Bypassing exploit mitigation techniques using ROP gadget chaining and avoiding commonly seen problems such as cache in coherency. Finally an Introduction to heap based vulnerabilities.
- Introduction to MIPS Architecture
This section covers the fundamentals of MIPS Architecture, covering calling conventions, instruction formats, CPU Registers etc.
- An overview of QEMU MIPS setup
This section covers how the lab set up is done and explains how students can setup their own setup to debug MIPS applications. This section also covers cross compiling and MIPS binary emulation on x86 for debugging.
- MIPS compared to x86 and ARM
This section covers how MIPS is different from other processor architectures such as x86 and ARM. We will also discuss the similarities with other architectures. Eg. Both ARM and MIPS have 32 bit length instructions(excluding thumb mode instructions). Similarly, we will compare how MIPS is harder to avoid bad characters compared to ARM.
- Basics of GDB
- Basics of MIPS assembly language
The above two sections let the students run through MIPS assembly programs and students will debug the programs using GDB. This will give them the details of MIPS assembly as well as the taste of GDB if they never used GDB.
- Introduction to Memory corruption attacks
This section covers the fundamentals required to understand memory corruption attacks such as Buffer Overflows. This will provide generic details of buffer overflows, which will be practically used in the next section. This section also gives a high level overview of commonly seen challenges specific to MIPS.
- Debugging MIPS Binaries
This section lets the students to run through steps such as crashing the applications, getting control over the return address, executing shellcode taken from the internet. This will also give a taste of analyzing core dumps to perform crash dump analysis.
- Writing MIPS shell code
This section discusses the details of how one can write MIPS shellcode from the scratch.Students will write different varieties of shellcode (execve, reversetcp)that works. To provide proper understanding of concepts such as syscalls, we will begin with simple shellcode such as exit and write. We will then move towards writing shellcode that can be used in exploits.
- Avoiding Bad characters
This section discusses how bad characters can break shellcode and techniques to avoid bad characters while writing shellcode in MIPS.
- Stack based Buffer Overflows in MIPS
This section re-visits stack based buffer overflows with the new shellcode students have written. We will enable memory protections to demonstrate that existing exploit will fail and then proceed to the next sections of the training.
- Ret2Libc in MIPS
This section discusses how Ret2libc can be used to invoke system() function and obtain a shell.
- Dealing with MIPS cache coherency
This section talks about cache coherency which usually breaks the exploits in MIPS. we will discuss techniques such as flushing the cache using return oriented programming. We will discuss the basics here and put it to use in Return Oriented Programming section.
- Exploit Mitigation techniques
- Return Oriented Programming
- Bypassing ASLR
The above three sections discuss various memory protection techniques and how they work. We will then discuss the loop holes in memory protection techniques and why they are not foolproof. We will discuss some of the techniques to bypass ASLR such as ROP, brute forcing and memory leaks.
- Introduction to Heap overflows in MIPS
This section of the training only provides an introduction to Heap overflows by solving a simple crack me. | <urn:uuid:99932ffa-0299-4ce1-a980-ba547ce95cf9> | CC-MAIN-2022-40 | https://nanosec.asia/nsc2019/trainings/it04/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00507.warc.gz | en | 0.881157 | 1,217 | 2.6875 | 3 |
hen it comes to security technology, the engineers, programmers, and designers behind it all are constantly looking for ways to up the ante. Whether this means improving the ease of use for end-users or just finding new ways to make their tech more secure, there’s always room for improvement. One principle many involved in the field are now focused on is multifactor authentication—wherein more than one form of authentication must be presented to access a system, like typing in a password but also having to answer a security question for your bank’s website. Multifactor authentication is an idea that can be widely applied across a number of security technology disciplines, so let’s look at the how and the why behind it all.
Why Use Multifactor Authentication?
With any form of authentication, there’s a way to fake it. From biometrics to key cards, just about everything on the market is susceptible to being copied or faked. That doesn’t mean these tools are useless or that they’re necessarily easy to bypass, but it does mean that we must operate on the assumption that someone out there is going to try their hardest. Instead of relying on a single authentication tool, multifactor authentication is all about diversifying methods of authentication so that you aren’t reliant on any single method. The more redundant a system is, the less likely it is that someone will be able to crack the code and access something they shouldn’t.
What Are Some Applications?
Now that we have a basic definition, it’s likely that you’ve already encountered multifactor authentication before. Your workplace may require you to swipe an access card at one door and then enter a key code at the next. Or, as mentioned above, you may have experienced multiple forms of authentication being required to access things like online banking or other financial-related web services. However, when developing security technology, this idea has become more widespread, both in terms of cybersecurity as well as physical security. Here at Gatekeeper, multifactor authentication is something we take seriously—which is why our entire suite of vehicle inspection technologies are designed to work together in a safe, secure harmony.
Vehicle Inspection Security With Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to take action. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 30 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company. | <urn:uuid:e3f6c412-e35a-4ac6-9d0a-a4bb16de3753> | CC-MAIN-2022-40 | https://www.gatekeepersecurity.com/blog/multifactor-authentication-critical-security-technology-development/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00507.warc.gz | en | 0.935562 | 589 | 2.625 | 3 |
A little more than a decade ago, digital forensics professionals had a much simpler task in front of them: examine the computer or computers of the suspect, look at the suspect’s incoming and outgoing emails, and ensure that each piece of relevant data is collected, identified and analyzed with the right tools in the right way.
When cell phones entered the scene around 2008, the digital forensic professional’s methodology was challenged. Hard drive forensics tools weren’t designed to work on mobile devices. And, this was only the beginning of the changes and challenges that were ahead for the unsuspecting digital forensics expert.
Today, ancient technologies such as faxing and email are being replaced with instant messaging and social media platforms such as Facebook, Twitter, SnapChat, TikTok, etc. Data moves at a rapid pace across many different types of mobile phones, tablets, cloud-based email and storage, game consoles, IoT devices and wearables. According to a recent report published by the World Economic Forum and Raconteur, the astonishing amount of data transmitted daily includes: 500 million tweets, 294 billion emails, four petabytes of data on Facebook and 65 billion messages sent via WhatsApp. By 2025, it is estimated that 463 exabytes of data will be created globally, each day.1,2
Because of this, digital forensics experts are struggling to employ the Daubert Standard—the standard used in all expert testimony within the United States–to ensure the testimony itself is based on a reliable methodology.
On newer social media platforms, data is being deleted, altered, moved, and housed in other countries or on cloud platforms where it is more difficult for forensic experts to defensively acquire a pristine copy when they need it. The key to digital forensics is not about showing data; it’s about collecting and preserving it in a way that is defensible and admissible in court. So, the Daubert Standard suggests that preservation and collection of data must be conducted in a way where if somebody with the same skill sets and tools followed the same procedure, they would get the same results. It’s designed to ensure that forensic experts maintain independence along with law abiding methodologies.
Some forensic experts may fall into the trap of believing that since they’re being paid by one side, they have to find that “smoking gun,” or nugget of discovery that will help them prove their worth. The Daubert Standard, in essence, encourages independence. Professional forensics experts should be just as proud to find nothing as they are to find evidence—because they aren’t working for one side or the other, but instead unveiling, acquiring and preserving potentially relevant data that remains uncompromised.
New media. New cloud frontiers.
Generally, when data is stored on new media or in the cloud, it is not backed up in a way that’s conducive to an investigation. Vast data repositories with treasure troves of potentially relevant nuggets are housed outside of the digital forensic expert’s care, custody and control. Forensics experts will attempt to gain proper credentials to access these areas legally. This is not always easy on Facebook, Twitter, Office 365, Dropbox and many other well-known cloud environments. Forensics experts often find that by the time they are granted legal access, a great deal of potentially relevant data has already been purged and is unrecoverable or lost in the cloud.
These platforms all have different levels of log-ins and connections. As a professional forensic expert, you would not, for instance, lie about your identity to try to gain access to the suspect’s social media account. Nor would you hack into an account or bend the rules in any way that would fall outside the guiding principle of “what a reasonable person would do under similar circumstances.” When you’re classified as an expert witness, that is because a judge believes in your standards, qualifications, general practices and in your ability to provide defensible, independent evidence.
Because data is everywhere, it’s important not to focus on just one repository. Creative forensics experts unable to legally access the suspect’s social media accounts could instead put a timeline or story together based on cell phone records. If the suspect sent messages by phone, they were going to somebody—which means there are tower logs and many other legal and effective ways of generating leads.
The good, the bad and EDRM
Since electronic data is very different from paper information, because of the intangible nature, sheer volume, impermanence and ability to carry metadata, the
Electronic Discovery Reference Model (EDRM) was developed. Basically, EDRM is the discovery reference model that outlines the process involved in a proper e-discovery engagement. The process includes: identification, preservation, collection, processing and review. So, from the first time an expert arrives on the scene, all the way through to production of data in court, there should be a full chain of custody that protects everything documented. Professionals who follow EDRM can honestly claim that their collected data was never out of their sight. Forensics experts who aren’t professionals might collect data onsite, put that data into their cars and drive it away. One would hope they would go directly to their lab, but if they are careless, or not following procedure, they might stop for lunch, pick up their kids from school, or run other errands that could compromise the data. When asked, these forensics experts could not guarantee that the data in their custody wasn’t altered. Compromised evidence needs to be thrown out, which would spoil their whole procedure. If a forensics expert ever tampers with or taints evidence—even if it’s accidental—his or her career could be ruined. That’s why it’s extremely important to follow proper procedure, standards and collection.
Best practices for a successful digital forensics engagement
At the start of a digital forensics assignment, digital forensics experts will first want to sit with their client and get a topology, or overview, of the facts. They will start by asking:
What is relevant?
What are the timelines?
What are the repositories?
Who are the targets?
Who are the custodians?
The target is the person/persons to potentially go after, and custodians are people who may have some evidence or relevant information.
Look at the narrowest area first. This is also known as, “early case assessment.”
Often, within two days, professional forensics experts should be able to collect and process enough data to present a findings memo. This will show the experts if they are on the right track, or identify if the target is still accurate. In some lucky cases, the entire engagement can be concluded within this short period of time.
Consider working with an outside firm to ensure your digital forensics professionals are truly independent. Anything that might impede independence always has to be evaluated. Look for forensics experts that are not only credentialed, but also participating members of the forensics industry. When hiring an expert who has written industry rules or guidelines, you will be able to leverage professional guidance that is based on the most up-to-date regulations, standards and practices. Steer clear of anyone in forensics who acts like a cowboy. It is critical that your forensics experts follow the laws, rules, standards and guidelines to the letter.
It’s also important to hire an organization that has experience with the appropriate technology. There are a lot of people who can do forensic imaging of a Windows Workstation, but they have no real experience of Linux, Unix, MacBook, iPhone or a cloud repository. Some forensics experts are very good on-premise, but they don’t know how to handle Dropbox. So, when hiring forensics professionals, you will want to ensure they can cover all of your potentially relevant repositories that exist across your vast cloud landscape. | <urn:uuid:b410b2be-633f-4223-8e5b-41e20af6cecb> | CC-MAIN-2022-40 | https://cbisecure.com/insights/why-todays-digital-forensics-can-feel-like-the-wild-west-and-what-to-do-about-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00507.warc.gz | en | 0.948147 | 1,648 | 3.015625 | 3 |
Today marks the 43rd anniversary of Earth Day, which is intended to inspire awareness and appreciation for our environment. Originally created in response to a massive oil spill in waters near Santa Barbara, California in 1969, this day is one where people all over the world can take tours of recycling centers, pick up free compost, participate in clean-ups of local parks, and become more informed about ways we can reduce our carbon footprint .
(Quick tip: Tell your boss that by working from home today you’ll save emissions from your car from sitting in traffic. According to the Census bureau, the estimated time for workers commuting to work continues to increase. You’re welcome).
So what does this mean for technology? In can be hard sometimes to reduce our carbon footprint amidst an explosion of information and knowledge sharing. All of this data has to go somewhere. With server rooms growing rapidly, how can we be conscious of the environment while not limiting productivity?
Cloud computing has long been hailed as one of the ways in which organizations all over the world can become more green. In fact, it’s the topic of an interesting white paper Microsoft sponsored with Accenture: Cloud Computing and Sustainability: The Environmental Benefits of Moving to the Cloud.
Here is the most telling statistic: “This study’s finding that companies can reduce their carbon emissions by 30 to 90 percent by switching to a cloud infrastructure is certainly impressive. As impressive as these numbers are, the cloud’s efficiency is likely to improve even more over time. Cloud computing is rapidly expanding; demand is increasing and providers are ramping up extra servers to meet predicted future capacity requirements. As more customers become cloud users, greater economies of scale will be reached and cloud providers will be able to more accurately predict capacity for computing demand.”
So how can you make the move to utilizing the cloud today?
Hybrid on-premises infrastructure and storage, with cloud storage for data offloading
Moving to the cloud may not always be an option due to concerns over compliance with corporate or industry policies. For example, it may be a requirement to store customer information in on-premises storage and prevent the shipment of this information on hard drives or via File Transfer Protocol (FTP) to migrate into cloud services. A hybrid setup using the cloud for offloading certain data means saving on storage expenses without losing the security and privacy for sensitive data.
Location may be another reason to keep infrastructure and storage on premises. Since cloud resources are accessed via the internet, poor bandwidth results in degraded quality of service. Having an on-premises setup provides users with the necessary resources for higher productivity. In this case, cloud storage can be used as a more economical storage option for backup and archive data – the data transfer can be planned for non-business hours so as not to impact service.
Hybrid (on-premises and cloud) infrastructure and storage
While compliance with corporate or industry policies may dictate that sensitive data, such as customer information, must remain on-premise for some, others may not feel secure with corporate data residing outside the enterprise’s walls. Since cloud hardware is maintained by hosting providers, non-employee IT administrators possess a high level of access and control over the information. Hybrid setups with both on-premises and cloud infrastructure and storage give organizations the control of on-premises with the flexibility provided by cloud solutions:
· On-premises intranet with cloud-hosted extranet: Maintain control over internal information with the traditional on-premises deployment, but provide your customers with the uptime offered by cloud hosting providers without the hassle of hardware maintenance.
· On-premises production deployment with temporary cloud-hosted test and development environments: Spin up on-demand testing and development environments removes the cost of maintaining up to date hardware that idles when not needed.
· On-premises deployment with cloud-hosted partner portals: Collaborate with partners through cloud portals paying only for the users involved in the projects without worrying about security risks.
All-in cloud deployment
With everything hosted on the cloud – infrastructure, operating systems, and applications – hardware maintenance lies with the hosting provider, allowing organizations to focus IT efforts on improving services. While data is not stored on-site, cloud hosting providers already have high security measures in place to ensure the safety of their customers’ data.
For more information and resources on taking advantage of all that Microsoft’s cloud has to offer, visit our microsite. And happy Earth Day from all of us at AvePoint! | <urn:uuid:71fa8684-d32a-44c3-8767-105a3634b299> | CC-MAIN-2022-40 | https://www.avepoint.com/blog/protect/earth-day-2013-reducing-your-carbon-footprint-by-embracing-the-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00507.warc.gz | en | 0.924839 | 937 | 2.984375 | 3 |
Cloud Basics For Beginners
Cloud computing is a term we hear quite often, but there are very few people who understand what it’s all about. You would argue that whatever technology this is, it is probably out of your world or too complex. In reality, cloud computing is a simple technology that has been around for a while, and almost all of us have used it, without even knowing. In simple terms, cloud computing entails running computer/network applications that are on other people’s servers using a simple user interface or application format. It’s that simple.
If this language still sounds strange, going back to basics will tell you something about what the cloud computing basics is all about. In the olden days of networking, way before Google or Yahoo was born, companies ran e-mail as an application whose data was stored in-house. As such, all the files, documents, messages, and other things you currently use in e-mail were stored in a safe, dark room on the company’s premises. These sounds familiar because you were probably banned from visiting that room due to security reasons.
The Cloud Basics
Moving forward into the 20th century, when companies like Google started showing up, the way e-mail was treated and utilized was revolutionized. It would have been a commercial bid to get more subscribers, but these companies chose to open their servers to store e-mail information for you, free of charge. However, to access that data, you have to use their applications like Gmail, Yahoo Mail, and so many others. Practically, this is what cloud computing is all about – using other people’s servers to run applications for your organization, remotely.
Cloud Computing Is Bigger
Into the 21st century, the concept of the cloud is the same, but more than ever before, cloud computing is bigger. It’s now becoming possible to use bigger applications that will leverage your business goals and functions easily in the cloud. For example, with cloud computing, you can run all your computer networks and programs as a whole without ever buying an extra piece of hardware or software.
The cloud technology has many benefits and that would explain its popularity. First, companies can save a lot of money; second, they are able to avoid the mishaps of the regular server protocols. For Instance, when a company decides to have a new piece of software, whose license can only be used once and it’s pretty expensive, they wouldn’t have to buy software for each new computer that is added to the network. Instead, they could use the application installed on a virtual server somewhere and share, in the ‘cloud’.
These capabilities are becoming even more personalized today, and there are even a few solutions that allow you to use mobile in the cloud. Of course, there are very few people who aren’t willing to lose control of the little physical tools they are used to (like the dark server room); however, largely, any business that considers cutting costs and wants to move forward in this dynamic age needs to embrace the cloud computing basics, or at least give it a shot to survive.
By Gregory Musungu
The ‘Cloud Syndicate’ is a mix of short term guest contributors, curated resources and syndication partners covering a variety of interesting technology related topics. Contact us for syndication details on how to connect your technology article or news feed to our syndication network. | <urn:uuid:4f784879-0973-439d-b9a4-50871011b95d> | CC-MAIN-2022-40 | https://cloudtweaks.com/2012/08/cloud-basics-for-beginners/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00507.warc.gz | en | 0.96441 | 710 | 2.9375 | 3 |
What is a Botnet?
Botnets are networks of hijacked computer devices used to carry out various scams and cyberattacks. The term "botnet" is formed from the words "robot" and "network." Assembly of a botnet is usually the infiltration stage of a multi-layer scheme. The bots serve as a tool to automate mass attacks, such as data theft, server crashing, and malware distribution.
Botnets use your devices to scam other people or cause disruptions — all without your consent. You might ask, “what is a botnet attack and how does it work?” To expand this botnet definition, we’ll help you understand how botnets are made and how they are used.
How a Botnet Works
Botnets are built to grow, automate, and speed up a hacker’s ability to carry out larger attacks.
One person or even a small team of hackers can only carry out so many actions on their local devices. But, at little cost and a bit of time invested, they can acquire tons of additional machines to leverage for more efficient operations.
A bot herder leads a collective of hijacked devices with remote commands. Once they’ve compiled the bots, a herder uses command programming to drive their next actions. The party taking command duties may have set up the botnet or be operating it as a rental.
Zombie computers, or bots, refer to each malware-infected user device that’s been taken over for use in the botnet. These devices operate mindlessly under commands designed by the bot herder.
Basic stages of building a botnet can be simplified into a few steps:
- Prep and Expose — hacker exploits a vulnerability to expose users to malware.
- Infect — user devices are infected with malware that can take control of their device.
- Activate — hackers mobilize infected devices to carry out attacks.
Stage 1 exposure starts with hackers finding a vulnerability in a website, application, or human behavior. The goal is to set the user up for being unknowingly exposed to a malware infection. You’ll commonly see hackers exploit security issues in software or websites or deliver the malware through emails and other online messages.
In stage 2, the user gets infected with the botnet malware upon taking an action that compromises their device. Many of these methods involve users being persuaded via social engineering to download a special Trojan virus, while other attackers may be more aggressive and use a drive-by download triggered by visiting an infected site. Regardless of the delivery method, cybercriminals ultimately breach the security of several users' computers.
Once the hacker is ready, stage 3 initiates by taking control of each computer. The attacker organizes all of the infected machines into a network of “bots” that they can remotely manage. Often, the cybercriminal will seek to infect and control thousands, tens of thousands, or even millions of computers. The cybercriminal can then act as the boss of a large “zombie network” — i.e. a fully assembled and active botnet.
You're probably still asking, "what does a botnet do?" Once infected, a zombie computer gives the attacker access to admin-level operations, such as:
- Reading and writing system data
- Gathering the user’s personal data
- Sending files and other data
- Monitoring the user’s activities
- Searching for vulnerabilities in other devices
- Installing and running any applications
What Devices Can a Botnet Control?
Candidates for botnet recruitment can be any device that can access an internet connection.
Many devices we use today have some form of computer within them — even ones you might not consider. Nearly any computer-based internet device is vulnerable to a botnet, meaning the threat is constantly growing. To protect yourself, take note of some common devices that are hijacked into botnets:
Traditional computers like desktops and laptops that run on Windows OS or macOS have long been popular targets for botnet construction.
Mobile devices have become another target as more people continue to use them. Smartphones and tablets have notably been included in botnet attacks of the past.
Internet infrastructure hardware used to enable and support internet connections may also be co-opted into botnets. Network routers and web servers are known targets.
Internet of Things (IoT) devices include any connected devices that share data between each other via the internet. Alongside computers and mobile devices, examples might include:
- Smart home devices (thermostats, security cameras, televisions, speakers, etc.)
- In-vehicle infotainment (IVI)
- Wearable devices (smartwatches, fitness trackers, etc.)
Collectively, all these devices can be corrupted to create massive botnets. The technology market has become oversaturated with low-cost, low-security devices, leaving you particularly vulnerable as a user. Without anti-malware protection, bot herders can infect your devices unnoticed.
How Do Hackers Control a Botnet?
Issuing commands is a vital part of controlling a botnet. However, anonymity is just as important to the attacker. As such, botnets are operated via remote programming.
Command-and-control (C&C) is the server source of all botnet instruction and leadership. This is the bot herder's main server, and each of the zombie computers gets commands from it.
Each botnet can be led by commands either directly or indirectly in the following models:
- Centralized client-server models
- Decentralized peer-to-peer (P2P) models
Centralized models are driven by one bot herder server. A variation on this model may insert additional servers tasked as sub-herders, or “proxies.” However, all commands trickle down from the bot herder in both centralized and proxy-based hierarchies. Either structure leaves the bot herder open to being discovered, which makes these dated methods less than ideal.
Decentralized models embed the instruction responsibilities across all the zombie computers. As long as the bot herder can contact any one of the zombie computers, they can spread the commands to the others. The peer-to-peer structure further obscures the identity of the bot herder party. With clear advantages over older centralized models, P2P is more common today.
What Are Botnets Used For?
Botnet creators always have something to gain, whether for money or personal satisfaction. Common motives include:
- Financial theft — by extorting or directly stealing money
- Information theft — for access to sensitive or confidential accounts
- Sabotage of services — by taking services and websites offline, etc.
- Cryptocurrency scams — using users’ processing power to mine for cryptocurrency
- Selling access to other criminals — to permit further scams on unsuspecting users
Most of the motives for building a botnet are similar to those of other cybercrimes. In many cases, these attackers either want to steal something valuable or cause trouble for others.
In some cases, cybercriminals will establish and sell access to a large network of zombie machines. The buyers are usually other cybercriminals that pay either on a rental basis or as an outright sale. For example, spammers may rent or buy a network to operate a large-scale spam campaign.
Despite the many potential benefits for a hacker, some people create botnets just because they can. Regardless of motive, botnets end up being used for all types of attacks both on the botnet-controlled users and other people.
Types of Botnet Attacks
While botnets can be an attack in themselves, they are an ideal tool to execute secondary scams and cybercrimes on a massive scale. Common botnet schemes include some of the following:
Distributed Denial-of-Service (DDoS) is an attack based on overloading a server with web traffic to crash it. Zombie computers are tasked with swarming websites and other online services, resulting in them being taken down for some time.
Phishing schemes imitate trusted people and organizations to trick victims out of their valuable information. Typically, this involves a large-scale spam campaign meant to steal user account information like banking logins or email credentials.
Brute force attacks run programs designed to breach web accounts by force. Dictionary attacks and credential stuffing are used to exploit weak user passwords and access their data.
How to Protect Yourself from Botnets
Considering the threats to the safety of yourself and others, it is imperative that you protect yourself from botnet malware.
Fortunately, software protections and small changes to your computer habits can help.
6 Tips for protecting yourself against Botnets
- Improve all user passwords for smart devices. Long, complex passwords keep your devices far safer than weak, short ones such as 'pass12345' (a minimal strength-check sketch follows this list).
- Avoid buying devices with weak security. While this isn’t always easy to spot, many cheap smart home gadgets tend to prioritize user convenience over security. Research reviews on a product’s safety and security features before buying.
- Update admin settings and passwords across all your devices. You’ll want to check all possible privacy and security options on anything that connects device-to-device or to the internet. Even smart refrigerators and Bluetooth-equipped vehicles have default manufacturer passwords to access their software systems. Without updates to custom login credentials and private connectivity, hackers can breach and infect each of your connected devices.
- Be wary of any email attachments. The best approach is to avoid downloading attachments altogether. When you do need to download an attachment, carefully investigate and verify the sender's email address. Also, consider using antivirus software that proactively scans attachments for malware before you download.
- Never click links in any message you receive. Texts, emails, and social media messages can all be common vehicles for botnet malware. Manually entering the link into the address bar will help you avoid DNS cache poisoning and drive-by downloads. Also, take the extra step of searching for an official version of the link.
- Install effective anti-virus software. A strong internet security suite will help to protect your computer against Trojans and other threats. Be sure to get a product that covers all your devices, including Android phones and tablets.
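As a minimal sketch of the first tip above, the Python function below flags passwords that are short, common, or lacking in variety. The word list and thresholds are illustrative only and are no substitute for a full password policy or a password manager.

    COMMON_PASSWORDS = {"password", "pass12345", "123456", "qwerty", "admin"}

    def is_weak_password(password, min_length=12):
        """Return True if the password is too short, common, or uniform."""
        if len(password) < min_length:
            return True
        if password.lower() in COMMON_PASSWORDS:
            return True
        # Require at least three character classes for extra strength.
        classes = [
            any(c.islower() for c in password),
            any(c.isupper() for c in password),
            any(c.isdigit() for c in password),
            any(not c.isalnum() for c in password),
        ]
        return sum(classes) < 3

    print(is_weak_password("pass12345"))          # True: short and common
    print(is_weak_password("T7!rainy-Meadow42"))  # False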
Botnets are difficult to stop once they've taken root in users' devices. To reduce phishing attacks and other issues, be sure you guard each of your devices against this malicious hijacking.
Conducting surveillance in furtherance of a criminal investigation can be one of the most crucial and beneficial actions a law enforcement officer engages in. In addition to the more traditional definition of surveillance in which a law enforcement officer follows and observes a suspect or observes a suspect from a fixed location, surveillance today can also include observing a suspect or suspects using electronic equipment or by intercepting electronically transmitted information. Successful surveillance operations can yield important information about a suspect including who they interact with, what their day to day schedule is like, and what assets they are in possession of.
In addition to ensuring that all laws are respected, especially a suspect's Fourth Amendment rights, a capable law enforcement officer will need to understand and be able to apply techniques that will lead to a successful surveillance operation. Here we take a look at the different types of surveillance and some of the best practices to keep in mind when conducting them.
The original and most recognized form of surveillance is physical surveillance. Physical surveillance occurs when a law enforcement officer or officers visually observe a suspect or suspects within a certain time frame. Physical surveillance can be long and tedious, and certain considerations should be addressed before conducting it. Are any of the cars the officers may be driving obviously police vehicles? Do the officers and their vehicles look out of place for the neighborhood? How will the surveillance team schedule restroom and food breaks during the operation? All of these issues must be discussed prior to conducting physical surveillance.
Since having video evidence is always preferable to not having it, are the officers conducting surveillance trained in using their agency's recording devices? Are all law enforcement officers participating in the surveillance operation familiar with using their department's video evidence management system? If they are not, it may be worth conducting extra training on these subjects.
Electronic surveillance occurs when a law enforcement officer or officers observe a suspect or suspects via closed-circuit television or another video feed. While electronic surveillance methods are mostly used to monitor large areas, they can also be useful in monitoring specific locations that are known to law enforcement to be areas of criminal activity. Traffic cameras are a perfect example of electronic surveillance tools that are used by law enforcement on a daily basis. Polecams (cameras placed on top of poles) can also be useful when conducting electronic surveillance of fixed areas. For example, a polecam can be set up near a suspect's house to record it, in place of having a law enforcement officer sitting out front.
If this method of surveillance is implemented however, it may be worth investing in video redaction software to protect the identities of children, informants, or undercover law enforcement personnel. Especially with surveillance operations being a catalyst to courtroom proceedings, protecting identities of all people becomes crucial.
Changes and advancements in technology necessitate changes and advancements in law enforcement. It is important for law enforcement officers to be current on all laws regarding telephone and Internet surveillance, and to receive training in proper methods for conducting these types of surveillance operations. The monitoring of information (whether it is voices speaking over the telephone or emails or text messages being exchanged) in real time can be especially tricky to navigate. Title III of the Omnibus Crime Control and Safe Streets Act of 1968 was enacted to ensure that the government could not overreach in listening in on private citizen’s phone calls.
The Electronic Communications Privacy Act of 1986 extended these protections to include electronic data being transmitted by computer. If a law enforcement officer does wish to conduct real-time surveillance of a suspect's phone or Internet use, they will more than likely need to obtain a "Title III" warrant, which can be much more in-depth and harder to obtain than a regular search warrant. Because this type of warrant is considered much more invasive than a typical search or arrest warrant, it may mandate that a law enforcement officer re-submit the warrant after a stated period of time (e.g., 60 or 90 days). The officer should also take into consideration how the information will be captured and/or stored. Does the officer's department or agency have a digital evidence system that is capable of keeping and storing large amounts of digital information? Does the officer's department or agency have access to audio redaction software that may be necessary to protect the identity and/or privacy of any non-suspects? All of these issues must be taken into consideration before conducting telephone and/or Internet surveillance.
We've discussed the need to pre-plan surveillance operations. One of the topics that is easily missed in those planning meetings is post-surveillance considerations. We've mentioned the need for personnel to be familiar with your agency's digital evidence management software. But with any surveillance operation, the handling and management of the corresponding digital evidence is likely a higher, more nuanced priority that may not get the attention it requires. When it comes to ingesting digital evidence into your software, you have to consider file nomenclature. Any surveillance operation is going to produce an unusually large amount of digital evidence, in particular multimedia files, and with that in mind, naming conventions have to be considered, if for no other reason than file organization, ease of access in the future, and efficient recall for FOIA requests, courtroom introduction, and even public information events like press conferences. Each surveillance operation should have its own file nomenclature that each digital evidence file follows. It could be the operation name, followed by a case number (either an all-inclusive case number or, for segmented charges/responsibilities, the respective case number for each file), along with the date and start time. There could be other methods of organizing by file nomenclature, but the idea is to have one way of naming each piece of digital evidence, so that all parties involved in the surveillance operation, and all parties involved in the future, can find the digital evidence, recall it for a multitude of purposes, and disseminate it on demand when required.
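As an illustrative sketch only (actual conventions should follow your agency's policy), the small Python helper below builds one such file name from an operation name, case number, and recording start time; the operation name and case number shown are invented:

    from datetime import datetime

    def evidence_filename(operation, case_number, start, extension="mp4"):
        """Build a consistent name: OPERATION_CASENUMBER_DATE_STARTTIME.ext"""
        stamp = start.strftime("%Y%m%d_%H%M")
        return f"{operation}_{case_number}_{stamp}.{extension}"

    # Hypothetical operation and case number, for illustration only.
    print(evidence_filename("OPFALCON", "22-04871", datetime(2022, 3, 14, 6, 30)))
    # -> OPFALCON_22-04871_20220314_0630.mp4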
Another post-operation consideration is retention of digital evidence. It stands to reason that once the statute of limitations expires on a given case, while some digital evidence will need to be preserved beyond that date, there will be some that is not subject to extended retention periods, and removal of that digital evidence should be reviewed by lead personnel from the surveillance operation, not just your digital evidence management personnel. The surveillance personnel should know the status of the case linked to their operation, and if they don't, adding this responsibility ensures they will. They don't have to be in control of disposing of the evidence, but theirs should be one of the signatures captured in the process of removing surveillance-related digital evidence.
Information obtained while conducting surveillance can mean the difference between the conviction of a criminal and that criminal going free. Whether it is accomplished through physical, electronic, or telephone/Internet monitoring, law enforcement officers must be up to date with all laws pertaining to surveillance and must have the proper training and experience in order to conduct an effective surveillance operation.
Police departments and other law enforcement agencies must provide their personnel with the proper training and equipment that is necessary to perform proper, lawful, and effective surveillances no matter what type of surveillance needs to be conducted. By doing so, these various departments and agencies can ensure that their officers are prepared to meet any challenges that may arise during the lifecycle of their case and will help to avoid unnecessary roadblocks to success. | <urn:uuid:8523a613-decd-4309-9d55-45b666347f80> | CC-MAIN-2022-40 | https://caseguard.com/articles/surveillance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00507.warc.gz | en | 0.9407 | 1,553 | 2.625 | 3 |
Resilience is all about the ability to recover quickly when faced with a challenge. For businesses, resilience is often tied directly to business continuity, where professionals are tasked with ensuring an organization can quickly adjust, adapt, respond, and recover from disruptions and disasters.
Today, with an increasing number of successful cyber breaches (like ransomware attacks) making headlines, resilience is often discussed in terms of cyber resilience. For many organizations, it’s a new and imperative focus for executives and investment in resources, staffing, and tools.
But when you hear the term “cyber resilience,” what does it entail and what does it mean for your operations? And, more importantly, if you’re not doing so already, how do you directly connect your cyber resilience goals with your business continuity program?
First, what is cyber resilience?
In short, cyber resilience is your ability to understand your cyber risks and make plans that anticipate the "what ifs," so that if you experience a cyber event you can successfully stop its spread or impact, adapt to your changing environment, and then recover, returning to normal operations as soon as possible.
MITRE, a nonprofit organization that runs federally funded research and development centers, points out that to date there is no single authoritative definition of cyber resilience (yet). Instead, it draws on seven key areas when defining cyber resilience: national security, critical infrastructure, critical infrastructure security and resilience, Department of Defense (DoD) cybersecurity, network engineering, resilience engineering, and Homeland Security. From all of these it pulls keywords such as anticipate, withstand, recover, and adapt.
That sweeping — but direct — focus should be at the heart of all of your cyber resilience planning.
Cyber resilience may be more critical to organizations than ever before. Why? Because it’s how your organization can anticipate, plan for, mitigate, respond to, and recover from cyber events. As we have mentioned in several other blogs here at Castellan, when it comes to resilience, our approach should no longer be about if we experience a disruption or disaster — but when.
We often anticipate the when in our disaster recovery plans and even in our everyday life. But historically, in the IT / Security organization, we’ve been conditioned to believe 100% prevention is the only acceptable business position.
Yet the when mindset certainly rings true for cyber resilience. We see increasing data that when it comes to cyber events, all industries are moving closer to the when scenario and further away from the if. Acknowledging that a breach is likely (whether it be your own IT failure or a third-party provider's) sets us up to focus on what matters from that point on: how we respond.
Case in point: More than 37 billion records were exposed through cyber breaches in 2020. While the total number of breaches was down from the previous year, the number of record exposures alone was up 141% compared to 2019.
Not only are record exposures increasing, but also the nature of the attacks are changing, too. Ransomware continues to be a growing focus for attackers and in 2020, there was a 100% increase in ransomware attacks compared to the previous year. That’s likely reflected in the number of successful attempts spurred by the sudden influx of teams around the world moving into remote work environments as a result of the coronavirus pandemic.
It's not surprising then to see more organizations giving cyber resilience increased attention. In one of our recent webinars, we asked participants about their organization's cyber resilience focus.
Unfortunately, just shy of 10% told us it's not a priority at all, or that it only becomes one when incidents happen.
As we’re getting more focused on cyber resilience and engaging in more cyber resilience conversations with our customers, we are learning that some struggle to understand the nuances between cyber resilience and cybersecurity. Some organizations mistakenly think that just because they have implemented a cybersecurity program, they can simultaneously check the box for cyber resilience.
There are, however, some key differences between cyber resilience and cybersecurity. Think of cybersecurity as your defense to protect your organization from a cyber event. It’s the way your organization looks for all of its weaknesses and vulnerabilities and makes plans to shore them up to prevent an attack. For example, you might have an antivirus program installed on all of your devices to decrease the chance of infection. That’s a cybersecurity measure.
Cyber resilience is more about the day-to-day how you do business. It’s how you mitigate the impact of an attack on your organization—not just on your core systems and data, but also on all of your operational functions and brand reputation as a whole. You add a cyber resilience component to your business continuity program so that your organization knows what to do if you experience a cyber event, how to stop it, and what you need to quickly adapt, recover, return to business as usual, and prevent a similar event from happening again.
As we’re seeing an increase in both cyber-attack attempts and successful attacks, we’re understanding that even the best, most well thought-out security measures can’t always stop an attack. We cannot approach cyber resilience comfortably with an “it can’t happen to us” mindset. Instead, we must build a culture of resilience throughout our organization, one where cyber resilience is woven into the overall corporate ethos, regardless of disruptive event type or disaster.
As a good rule of thumb, your cyber resilience approach should include these core areas:
Security management and information security management
These are the controls and policies your organization uses to protect your information technology assets, but can also include physical security features, such as card control access and locked server rooms.
Data protection, including backups, restore processes, and Disaster Recovery as a Service (DRaaS)
These are your processes to ensure you have reliable, geographically diverse back-ups for all of your infrastructure and critical data and systems.
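As a simplified illustration of how such processes can be monitored (the four-hour recovery point objective here is purely an example), a script like the following could flag a backup that has fallen out of tolerance:

    from datetime import datetime, timedelta

    RPO = timedelta(hours=4)  # illustrative recovery point objective

    def backup_within_rpo(last_backup, now=None):
        """Return True if the newest backup is recent enough to meet the RPO."""
        now = now or datetime.utcnow()
        return (now - last_backup) <= RPO

    if not backup_within_rpo(datetime(2022, 9, 30, 1, 0)):
        print("ALERT: newest backup is older than the RPO - investigate.")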
These are the steps you take to build a culture of resilience, not just within your organization with your employees, executives, and key stakeholders, but also with your customers and the public at large.
Business impact analysis
These are the processes you undertake to evaluate the impact of a disaster or disruptive event on your operations. They can also help you identify your critical assets and functions.
You should conduct internal and external penetration tests, also known as pen tests, to determine if your controls are working as designed and to identify gaps or other security issues before a breach.
Stress testing may be helpful in resilience because it can help your organization determine how well your systems and other processes perform under intense loads or pressure.
System hardening and Zero Trust
System hardening includes all of the tools and processes you have in place to decrease weaknesses in your operations, including your IT infrastructure, systems, apps, and more. For example, you may choose to establish a Zero Trust policy, an approach that eliminates implicit trust from your environment and instead requires authentication for every access request.
Incident plan strategy testing and exercising
Incident plan testing and exercising are processes that you can take to ensure that your plans are effective and work as you intended. By routinely exercising and testing your resilience plans, you can identify gaps or areas of deficiencies and fix them before an actual event. These testing and exercise processes can also help you mature your resilience and business continuity programs as your organization evolves. Here’s a tip: As your testing matures, leverage a blended threat scenario in your exercises. For example, global pandemic, remote workforce, and a cyber threat attack.
Incident management
Your incident management processes should include the gamut of plans, policies, and processes to address incidents as they happen, learn from them, and make plans for improvement and modification, including attack control, restoration, crisis communications, and more.
Reputational management
Cyber events and other disasters and disruptions don't just affect your employees and vendors; they can also have a negative impact on your customers and your brand identity. That's why it's important to include reputational management in your resilience planning, including how you respond, with pre-approved, clear communications during and after an event. Through preparation, when you have a high-profile event, headlines can read "Company X restores services only hours after major cyber-attack" instead of "Weeks after cyber-attack, Company X has yet to restore all services".
Cyber resilience brings a wide range of benefits and can help your organization become more resilient as a whole. It empowers your organization with all the tools you need to anticipate, protect against, detect, defend against, recover from, and adapt to an attack.
Not only do these processes help improve your information technology system security, but they can also help you mitigate the financial and reputational damage from an event, and help ensure you're meeting your recovery time objectives (RTOs) and recovery point objectives (RPOs) for minimal impact on your ability to deliver your core services.
Cyber resilience is also a key component of meeting all of your compliance, legal, and regulatory standards and can help decrease the likelihood of an audit or a non-compliance event that could result in legal action, fines, and penalties.
By building resilience into your organizational culture, you’ll soon find that your entire team understands their individual roles in ensuring your organization is safe and protected, which in turn will help build trust among your customers and the public, helping you scale and grow.
If you haven’t yet built cyber resilience into your business continuity program, now is the time. You can do this even if you’re a small team or have limited resources.
Consider adopting a cyber resilience framework. It can help you establish processes, policies, controls, and more, all while helping you attain the level of resilience you need today and in the future.
Need help understanding what cyber resilience looks like for your organization and how you can work it into your existing business continuity and disaster recovery plans?
Consider working closely with a resilience advisor such as Castellan. We can help you enhance your existing team and processes or help build, manage, and maintain your program for you.
Have questions about cyber resilience? Contact Castellan today. We’ll be happy to show you how we can help.
Watermarks are special collections of data embedded into class files using steganography techniques and used for identification. This data can contain any kind of information, but it is usually used to identify the owner of the application. For example, you can generate a separate build for every client and embed data about that client in it. If you someday find your product on a warez site, you will be able to reveal (with the help of Allatori's utility) which of your clients helped the pirated copy of your application spread. It must be emphasized that watermarks are admissible as evidence of copyright in court. Because Allatori has all the functionality needed to work with watermarks, you can feel secure: watermarks are considered a great weapon against pirates and those who help them.
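To illustrate the general idea (this is a toy sketch, not Allatori's actual embedding scheme), the Python snippet below hides a client identifier in the least significant bits of a byte buffer and recovers it again:

    def embed(data, message):
        """Hide message bits in the least significant bit of each data byte."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        for i, bit in enumerate(bits):
            data[i] = (data[i] & 0xFE) | bit

    def extract(data, length):
        """Recover a hidden message of `length` bytes."""
        out = bytearray()
        for b in range(length):
            byte = 0
            for i in range(8):
                byte |= (data[b * 8 + i] & 1) << i
            out.append(byte)
        return bytes(out)

    buffer = bytearray(64)      # stand-in for spare bytes in a class file
    embed(buffer, b"client42")  # watermark identifying a specific client
    print(extract(buffer, 8))   # b'client42'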
Modern enterprises face security threats on a number of fronts. From DDoS attacks to malware and data theft, no organization is too large or too small to take the issue of security lightly.
While most breaches occur when a hacker penetrates a firewall or intercepts data in transit, this usually happens only after a password has been cracked or malicious code has been unwittingly let inside secure infrastructure. And to do that, most hackers exploit perhaps the most common IT service of all: e-mail.
E-mail’s status as the preferred attack vector of choice is nothing new and has proven to be financially rewarding for actors in the field. Targeted phishing attacks and other scams are typically the easiest way to breach an organization’s defenses, particularly now that security has emerged as a top priority following the much-publicized string of major data thefts in the past few years. Through sophisticated social engineering and messages disguised as those originating from legitimate sources, the cyber underground can circumvent even the most elaborate security regime to gain access to all manner of confidential information or introduce viruses and/or data scraping bots that can operate for months, even years, before they are discovered.
According to the Ponemon Institute, the threat from e-mail-based cyberattacks is growing. The group reports that nearly a quarter of people regularly open phishing e-mail, which in itself does not usually trigger an attack or data theft through ransomware. What's worrisome is the fact that 10 percent will click on a malicious link or open a weaponized attachment. This means that an attacker who sends only 10 e-mails has roughly a 65 percent chance (1 - 0.9^10) of scoring at least one hit. This is in large part why the average business loses some $3.7 million per year to phishing scams.
And this is likely to get worse as the tools available to hackers on the dark web and elsewhere become more advanced. Using modern data mining techniques and AI-driven technology, fake e-mails are becoming increasingly difficult to spot, containing all manner of personal information that can fool even the most vigilant knowledge worker.
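A tiny example of the kind of heuristic that filtering relies on (greatly simplified, with all addresses invented for illustration) is checking whether a message's Reply-To address silently diverges from its From address, a common pattern in phishing:

    from email import message_from_string
    from email.utils import parseaddr

    raw = (
        "From: IT Support <support@example-corp.com>\n"
        "Reply-To: helpdesk@suspicious-host.example\n"
        "Subject: Password reset required\n"
        "\n"
        "Click here to keep your account active.\n"
    )

    msg = message_from_string(raw)
    _, from_addr = parseaddr(msg["From"])
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))

    from_domain = from_addr.rsplit("@", 1)[-1]
    reply_domain = reply_addr.rsplit("@", 1)[-1]

    if from_domain != reply_domain:
        print("Suspicious: From domain", from_domain,
              "differs from Reply-To domain", reply_domain)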
To help shore up vulnerable e-mail infrastructure and fight back against e-mail cyberattacks, CBTS has created the Advanced E-mail Security Services platform. It provides all the necessary filtering to weed out infected spam, fake e-mail, and targeted attacks. At the same time, it delivers enhanced business continuity and cloud options to ensure high availability and continued e-mail service even in the event of a main server failure.
The program provides three tiers of protection designed to meet the unique needs of individual enterprises.
One of the key aspects of e-mail security is transparency. Without that ability to peer into the workings of the e-mail environment, enterprises are left guessing as to what is happening and what level of risk they are experiencing. As part of its managed program, CBTS provides detailed reports documenting the health of e-mail systems and related security trends that may affect future performance. There is also a read-only access option to the platform, allowing users to view real-time dashboard information of overall system heath. In addition, custom reporting can always be configured to suit unique requirements.
Security solutions should also work quietly behind the scenes, so as not to disrupt critical business functions. All e-mail security services integrate seamlessly into existing CBTS operational processes, including ticket-tracking for issues generated with the security platform, as well as chronic event reporting and incident response up to and including those requiring customer contact.
In this day and age, e-mail is an essential business tool. Without the ability to effectively thwart intrusion, however, it can easily become your biggest problem. By delivering industry-leading software as an integrated managed service, CBTS not only provides world-class protection of critical e-mail assets, but backs it up with certified technical expertise, ongoing monitoring, management, and support, and even data migration as necessary.
With a secure e-mail environment in place, the enterprise not only protects itself but its employees, investors, partners, and perhaps most importantly, its customers. To learn more about CBTS E-mail Security and Data Protection, download the related infosheet.
Contact us for information on how CBTS can help protect your organization from e-mail cyberattacks. | <urn:uuid:8beb638f-835d-4d1e-9b62-3a0f5d66430c> | CC-MAIN-2022-40 | https://www.cbts.com/blog/protecting-your-most-vulnerable-cyberattack-vector-e-mail/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00707.warc.gz | en | 0.945577 | 883 | 2.671875 | 3 |
There are many different ways in which you can answer the seemingly simple question: “What is an API?”
Previous discussions focused on the technical fundamentals (“it has to be networked and reusable”) and on the bigger picture (“it’s a delivery mechanism for a product”).
Today, we’ll look at the functional essence of what an API is, how that essence comes into existence, and who it is for.
What is an API? It’s a language!
At a fundamental level, APIs can be compared to languages: an API is a communications mechanism that allows applications to talk to each other.
Much of the value of APIs is based on this “language nature” of APIs: If the only thing that two applications need to collaborate is to agree on a language, then there is much more freedom for these components than in more tightly coupled scenarios, where agreement may also cover aspects of how applications are implemented, or where they are run.
It’s interesting to think about who is creating and using the “API language.” It is certainly used by communicating applications (like the weather API example shown in the video). But these applications are simply executing instructions that were created by developers:
- API developers design the API and therefore design the language (they have to choose an API style and then design the API for that style). They implement the API, and they publish the API to allow others to learn about the API and its language.
- Application developers discover the API and then can read the API documentation and learn about the API and its language. They then implement an application that uses the API and now the API consumer and the API provider can successfully communicate.
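To make the application developer's side concrete, here is what consuming a hypothetical weather API might look like after reading its documentation. The endpoint, parameters, and response fields are invented for illustration:

    import requests

    # Endpoint and parameters as described in the (hypothetical) API docs.
    response = requests.get(
        "https://api.example.com/v1/weather",
        params={"city": "Phoenix", "units": "metric"},
        timeout=10,
    )
    response.raise_for_status()

    forecast = response.json()
    print(forecast["temperature"], forecast["conditions"])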
The important takeaway from this is that the “API language” is designed by developers and consumed by developers. Applications then use the language to interact, but the act of understanding the API is done by humans on both the provider and the consumer side.
This shows that for APIs to be successful, they of course need to be functional so they can fulfill the role of allowing applications to communicate.
More importantly, APIs are a communications mechanism between developers, and therefore a limiting factor for API success is how well they work in this scenario.
This means the API itself must be well-designed, but it also clearly shows that additional factors such as documentation, examples, sandboxes, support channels, and similar supporting materials play a critical role as well.
All of this is often subsumed under the name of developer experience (DX), and it is something that often is overlooked (at least when the discussion is about private APIs and not about a partner or public APIs).
It is exactly this nature of the API as the way how developers communicate that allows APIs to scale so well.
If an API’s documentation is good enough for application developers to use it without ever having to talk directly to the API developers, then hundreds or thousands of application developers can use the API, without this scale of API consumption resulting in any bottlenecks.
If you want to learn more about this view of APIs as a language between developers, including a view of an example API and how this API plays the role of enabling developer communications, check out this video:
If you liked this video, why don’t you check out Erik’s YouTube channel for more “Getting APIs to Work” content? | <urn:uuid:1e5f7437-9ea7-4a77-94b9-0c485a7393d5> | CC-MAIN-2022-40 | https://blog.axway.com/learning-center/apis/api-management/what-is-an-api-language | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00707.warc.gz | en | 0.948425 | 710 | 3.625 | 4 |
Digital vaccine credentials can be used to certify that a person has been vaccinated or tested for COVID-19, but those tools come with challenges that can limit the use of the credentials including security and health data privacy concerns, the Federal government’s chief watchdog agency said.
The Government Accountability Office (GAO) recently spotlighted how mobile device-based vaccine credentials can be used to reduce COVID-19’s spread and allow travel and other activities to resume safely. According to GAO, the concept of a health credential is not new, and that “a paper vaccine certificate known as the ‘yellow card’ has long been recognized as an official record of immunizations for international travel and other purposes.”
Digital vaccine credential users would download an application on a mobile device, create an account, and link their COVID-19 vaccination record from an immunization registry or a COVID-19 test result from a certified test laboratory. From there, the application would:
- Confirm the user’s identity and authenticate COVID-19 health information;
- Validate health information against the destination’s entry requirements, like specific vaccines or tests accepted by a country; and
- Generate a secure digital code the user can present to officials, like airline staff or border control officials.
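As a highly simplified sketch of that "secure digital code" step (real systems use public-key signatures and standards such as SMART Health Cards; the key and payload below are invented for illustration), an issuer could sign a credential payload so that a verifier can detect tampering:

    import hashlib
    import hmac
    import json

    SECRET_KEY = b"issuer-demo-key"  # illustrative; real issuers use asymmetric keys

    def sign_credential(payload):
        """Return a tamper-evident code for the credential payload."""
        data = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

    credential = {"name": "J. Doe", "vaccine": "COVID-19", "doses": 2}
    code = sign_credential(credential)

    # A verifier recomputes the signature and compares the two values.
    print(hmac.compare_digest(code, sign_credential(credential)))  # True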
“A digital credential can use technologies that address widely shared concerns about the security and ownership of personal health information,” wrote GAO. “An example of a technology that addresses certain security concerns is blockchain, which enables the encrypted transfer of digital information without storing it in a centralized database.”
A digital credentialing system, however, comes with considerable challenges, GAO said, including:
- A lack of clear standards for defined uses of digital credentials, which can undermine security and privacy of users’ health data;
- A lack of harmonized technical standards, creating interoperability challenges that impede effective and secure data transfer among the numerous digital platforms used by immunization registries, testing laboratories, industries, governments, and other parties; and
- Digital credential use could exacerbate inequalities or constrain freedom of movement for those who don’t have vaccine access, cannot be vaccinated for health or age reasons, or don’t own mobile devices. | <urn:uuid:25c4bb4c-6c2c-4678-bb8a-2b62d4cc0d3c> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/gao-use-of-digital-vaccine-credentials-comes-with-challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00707.warc.gz | en | 0.925242 | 469 | 2.890625 | 3 |
Healthcare providers are vulnerable to cyberattacks because their industry is lucrative. People and even government institutions spend enormous sums on medical bills, thanks to emerging markets and aging populations. Advances in technology have transformed paper medical records into digital files that can easily be stored and accessed, but also easily stolen by hackers. If any data is stolen or held for ransom, the healthcare provider may rather pay the ransom than risk their reputation and the privacy of their patients.
From financial information to medical information
In the past few years, cybercriminals have focused on stealing financial data, including credit card numbers and personal information. But things are taking a turn, the result of financial institutions fortifying their databases and raising client awareness of the problem.
Stronger data protection measures in the financial industry have forced criminals to turn their attention to medical data, which is typically much less secure. Patient data includes date of birth, medical and physical records, and social security number — information that can’t be easily reset, and is significantly more valuable than credit card data.
Securing healthcare data
Healthcare data has become more attractive to criminals, and it’s crucial that medical institutions take necessary precautions to secure their patients’ information from data thieves. Here are some best practices to secure healthcare data.
- Protect the network and Wi-Fi – Because hackers use a variety of tools to break into IT systems and obtain medical records, your healthcare organization needs to invest in firewalls and antivirus software. Network segmentation is also a wise move; in the event of a breach, the attacker can't instantly access all of your organization's information at once.
- Educate employees – Staff members need training in information security, including setting passwords, spam filters, protection against phishing, and spotting different kinds of data breach methods.
- Data encryption – Encrypting data is one of the safest ways to secure it. Encryption translates patients' data into code, and only authorized users with a decryption key can decode it. Layering multiple encryption mechanisms is also an effective way to keep out intruders (a minimal sketch follows this list).
- Physical security – Most healthcare institutions still retain their patients’ records on paper, which are stored in cabinets. Ensure that all loopholes are covered by installing surveillance cameras and other physical security controls, such as electronic door locks.
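As a minimal sketch of the encryption point above (using the open-source cryptography package, with an invented patient record), symmetric encryption of data at rest can look like this:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store this key in a secure key vault
    cipher = Fernet(key)

    record = b"Patient: J. Doe | DOB: 1970-01-01 | SSN: 000-00-0000"
    token = cipher.encrypt(record)  # ciphertext is safe to store at rest

    # Only holders of the key can decode the record.
    assert cipher.decrypt(token) == record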
It is important for healthcare providers to secure the sensitive information of their patients. If you want to know how your organization can better protect your patients’ information, give us a call. | <urn:uuid:3af55f3b-efbc-4a7d-8c23-9a9ce2042340> | CC-MAIN-2022-40 | https://www.datatel360.com/2019/05/06/secure-healthcare-data-from-hackers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00707.warc.gz | en | 0.945536 | 516 | 2.6875 | 3 |
People can intuitively recognise small numbers up to four; however, when calculating they depend on the assistance of language.
This raises a fascinating research question: how do multilingual people solve arithmetic tasks presented to them in different languages of which they have a very good command?
The question will gain in importance in the future, as an increasingly globalised job market and accelerated migration will mean that ever more people seek work and study outside of the linguistic area of their home countries.
This question was investigated by a research team led by Dr Amandine Van Rinsveld and Professor Dr Christine Schiltz from the Cognitive Science and Assessment Institute (COSA) at the University of Luxembourg.
For the purpose of the study, the researchers recruited subjects with Luxembourgish as their mother tongue, who successfully completed their schooling in the Grand Duchy of Luxembourg and continued their academic studies in francophone universities in Belgium.
Thus, the study subjects mastered both the German and French languages perfectly. As Luxembourger students, they took maths classes in primary schools in German and then in secondary schools in French.
In two separate test situations, the study participants had to solve very simple and somewhat more complex addition tasks, both in German and in French.
In the tests it became evident that the subjects were able to solve simple addition tasks equally well in both languages. However, for complex addition in French, they required more time than with an identical task in German. Moreover, they made more errors when attempting to solve tasks in French.
During the tests, functional magnetic resonance imaging (fMRI) was used to measure the brain activity of the subjects.
This demonstrated that, depending on the language used, different brain regions were activated. With addition tasks in German, a small speech region in the left temporal lobe was activated.
When solving complex calculation tasks in French, additional parts of the subjects' brains responsible for processing visual information were involved; in other words, during the complex calculations in French, the subjects additionally fell back on figurative thinking.
The experiments do not provide any evidence that the subjects translated the tasks they were confronted with from French into German, in order to solve the problem.
While the test subjects were able to solve German tasks on the basis of the classic, familiar numerical-verbal brain areas, this system proved not to be sufficiently viable in the second language of instruction, in this case French.
To solve the arithmetic tasks in French, the test subjects had to systematically fall back on other thought processes, not observed so far in monolingual persons.
The study documents for the first time, with the help of brain activity measurements and imaging techniques, the demonstrable cognitive “extra effort” required for solving arithmetic tasks in the second language of instruction.
The research results clearly show that calculation processes are directly affected by language.
Source: Thomas Klein – University of Luxembourg
Original Research: Abstract for “Mental arithmetic in the bilingual brain: Language matters” by Amandine Van Rinsveld, Laurence Dricot, Mathieu Guillaume, Bruno Rossion, and Christine Schiltz in Neuropsychologia. Published online July 1 2017 doi:10.1016/j.neuropsychologia.2017.05.009 | <urn:uuid:efdbda1d-fa1c-473c-97c6-b118519863af> | CC-MAIN-2022-40 | https://debuglies.com/2017/09/15/mathematical-processes-brain-influenced-language/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00707.warc.gz | en | 0.95511 | 676 | 3.375 | 3 |
New research from the University of Aberdeen shows that weight loss in people with Parkinson’s disease leads to decreased life expectancy, increased risk of dementia and more dependency on care.
The team, led by Dr Angus MacLeod, proposes that closer monitoring for weight loss in Parkinson's patients, and interventions in those who lose weight, such as a high-calorie diet, may improve life expectancy, reduce dementia and reduce dependency on care.
The study, published in Neurology, followed 275 people with Parkinson’s disease and parkinsonian disorders for up to ten years, monitored patients’ weight and investigated associations between weight loss and outcomes of the disease.
The main findings showed that weight loss is common in Parkinson’s disease and in the parkinsonian disorders and can occur in the early stages of the disease.
Further analysis showed that this early weight loss is associated with higher risk of becoming dependent (i.e. needing help with activities of daily living), of developing dementia, and of dying.
Although other studies have identified weight loss as a common problem in Parkinson’s disease, this is the first to identify the link between weight loss and death, dementia and dependence on carers.
Dr Angus MacLeod who led the study explained:
"Weight loss is a common problem in Parkinson's, but it wasn't clear before we did this study how common it was, mainly because of biases in previous studies, or what the consequences of weight loss were.
"Our hypothesis was that people who were losing weight were going to have adverse outcomes."
“Our finding that those who lose weight have poorer outcomes is important because reversing weight loss may therefore improve outcomes.
"Therefore, it is vital that further research investigates whether, for example, high-calorie diets will improve outcomes in people with Parkinson's who lose weight."
The study was partially funded by Parkinson’s UK. Professor David Dexter, Deputy Director of Research at Parkinson’s UK added: “While other studies have demonstrated that weight loss is common in Parkinson’s, this is the first to consider the impact this symptom may have.
"It has yet to be determined whether this quicker progression can be corrected by supplementation with a high-calorie diet; however, this could be a key potential development."
Source: University of Aberdeen
Image: University of Aberdeen news release.
Original Research: The study appears in Neurology.
AI system predicts emissions rates in under a second
Dr Georgina Cosma and postgraduate student Kareen Ahmed, of the university's School of Science, have designed and trained an AI model that predicts building emission rate values from 27 inputs with little loss of accuracy.
The proposed AI model was created with the support of Cundall’s head of research and innovation, Edwin Wealand and trained using large-scale data obtained from the UK government energy performance assessments. The system can generate a building emissions rate, or BER, almost instantly.
Dr Cosma said the research was “an important first step towards the use of machine learning tools for energy prediction in the UK” and also shows how data can “improve current processes in the construction industry”.
The Artificial Intelligence Model
The AI system revealed in their latest paper is able to generate building emissions rates for non-domestic buildings in less than one second, from as few as 27 variables and with little loss in accuracy. It works by using what they call a 'decision-tree-based ensemble' machine learning algorithm. They built and validated the system using 81,137 real data records containing information such as building capacity, location, heating, cooling, lighting, and activity.
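The team's exact model is not reproduced here, but a decision-tree-based ensemble of the kind described can be sketched with scikit-learn. The data below is a synthetic stand-in for the real assessment records (27 features per building, one known BER value each):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: 1,000 buildings, 27 numeric features each.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 27))
    y = X @ rng.random(27)  # placeholder relationship between features and BER

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X_train, y_train)

    # Once trained, predicting the BER of a new building is near-instant.
    print(model.predict(X_test[:1]))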
The team also focused on calculating the rates for non-domestic buildings - such as shops, offices, factories, schools, restaurants, hospitals and cultural institutions - because these are some of the most inefficient buildings in the UK in terms of energy use.
The findings were presented at the Chartered Institution of Building Services Engineers (CIBSE) Technical Symposium 2021 and will be published on the CIBSE's website later this year.
The aim to reach net zero
Dr Cosma said that studies on applying machine learning to the energy performance of buildings already exist, but they are currently limited and cover only eight per cent of all buildings. She added that non-domestic buildings account for 20% of the UK's total CO2 emissions.
And Wealand went on to say that he hoped to build on the techniques developed in the project to predict real operational energy consumption. He said: “By predicting the energy consumption and emissions of non-domestic buildings both quickly and accurately, we can focus on the more important task of reducing energy consumption and trying to reach net zero.” | <urn:uuid:1b28df54-da2a-49a0-b1ec-1968062db626> | CC-MAIN-2022-40 | https://aimagazine.com/machine-learning/ai-system-predicts-emissions-rates-under-second | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00707.warc.gz | en | 0.960966 | 493 | 2.65625 | 3 |
The huge benefit that the Internet of Things (IoT) brings to different industries and domains is driving its growth and adoption at an unrelenting pace. Soon billions of connected devices will be spread across smart homes and cities, harvesting data, sending it to huge repositories for analysis and processing, and carrying out commands sent from smart apps and machine-learning-based systems.
While larger numbers of smart devices will unlock bigger opportunities for efficiency, energy and cost savings, and revenue increases, they'll also trail along some serious challenges and difficulties, some of which are notably not addressable with current technological and communications infrastructure.
What’s wrong with centralized communications?
As it stands, IoT ecosystems depend on client/server communications, centralized trust brokers, and protocols such as SSL/TLS or mechanisms such as the Public Key Infrastructure (PKI) to identify network nodes and control communications.
These technologies have proven their worth for communications between generic computing devices for years, and will continue to respond to the needs of small, closed IoT ecosystems, like smart homes. But with the growth of IoT, centralized networks will soon become the bottleneck and cause lags and failures in critical exchanges because of too much network traffic, to say nothing of the extra investment they’ll require in terms of hubs and communications hardware. Imagine what would happen if your smart defibrillator failed to receive a command because your dishwasher, toaster, fridge, kettle and lights are having a nice M2M chat and have clogged up the network.
Decentralizing IoT networks
A solution would be to decentralize IoT networks in order to improve speed and connectivity. In many cases, replacing over-the-internet connectivity with local communication between devices will help increase speed and efficiency. After all, why should a command exchange between a smartphone and a light switch have to go through the internet?
However, achieving decentralization will present its own set of challenges, namely in the realm of security. And we know that IoT security is about much more than just protecting sensitive data. How do you ensure security in communications between devices?
Devices would have to be able to communicate in a peer-to-peer manner and ensure security and integrity without the intervention of, or dependence on, a centralized trust center. The proposed system would have to protect the network and ecosystem against device spoofing and man-in-the-middle (MitM) attacks, and make sure that each command and message exchanged between nodes in a network comes from a trusted and authenticated source and is received by the right recipient.
How blockchain addresses the problem
Fortunately, the decentralization problem has already been solved in another popular technology: Bitcoin. The famous cryptocurrency is powered by a less-known (but no less exciting) technology named blockchain. The blockchain is a data structure that allows the creation and maintenance of a transaction ledger which is shared among the nodes of a distributed network. Blockchain uses cryptography to allow participants to manipulate the ledger without going through a central authority.
The decentralized, secure and trustless nature of the blockchain makes it an ideal technology to power communication among nodes in IoT networks. And it is already being embraced by some of the leading brands in enterprise IoT technologies: Samsung and IBM announced their blockchain-based IoT platform, ADEPT, at the Consumer Electronics Show (CES) last year.
When adapted to IoT, the blockchain will use the same mechanism used in financial Bitcoin transactions to establish an immutable record of smart devices and exchanges between them. This will enable autonomous smart devices to directly communicate and verify the validity of transactions without the need for a centralized authority. Devices become registered in blockchains once they enter IoT networks, after which they can process transactions.
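To make the registration-and-transaction idea concrete, here is a deliberately minimal sketch, not a production design and not tied to ADEPT or any specific platform: devices append hash-chained records to a shared ledger, and any node can verify the chain's integrity without a central trust broker. Device names and events are hypothetical.

```python
# Minimal hash-chained ledger sketch; real blockchains add consensus,
# signatures and peer-to-peer replication on top of this core idea.
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "data": "genesis"}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add(self, data):
        # Each record is chained to the hash of the previous one.
        self.chain.append({"prev": self._hash(self.chain[-1]),
                           "ts": time.time(), "data": data})

    def verify(self):
        # Any tampering with an earlier record breaks every later link.
        return all(self.chain[i]["prev"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add({"event": "register", "device": "washer-01"})  # device joins
ledger.add({"event": "tx", "from": "washer-01", "to": "detergent-shop"})
print("ledger intact:", ledger.verify())
```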
There are many use cases for blockchain-based communications. A paper published by IBM and Samsung describes how blockchain can enable a washing machine to become a “semi-autonomous device capable of managing its own consumables supply, performing self-service and maintenance, and even negotiating with other peer devices both in the home and outside to optimize its environment.”
Other IoT domains can benefit from blockchain technology. For instance, an irrigation system can leverage the blockchain to control the flow of water based on direct input it receives from sensors reporting the conditions of the crops. Oil platforms can similarly use the technology to enable communications between smart devices and adjust functionality based on weather conditions.
What are the challenges?
In spite of all its benefits, the blockchain model is not without its flaws and shortcomings. The Bitcoin community itself is suffering from internal feuds over how to deal with scalability issues pertaining to the blockchain, which are casting a shadow over the future of the cryptocurrency.
There are also concerns about the processing power required to perform encryption for all the objects involved in a blockchain-based ecosystem. IoT ecosystems are very diverse. In contrast to generic computing networks, IoT networks are comprised of devices with very different computing capabilities, and not all of them will be capable of running the same encryption algorithms at the desired speed.
Storage too will be a hurdle. Blockchain eliminates the need for a central server to store transactions and device IDs, but the ledger has to be stored on the nodes themselves. And the ledger will increase in size as time passes. That is beyond the capabilities of a wide range of smart devices such as sensors, which have very low storage capacity.
Other challenges are involved, including how the combination of IoT and blockchain technology will affect the marketing and sales efforts of manufacturers.
It’s still too early to say that blockchain will revolutionize and conquer the IoT industry. But it sure looks like a promising offer, especially if its challenges can be met. We’ll see more of this in the coming months and years, as IoT continues to grow and become more and more ingrained in our lives.
An unpatched bug in Microsoft Word's online video feature lets attackers deliver malicious files into the victim's system.
This flaw affects Office 2016 and older versions, and it produces no security warning when victims open the document.
Researchers built a proof-of-concept for this attack using a YouTube video link embedded in a Word document and demonstrated the infection process.
This flaw lets an attacker execute powerful malware or ransomware, and attackers can use evasion techniques to avoid detection by security software.
How Does This Attack Work
Malicious hackers embed a video link inside a Microsoft Word document and send it to victims via phishing mail, tricking users into opening it.
The embedded video contains a link pointing to YouTube along with hidden HTML.
According to Cymulate, this attack is carried out by embedding a video inside a Word document, editing the XML file named document.xml, and replacing the video link with a crafted payload created by the attacker, which opens Internet Explorer Download Manager with the embedded code-execution file.
To reproduce it: embed an online video within the Word document, link any YouTube video and save the document.
Then unpack the Word document using an unpacker, or simply change the extension to .zip (a .docx file is an ordinary ZIP archive).
The unpacked Word document contains a file called document.xml, the same file the attacker edits to replace the video link with the crafted payload.
Researchers from Cymulate recommend the following mitigations:
Block Word documents containing the tag “embeddedHtml” in the document.xml file (see the detection sketch after this list).
Block Word documents containing an embedded video.
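A minimal sketch of the first mitigation follows. It relies only on the facts above: a .docx file is a ZIP archive and the embedded-video payload lives in the document.xml markup. The file name is hypothetical, and a real mail gateway would scan attachments in stream rather than from disk.

```python
# Hedged sketch: flag .docx attachments whose main body contains the
# "embeddedHtml" tag abused by this attack.
import zipfile

def has_embedded_html(path: str) -> bool:
    with zipfile.ZipFile(path) as docx:
        xml = docx.read("word/document.xml").decode("utf-8", errors="ignore")
    return "embeddedHtml" in xml

if has_embedded_html("suspicious.docx"):   # hypothetical file name
    print("Blocked: document contains an embedded online video payload")
```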
By Konstantin Agouros, Solution Architect Security Technologies at Xantaro in Germany.
Distributed Denial of Service (DDoS) attacks are a common threat on the Internet. The main threat for an entity is an attack from the outside. In most cases the attackers flood the victim's network with packets or requests that either consume all the available bandwidth or exhaust resources like state tables, memory and CPU.
As a result most detection and defense mechanisms are placed at the perimeter of the network. Figure 1 depicts a typical set up for DDoS defense.
Fig. 1: DDoS Defence common set up.
However, there is a second side to this story. Cloud providers' infrastructure is often leveraged to stage the attack. (Virtual) machines in cloud data centers have a high-speed connection to the internet and are thus a perfect attack tool to flood the victim's network. The systems in the cloud that are vulnerable enough to be used for staging a DDoS attack can also be used to stage an attack within the same cloud, with a much higher impact, as the connections run at LAN speed.
As all the defense mechanisms that recognize and/or mitigate attacks are deployed at the perimeter, an attack that stays within the cloud is not recognized until the victim's service fails.
NetFlow for Detection
Modern cloud infrastructures use virtual switches to connect the VMs to each other and to the outside world. VMware's vSphere infrastructure uses distributed vSwitches; OpenStack uses Open vSwitch in its default installation. Both support the NetFlow protocol to send information about flows passing through the virtual switch to a flow receiver. A flow describes one particular connection, e.g. IP address 126.96.36.199 talking to 188.8.131.52 using TCP with source port 54332 and destination port 80. The number of packets and bytes are also counted. As DDoS attacks usually consist of lots of packets and/or bytes, NetFlow data can be used to recognize them. Statistical analysis of this data shows anomalies if a DDoS is staged against or from the cloud. Software like Flowmon's DDoS Defender or Arbor's Peakflow SP can be fed with the data and, after some baselining to learn "normal" traffic, can be used to trigger alarms if a significant deviation is recognized.
There are various solutions in place to mitigate the attack. As in many cases the victim's uplink is so slow that it is easily flooded, a method is needed to tell the router on the other side of the slow uplink to divert the attacking traffic. This can happen by null routing (i.e. discarding the packets for the destination) or by using BGP Flowspec to push firewall-like rules to the routers, rules that might even propagate a bit through the provider's network. Another popular approach is to route all the traffic for the victim through a so-called scrubbing center that intelligently filters the traffic and then sends only clean traffic in the direction of the victim. However, for the intra-cloud case these methods are not feasible. In some cases the traffic from the attacking VMs to the victim VMs does not even pass a router. If the provider's cloud is used to attack an outside destination, the mostly destination-based mitigations also don't work.
SDN to the Rescue
For the OpenStack use case, Xantaro devised a solution with the support of Flowmon. Flowmon's DDoS Defender product offers a 'Shell Script' mitigation method: if a DDoS is detected, an uploaded shell script is triggered. Xantaro combined the detection technology of Flowmon with the industry-leading SDN controller OpenDaylight to push a mitigation filter onto the switch(es) that detected the attack. As Flowmon also recognizes when an attack has stopped, the mitigation is automatically lifted.
To implement the detection, the particular virtual switch where data shall be collected must be configured to use the Flowmon machine as a NetFlow collector. This can be achieved with the following command:
ovs-vsctl -- set Bridge flow-bridge netflow=@nf -- --id=@nf create netflow targets=\"flowmonserver:3000\" active-timeout=10
Next, in the DDoS Defender application, a network segment must be defined for the network where the VMs are located. A learning period must also be configured, and traffic must be collected for that time; for real-life deployments, one or two weeks of collected data should be used. DDoS Defender then allows a relative deviation of traffic (e.g. 300% more than normal) to trigger an alert. In the alert definition, 'Shell Script' can be chosen and a script can be uploaded. The admin also defines when the alert triggers: to have sufficient information for the alert to be precise, the option 'Run when attack is detected and attack characteristics are collected' needs to be selected. The option 'Run when attack is ended' needs to be selected as well, so the script runs when the attack has ended and the mitigation can be lifted. When the script is run, it receives a dataset describing the attack: the destination IPs, ports and protocols, and the source IPs as well. The script can use this information to determine a filter for the mitigation.

In comes the next player: OpenDaylight, the industry-leading SDN controller. It uses a REST API on the northbound interface; on the southbound interface a number of protocols are available, among them OpenFlow (the de facto standard for SDN control connections) but also routing protocols like BGP. Open vSwitch is one of the most complete OpenFlow implementations available on the market. To glue everything together, the script developed by Xantaro takes the information from the DDoS alert, analyzes it and uses OpenDaylight's REST API to push a flow onto the virtual switch that blocks the attacking traffic. Figure 2 gives an overview of the architecture.
Fig. 2: Solution architecture.
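The uploaded shell script itself is not published in this post, but its core action, pushing a drop flow through OpenDaylight's REST API, can be sketched. In the snippet below the controller address, credentials, node identifier and attacker IP are placeholder assumptions; the flow body follows the common conventions of OpenDaylight's OpenFlow plugin and would need adapting to the deployed release.

```python
# Hedged sketch of the mitigation glue: install a drop flow for one
# attacking source IP on the switch that reported the attack.
import requests

ODL = "http://odl-controller:8181"        # assumed controller address
AUTH = ("admin", "admin")                 # assumed default credentials

def block_source(node_id: str, src_ip: str, flow_id: int = 1):
    flow = {"flow": [{
        "id": str(flow_id),
        "table_id": 0,
        "priority": 1000,
        "match": {"ethernet-match": {"ethernet-type": {"type": 2048}},
                  "ipv4-source": f"{src_ip}/32"},
        "instructions": {"instruction": [{"order": 0, "apply-actions":
                         {"action": [{"order": 0, "drop-action": {}}]}}]},
    }]}
    url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
           f"node/{node_id}/table/0/flow/{flow_id}")
    resp = requests.put(url, json=flow, auth=AUTH)
    resp.raise_for_status()

block_source("openflow:1", "10.0.0.99")   # hypothetical attacker VM
```

Lifting the mitigation when Flowmon reports the end of the attack is the mirror operation: a DELETE request against the same flow URL.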
This method contains the attack as far inside the cloud as possible and prevents the traffic from spreading outside the compute node if the VM is used to attack.
You can’t protect, what you can’t see! Get the insight with Flowmon. Try out the Flowmon Live Demo or download free Flowmon TRIAL and stay in touch for further information on our products! | <urn:uuid:b72d89e1-c8c0-43e0-a4cc-85dee3973a96> | CC-MAIN-2022-40 | https://www.flowmon.com/en/blog/intracloud-ddos-detection-and-mitigation-using-sdn | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00707.warc.gz | en | 0.904619 | 1,293 | 3 | 3 |
It shouldn’t be surprising that the global volume of spam is on the rise. Some estimates say that 90 percent of all mail is junk, and recently we’ve been bombarded with reports like, “Spam Volume Increases 35 percent in November.” What is being done, and why is spam such a hard problem?
The volume of spam is definitely increasing, just look at your own mailbox for evidence. Some reports claim that 100 billion junk e-mail messages are sent on the busy days. Spam is big business, and laws passed last year made spam legal, but subject to some regulation. The battle continues, and the mass-mailing marketers, as they call themselves, aren’t showing any signs of fatigue.
Bayesian filtering is one method that mail-scanning software such as SpamAssassin, and even mail servers themselves, implement in an effort to distinguish spam from ham (legitimate) messages. Bayes filtering works from a database trained to remember words in both spam and ham. The algorithm compares the probability of finding a word in the message it's scanning with the probability of finding the same word in all mail.
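Stripped to its core, the idea can be sketched in a few lines. This is a toy illustration, not SpamAssassin's actual implementation: two tiny hand-made word counts stand in for a trained database, and each word's spam-versus-ham likelihood is folded into a crude overall score.

```python
# Toy Bayesian spam scorer; real filters use far larger corpora and
# more careful probability combining.
from collections import Counter

spam_words = Counter("buy viagra now cheap pills buy now".split())
ham_words = Counter("meeting notes attached see you at lunch".split())

def spam_score(message: str) -> float:
    score = 1.0
    for word in message.lower().split():
        s = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)  # P(word|spam)
        h = (ham_words[word] + 1) / (sum(ham_words.values()) + 2)    # P(word|ham)
        score *= s / (s + h)   # spammy words keep the score high, hammy words shrink it
    return score

print(spam_score("buy cheap pills now"))   # scores well above a ham message of equal length
```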
Bayes filters are limited, and like all other technologies, the spammers have found a way to work around them. The limit is roughly 5,000 messages, according to research and the authors of SpamAssassin, at which point diminishing returns begin. Regardless of limitations, the effectiveness of Bayes analysis is questionable, since spammers normally include random words in most e-mail they send. Sometimes you're lucky enough to get a well-written poem. Unfortunately, that's not the type of mail Bayesian filtering can identify as spam. Keyword searching, however, is useful, right? Chances are good that you'll never want e-mail with the word Viagra in it, so that can just be blocked. Of course, the spammers are smart enough to work around that.
At first they just started sending e-mail with horrible misspellings, using symbols and numbers that look similar to the original letters. When filters started identifying and blocking those messages, highly motivated spammers put on their thinking caps. Spam is a wonderful driving factor of invention, both on the good and bad sides.
Somebody had the idea to send the “bad” content in images. At the time, current filters couldn’t identify any of these messages, but that quickly improved. The technology already existed: Optical Character Recognition (OCR). If you want to scan a paper document, OCR software enables you to convert the image to editable text. It actually converts the picture of a letter into a letter, and OCR software has steadily improved over the years. So we started running OCR programs on the images in spam, and started blocking effectively again.
The evildoers very quickly started getting even more creative, as was expected. It turns out that you can put all kinds of light-colored lines and squiggles over text, and still leave it easy to read by humans. OCR doesn’t fare too well against random lines through letters though. It’s clear that we aren’t very good at blocking spam.
So who is to blame for the increases in spam? The business of spam has also fueled another facet of organized cybercrime: botnets. Spam is frequently referred to as an Internet security risk, not because it can clog up mail servers, but because the number one use for compromised machines is to send spam.
One of the most effective mechanisms for blocking spam is the use of real-time blacklists. These lists are updated constantly and contain the IP addresses where spam originates. Spammers must keep moving, sending messages from many different IP addresses. What better way to do this than through the use of compromised Windows machines? Malware authors have solved a very large-scale systems management problem. If you think about it, they are managing thousands and thousands of machines all over the world, more than most IT organizations.
IRC botnets are bought and sold on the open market, and the owners can easily tell thousands of machines to begin sending spam all at once. Again, it’s pretty darn clever if you think about the logistics of it all.
We’re left with a world full of spam, and the effectiveness of spam detection is the same as it was five years ago.
But spammers aren’t the only ones raking in the millions. IronPort, Sophos, and many other software vendors are capitalizing on the need for better spam scanning. The blackbox solutions are sometimes better than open source alternatives, sometimes not. The real advantage is that updates are automatic and frequent; for example Sophos’s PureMessage product updates itself every five minutes.
People are experimenting with DomainKeys, SPF, and similar technologies that seek to verify the senders of e-mail, but the fact remains: they make e-mail cumbersome. People need to be able to forward mail automatically, but SPF breaks that. DomainKeys cryptographically signs a message, stating that it really was “From:” a certain domain and that the message hasn’t changed since it left. There are gotchas with DomainKeys, but it’s the most likely solution. Unfortunately, widespread adoption is very unlikely to happen, and if it does, it’ll take years for all the mail server software on the Internet to get updated.
So what do we see for the future of spam? Even if spam is made illegal, it will still exist. We can’t prosecute people in other countries, and we certainly can’t do anything about compromised Windows machines. The most likely course for spam is that it will stay the same as it is currently. That is, a game of catch-up for the anti-spam software writers.
Let there be light: No perfect solution for under-lit surveillance, but technology constantly improving
Lighting is arguably the greatest challenge for any surveillance camera.
March 15, 2017 By Colin Bodbyl
The extreme differences outdoor cameras face from day to night mean cameras have to be able to adapt to lighting changes that vary from complete sun to total darkness. While camera resolution, frame rates and analytics have experienced drastic improvements over the last decade, low light technology has struggled to keep up.
IR (Infrared) lighting is the primary method for dealing with low-light conditions. IR is simple, inexpensive and easy to integrate into any camera. It is the oldest method for improving low light performance and has evolved over the years, but it still faces many challenges. IR lights, like any light, have a finite lifespan. Some last longer than others, but eventually they all fail. Since IR LEDs are not a health-monitored system component, they usually fail without any notice to the user. The first time users discover their IR lighting has burnt out is usually when they need to review recorded video and find the recording is too dark.
Even if IR lights had an infinite lifespan, there are still several other trade-offs. IR requires the camera to be in monochrome or night mode, removing all colour from the scene and often making important details difficult to capture. IR also attracts insects, especially spiders who often find IR a warm place to make their home. The benefits of using IR, however, still outweigh the downsides. As long as integrators pay attention to the IR range of the camera and stay within it, IR will provide satisfactory results.
More recently, manufacturers have been releasing super low light cameras which combine low light image sensors with special processing techniques to provide colour images in low light applications. These cameras solve issues for a lot of scenarios where there is light but where that light is minimal, such as bars, nightclubs and partially lit outdoor areas.
Super low light cameras allow users to capture colour evidence in scenes that would otherwise require a monochrome camera. The benefits are obvious, but even these cameras will switch to monochrome and rely on IR illumination when no artificial white light is available.

Thermal cameras are the ultimate low light technology for the right applications. While thermal cameras have historically been very expensive, prices are coming down, with several manufacturers now selling thermal cameras for under a thousand dollars. These cameras do not rely on light at all, as they measure temperature patterns instead. Since these cameras are not impacted by lighting changes, they are extremely effective when running image analysis software like video analytics.
Part of the challenge for video analytics to work effectively is contrast, which usually varies depending on lighting conditions, but with thermal cameras the image contrast is constant. Of course the trade-offs when using thermal cameras are the complete lack of detail and low image quality due to their low resolution sensors and the fact that they are not able to capture colour.
No single camera can handle every lighting condition while simultaneously providing detailed colour images. IR technology still dominates the low light market but a lot of R&D effort is going into improving colour images in low light scenes so we become less reliant on IR. Thermal imaging is the ultimate technology for detection purposes and as costs come down these cameras will become more common for mid-market security applications. Low light performance may be one of the most challenging aspects of video surveillance with the slowest rate of advancements.
Unfortunately, with current technology there are always trade-offs anytime light is minimal. The key to success in any low light scenario is to understand what is important and select the correct technology based on what the customer needs to achieve.
Colin Bodbyl is the director of technology for UCIT Online (www.ucitonline.com).
“In those days Caesar Augustus issued a decree that a census should be taken of the entire Roman world. This was the first census that took place while Quirinius was governor of Syria. And everyone went to their own town to register.”
These are the famous words from the Gospel According to Luke that you, if you belong to the part of the world where Christianity is practiced, hear every Christmas.
Today, scholars don’t think that there actually was a census of the entire Roman Empire, but there is evidence that a local census took place in Syria and Judea around the year 1, in order to collect taxes in those provinces. As you know: the taxman is data quality’s best friend.
Today, doing a census is still the most practiced method of knowing about the people living in a given country. The alternative is a public registry that is constantly updated with all the information needed about you. I had the chance to describe such a method in a post on a Canadian blog some years ago; the post is called How Denmark does it.
India has a similar scheme with a centralized citizen registry on the go. This program is called Aadhaar.
As reported in the post Citizen ID and Biometrics, the United Kingdom came close to adopting citizen Master Data Management some years ago. But it didn’t happen, so it’s still possible to have multiple names and multiple addresses at the same time in different registries while Cameron is Prime Minister of the United Kingdom, First Lord of the Treasury and Minister for the Civil Service.
The evolution of the data networks entails an increase in the volume of the transmitted traffic, which requires the usage of a quality of service policy. The implementation of the policy will allow the classification of the network traffic and the distribution of the network resources between different traffic classes.
- QoS (Quality of Service) - technology that allows to classify a data stream and to prioritize each stream's transmission in accordance with its class.
- QoS policy - document describing the principles of traffic stream classification and the resource requirements for each class.
- Traffic stream - the data of one service transmitted between two nodes.
- Service - process running on end nodes. The data of a service is distinguished by a unique set of service field values within the network packet's structure. IP telephony, web, and video surveillance are examples of services.
- Responsibility area - a network segment whose effective operation lays in the responsibility of a certain subject. A subject can be either a specific person or an organization.
- DS domain (Differentiated Services domain) - a logical area having uniform traffic classification rules, defined by a QoS policy. Usually the DS domain coincides with the responsibility area.
- CIR - Committed Information Rate. The system must guarantee the resource allocation in compliance with the CIR of the service.
- MIR - Maximum Information Rate. Once the CIR is ensured, additional resources may be allocated to the services. These additional resources cannot exceed the MIR threshold and their allocation is not guaranteed.
Packet distribution scheme
In packet networks, the traffic is transmitted from the sending node to the receiving node through communication channels and intermediate devices. Generally a data packet is processed by each intermediate device independently. Let's look at an example of data packet processing performed by an intermediate network device (Figure 1):
- Node-2 generates a data frame and transmits it to Medium-2. The data is encapsulated in a frame based on the L2-protocol that is used in Medium-2.
- The data frame is distributed in Medium-2: the frame is converted into a modulated signal according to the physical properties of the environment. The signals used in wired and wireless environments are different and this affects their propagation properties and their usage scenarios.
- The signal arrives at the incoming network interface of the intermediate network device; after demodulation, the received data frame is checked for integrity: the damaged frames are discarded.
- Next, the frame must be processed by the switching module in order to determine its path. If the frame is addressed to this intermediate network device, it will be passed for processing to the internal services. If the frame is addressed to another node, two scenarios are possible: the frame is passed to further processing until it reaches the output interface, or it is discarded (if Medium-2 is a common environment, where all signals will be received by all devices connected to the medium, according to the L2 protocol's operational principles, if the destination address in the frame's header does not belong to the device, then the device should discard it).
- If the frame should be processed and transferred to another node, it will be placed into a packet queue before exiting the device. A packet queue is a set of buffers that contain the data received by the incoming interfaces. The number and size of the memory buffers used for packet storage are not standardized and depend on the equipment's manufacturer. For example, the InfiLINK 2x2 family of devices has 32 queues, 17 of which are available for configuration by the user.
- The data frame passes through the packet queue to which it was assigned and arrives at the outgoing interface.
- Since packet queues are a link between incoming and outgoing interfaces, a device needs a controller that fills the queues with incoming data and picks data from the queues for transmission to the outgoing interfaces. Usually, these functions are performed by the central processing unit (CPU). As will be shown below, the filling and emptying of the queues can be performed unevenly and depends on the classification of the data streams.
- The outgoing interface generates a modulated signal and transmits it to Medium-5 which is connected to Node-5, the destination of the original data frame.
- Node-5 receives the signal, demodulates it and processes the received data frame.
Note that in modern network devices, the network interfaces are usually combined and can operate both as incoming and outgoing.
Figure 1 - Traffic passing through an intermediate network device
A network device can be intermediate for several pairs of nodes, and each node can transmit the data of several services (Figure 2a). Let's look at a scheme where the "Network device" is an intermediate node for the traffic coming from the following pairs of nodes: Node-1 - Node-4, Node-2 - Node-5 and Node-3 - Node-6. The first pair transmits data for three services, the second for two and the third for one service. If there are no QoS settings, the data of all services passes through the general queue in the order it is received at the "Network device", and in the same order it is transferred from the queue to the outgoing interfaces.
With QoS configured, each of the incoming traffic flows can be classified by its type (for example) and a separate queue can be mapped to each class (Figure 2b). Each packet queue can be assigned a priority, which will be taken into account while extracting the packets from the queues, and will guarantee specific quality indicators. The traffic flow classification can be performed not only with respect to the services used, but according to other criteria also. For example, each pair of nodes can be assigned to a separate packet queue (Figure 2c).
Figure 2a - Queuing for various services without QoS
Figure 2b - Queuing for various services with QoS
Figure 2c - Queuing for various users with QoS
Keep in mind that several intermediate network devices can be located on the data path between the source and the receiver, having independent packet queues, i.e. an effective QoS policy implementation will require the configuration of several network nodes.
The main conclusions from the previous section, which will be used to define the quality metrics, are the following:
- The throughput of the communication channel and of the network devices is limited.
- The data delivery time from source to destination is non-zero.
- A communication channel is a medium with a set of physical parameters that can have influence on the signal propagation.
- The software and hardware architecture of the network devices impacts the way in which the data is being transmitted.
There are three main quality metrics: losses, delay and jitter.
Let's look at each metric using an example: Node-2 transmits three data packets to Node-5; the data source and the recipient are connected to an intermediate Network device and the packets are part of the same service, i.e. their key service fields are the same.
During a data stream transmission, some packets may not be received, or may be received with errors. This process is called data loss and it is quantified as the ratio between the number of lost packets and the number of transmitted packets. In the example below (Figure 3), Node-2 transmits packets with the identifiers 1, 2 and 3; however, Node-5 receives only packets 1 and 3, i.e. the packet with the identifier 2 was lost. There are network mechanisms that allow the retransmission of the lost data; examples of such mechanisms are the TCP and ARQ protocols.
The causes of data loss can be divided into the following groups:
- Losses in the medium: losses related with the propagation of the signal in the physical environment. For example, the frame will be lost if the useful signal level is lower than the receiver sensitivity. Losses can also be caused by the physical damage of the interfaces connected to the media or by impulse pickups resulting from poor grounding.
- Losses on the interface: losses while processing a queue at the incoming or at the outgoing interface. Each interface has a memory buffer, which can be completely filled in case of intensive data stream transmissions. In this case, all the subsequent data entering the interface will be discarded, because it cannot be buffered.
- Losses inside the device: Data discarded by the network device according to the logic of the configuration. If the queues are full and the incoming data cannot be added to the processing queue, the network device will drop it. Also, these losses include the data packets rejected by access lists and by the firewall.
Figure 3 - Data packet loss example
The losses affect two indicators: throughput and packet performance.
One of the main indicators used in practice is the throughput, whose value depends on the losses. The throughput is defined by the capabilities of the physical channel and by the ability of the intermediate network devices to process the data stream. The link throughput is defined as the maximum amount of data that can be transmitted from the source to the receiver per unit of time.
The parameter that affects the throughput and the state of the queues is the packet performance of the device. Packet performance is the maximum number of data packets of a given length that a device is capable to process per unit of time.
The real throughput depends on both packet performance and on the interface's characteristics, therefore, at the network design stage, pay attention to the coherence of these parameters in order to avoid the situation when one of them becomes a bottleneck for a link or for a network segment.
The packet performance is defined by the hardware capabilities of the central processor and by the amount of internal memory. Network devices process multiple traffic streams with different L2 frame sizes, so the following Ethernet frame size values are used for a performance test:
- minimum size = 64 bytes;
- medium size = 512 bytes;
- maximum size = 1518 bytes.
Due to the limited amount of internal memory, better packet performance is achieved for the minimum frame size. Using minimum sized frames assumes a large amount of overhead since each data frame has a service header, whose size does not depend on the size of the frame itself.
For example, the service header length for 64-byte frames (Figure 4b) and 156-byte frames (Figure 4c) will be the same, but the amount of user data will differ. To transmit 138 bytes of user data, either three 64-byte frames or one 156-byte frame is required: in the first case 192 bytes are sent, in the second only 156 bytes. For a link with a fixed throughput, large frames increase efficiency by raising the useful throughput of the system, but the latency also increases. The performance of the Infinet devices in various conditions is shown in the "Performance of the Infinet Wireless devices" document.
Figure 4 - Frame structure for various Ethernet frame lengths
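The overhead arithmetic above can be made explicit in a few lines; the 18-byte header constant is implied by the document's own numbers (a 64-byte frame carries 46 bytes of user data):

```python
# Worked version of the efficiency example above.
HEADER = 18                     # Ethernet service header + FCS, bytes

def efficiency(frame_size: int, frames: int = 1):
    payload = (frame_size - HEADER) * frames
    total = frame_size * frames
    return payload, total, payload / total

print(efficiency(64, frames=3))    # (138, 192, ~0.72): three minimum frames
print(efficiency(156, frames=1))   # (138, 156, ~0.88): one larger frame
```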
Delay is defined as the time it takes for a packet to travel from the source to the destination. The value of the delay depends on the following aspects:
- The signal's propagation duration in the medium: depends on the physical characteristics of the medium and it is nonzero
- Serialization time: the conversion of a bitstream to a signal and backward by the incoming/outgoing interfaces is not instantaneous and makes use of the hardware resources of the network device.
- Processing time: the time spent by the data packet inside the network device. This time depends on the status of the packet queue, as a data packet will be processed only after processing the packets placed earlier in this queue.
The delay is often measured as a round-trip time (RTT), i.e. the time it takes for a data packet to be transmitted from the source to the destination and back. For example, this value can be seen in the ping command's results. The time it takes for the intermediate network devices to process the data packets forward and backward may differ; therefore, the round-trip time is usually not equal to twice the one-way delay.
Figure 5 - Example of data transfer delay
The CPU load and the status of the packet queues are frequently changing at the intermediate network devices, so the delay during the data packet transmission will vary. In the example below (Figure 6), the transmission time for the packets with the identifiers 1 and 2 is different. The difference between the maximum and the average delay values is called jitter.
Figure 6 - Example of varying delay in data transfer
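Using the definition above, jitter is straightforward to compute from a series of delay measurements; the samples below are hypothetical:

```python
# Jitter as defined here: maximum delay minus average delay.
delays_ms = [12.1, 11.8, 15.4, 12.0, 19.7, 12.3]   # hypothetical samples

average = sum(delays_ms) / len(delays_ms)
jitter = max(delays_ms) - average
print(f"average delay {average:.1f} ms, jitter {jitter:.1f} ms")
```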
When using a redundant network infrastructure the data between the source and the receiver can be transmitted through different paths, so jitter will occur. Sometimes the difference between the delays on each path may become so large that the order of the transmitted data packets will change on the receiving side (Figure 7). In the example below, the packets were received in a different order.
The effect depends on the characteristics of the service and on the ability of the higher layer network protocols to restore the original sequence. Usually, if the traffic of different services is transmitted through different paths, then it should not affect the ordering of the received data.
Figure 7 - Example of unordered data delivery
Service requirements with respect to the quality indicators
Each of the data transfer services has a set of requirements for the quality indicators. The RFC 4594 document includes the following service types:
|Service class|Tolerance to loss|Tolerance to delay|Tolerance to jitter|
|Telephony|very low|very low|very low|
|Multimedia Conferencing|medium|very low|low|
|Real-Time Interactive traffic|low|very low|low|
|Broadcast video|very low|medium|low|
|Low-Latency Data|low|medium|very low|
|Application Categories|Service Class|Signaled|Flow Behavior|G.1010 Rating|
|Application Control|Signaling|Not applicable|Inelastic|Responsive|
| |Multimedia Conferencing|Yes|Rate Adaptive|Interactive|
|Best Effort|Standard|Not Specified| |Non-critical|
The transmission of the various services is performed on a single network infrastructure, which has limited resources, therefore, mechanisms should be provided for distributing the resources between the services.
Let's look at the example below (Figure 8). Node-2 generates traffic serving different services with a total speed of 1 Gbit/s. Medium-2 allows this data stream to be transferred to an intermediate network device; however, the maximum link throughput between the Network device and Node-5 is 500 Mbps. Obviously, the data stream cannot be processed completely and part of it must be dropped. The QoS task is to make these drops manageable in order to provide the required metric values for the end services. Of course, it is impossible to provide the required performance for all the services, as the throughputs do not match; therefore, the QoS policy implementation implies that the traffic of the critical services is processed first.
Figure 8 - Example of inconsistency between the incoming traffic amount and the link throughput
Two main methods used during the QoS policy implementation can be highlighted:
- Prioritization: the distribution of the data packets into queues and the extraction of the packets from the queues by their priority. In this case, the packets that are most sensitive to delay and jitter are processed first, then the traffic for which the delay value is not critical is processed.
- Throughput limitation: throughput limitation for the traffic flows. All the traffic that exceeds the set throughput threshold will be discarded.
Let's look at the example above, and add a second intermediate device to the data distribution scheme (Figure 9a). The packet distribution scheme follows the next steps:
- Step 1:
- Node-1 and Node-2 generate packets for two services: telephony and mail. The telephony traffic is sensitive to delay and jitter unlike the mail service data (see Services requirements for quality indicators), therefore, it must be processed first by the intermediate devices.
- Network device-1 receives the packets of Node-1 and of Node-2.
- Step 2:
- Traffic prioritization is configured on Network device-1, thus the device classifies the incoming traffic and places the data packets in different queues. All the voice traffic will be put in queue 0, and the mail traffic will be put in queue 16. Thus, the priority of queue 0 is higher than the one of queue 16.
- The packets leave the queues and proceed towards the outgoing interfaces in accordance with the queue priorities i.e. queue 0 will be emptied first, then queue 16 will be emptied.
- Step 3:
- Network device-1 sends data to Medium-7, which is connected with Network device-2. The sequence of data packets corresponds to the quality metrics - the telephony data is transmitted first through the medium, and the mail service is sent next.
- Node-3 is connected to Network device-2 and generates a mail service data stream.
- Step 4:
- Network Device-2 has no prioritization settings, thus all the incoming traffic is put in queue 16. The data will leave the queues in the same order that it entered, i.e. the telephony and the mail services will be handled equally, despite the requirements of the quality indicators.
- Network device-2 increases the delay for the telephony traffic transmission.
- Step 5:
- The data is transmitted to the end nodes. The transmission time of the voice packets was also increased due to the additional processing of the mail service traffic of Node-3.
Each intermediate network device without traffic prioritization settings will increase the data transmission delay, so the value of the delay becomes unpredictable. Thus, a large number of intermediate devices without QoS policies implemented will make the operation of real-time services impossible because of the mismatch with the quality indicators, i.e. traffic prioritization must be performed along the entire network traffic transmission path (Figure 9b).
Keep in mind that implementing QoS policies is not the only method needed to ensure the quality metrics. For an optimal effect, the QoS configuration should be synchronized with other settings. For example, using the TDMA technology instead of Polling on the InfiLINK 2x2 and InfiMAN 2x2 families of devices reduces jitter by stabilizing the value of the delay (see TDMA and Polling: Application features).
Figure 9a - Example of data distribution with partly implemented QoS policies
Figure 9b - Example of data distribution with implemented QoS policies
The traffic prioritization mechanism
From the management point of view, the transmission path through the network can be described in two ways (Figure 10a, b):
- White-box: all the network devices along the data propagation path are in the same responsibility zone. In this case, the QoS configuration on the devices can be synchronized, according to the requirements specified in the section above.
- Black-box: some network devices in the data propagation path are part of an external responsibility zone. The classification rules for incoming data and the algorithm for emptying the queues are configured individually on each device. The architecture of the packet queues' implementation depends on the equipment manufacturer, therefore there is no guarantee of a correct QoS configuration on the devices in the external responsibility zone and, as a result, no guarantee of meeting the quality indicators.
Figure 10a - White-box structure example
Figure 10b - Black-box structure example
To solve the described problem of the black-box network structure, the packet headers can be labeled: the priority required during packet processing is set in a header field and is kept over the whole path. In this case, all intermediate devices can put the incoming data in a queue according to the field values in which the priority is indicated. This requires the development of standard protocols and the implementation of these protocols by the equipment manufacturers.
Keep in mind that usually, the equipment located in an external responsibility zone does not support data prioritization in accordance with the priority values in the service headers. Traffic priority coordination should be performed at the border of the responsibility zones, at the administrative level, by implementing additional network configuration settings.
The processing priority of a packet can be set using the service fields of various network protocols. This article describes the use of the Ethernet and of the IPv4 protocol headers.
Ethernet (802.1p) frame prioritization
The Ethernet frame header includes the "User Priority" service field, which is used to prioritize the data frames. The field has a size of 3 bits, which allows to select 8 traffic classes: 0 - the lowest priority class, 7 - the highest priority class. Keep in mind that the "User Priority" field is present only in 802.1q frames, i.e. frames using VLAN tagging.
Figure 11 - Frame prioritization service field in the Ethernet header
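The field placement can be shown directly: within the 802.1Q tag, the 16-bit TCI carries the 3-bit User Priority in its top bits and the 12-bit VLAN identifier in its bottom bits. A small sketch:

```python
# Extract the 802.1p priority (PCP) and VLAN ID from a 16-bit TCI value.
def parse_tci(tci: int):
    priority = (tci >> 13) & 0x7   # User Priority: classes 0..7
    vlan_id = tci & 0xFFF          # VLAN identifier: 0..4095
    return priority, vlan_id

print(parse_tci(0b101_0_000001100001))   # priority 5, VLAN 97
```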
IP packet prioritization
The IP protocol has three historical stages in the development of the service field responsible for packet prioritization:
- When the protocol was first approved, there was an 8-bit ToS (Type of Service) field in the IP packet header (see RFC 791). ToS included the following fields (Figure 12a):
- Precedence: priority value (3 bits).
- Delay: delay minimization bit.
- Throughput: throughput minimization bit.
- Reliability: reliability maximization bit.
- 2 bits equal to 0.
- In the second stage, 8 bits were still used for packet prioritization, however, ToS included the following fields (see RFC 1349):
- Cost: bit to minimize the cost metric (1 bit is used, whose value was previously zero).
- Last, the IP header structure has been changed (see RFC 2474).The 8 bits previously used for prioritization were distributed in the following way (Figure 12b):
- DSCP (Differentiated Services Code Point): packet priority (6 bits).
- 2 bits are reserved.
Thus, ToS allows distinguishing 8 traffic classes (0 - the lowest priority, 7 - the highest priority), while DSCP allows 64 classes (0 - the lowest priority, 63 - the highest priority).
Figure 12a - ToS service field in the IP packet header
Figure 12b - DSCP service field in the IP packet header
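Both prioritization schemes read the same byte of the IP header, just with different bit layouts, which the following sketch makes explicit:

```python
# Interpret one IP header byte as legacy Precedence and as DSCP.
def parse_tos_byte(tos: int):
    precedence = (tos >> 5) & 0x7   # RFC 791 Precedence: classes 0..7
    dscp = (tos >> 2) & 0x3F        # RFC 2474 DSCP: classes 0..63
    return precedence, dscp

print(parse_tos_byte(0xB8))   # precedence 5, DSCP 46: a typical telephony marking
```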
Many end nodes in the network do not support handling the service headers: they can neither set nor remove the priority, so this functionality should be implemented on the corresponding intermediate network devices.
Let's look at the example of a data transmission from Node-1 to Node-2 through a DS-domain and through a third-party telecom operator's network (Figures 13a-c). The DS domain includes three devices, two of them are located at the borderline and one is an intermediate device. Let's look at the steps taken for data processing in a network using an Ethernet frame transmission (the basic principles discussed in the example below are applicable for an IP packet or other protocol that supports data prioritization):
- Step 1: Node-1 generates an Ethernet frame for Node-2. There is no field present for frame priority tagging in the header (Figure 13a).
- Step 2: Border Network Device-1 changes the Ethernet header, setting the priority to 1. Border devices should have rules configured in order to filter the traffic of Node-1 from the general stream and to assign a priority for it. In networks with a large traffic flow number, the list of rules on border devices can be volumetric. Border network device-1 processes the frame according to the set priority, placing it in the corresponding queue. The frame is transmitted towards the outgoing interface and sent to Intermediate network device-2 (Figure 13a).
- Step 3: Intermediate network device-2 receives the Ethernet frame having priority 1, and places it in the corresponding priority queue. The device does not handle the priority in terms of changing or removing it inside the frame header. The frame is next transmitted towards the Border network device-3 (Figure 13a).
- Step 4: Border network device-3 processes the incoming frame similarly to the Intermediate device-2 (see Step 3) and forwards it towards the service network provider(Figure 13a).
- Step 4a: if it has been agreed that the traffic will be transmitted through the provider's network with a priority other than 1, then Border device-3 must change the priority. In this example, the device changes the priority value from 1 to 6 (Figure 13b).
- Step 5: during the transmission of the frame through the provider's network, the devices will take into account the priority value in the Ethernet header (Figure 13a).
- Step 5a: similarly to Step 4a (Figure 13b).
- Step 5b: if there is no agreement about the frame prioritization according to the priority value specified in the Ethernet header, a third-party service provider can apply its own QoS policy and set a priority that may not satisfy the QoS policy of the DS domain (Figure 13c).
- Step 6: the border device in the provider's network removes the priority field from the Ethernet header and forwards it to Node-2 (Figure 13a-c).
Figure 13a - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated and the priority value matches for the 2 segments)
Figure 13b - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated, but the priority should be changed)
Figure 13c - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting in the 2 segments is not coordinated)
Queues implementation in Infinet devices
For a device, the process of analyzing the priority in the service headers and the data processing according to these priorities is not a simple task due to the following reasons:
- The devices automatically recognize priorities according to different protocols. For example, the InfiLINK XG family of devices supports 802.1p prioritization, but does not recognize DSCP priority values.
- The devices at the borderline of the DS domain allow to use a different set of criteria to classify the traffic. For example, the InfiMAN 2x2 devices allow to set priorities by selecting all the TCP traffic directed to port 23, while the Quanta 5 family devices does not support this type of prioritization.
- The number of the queues implemented inside the devices differs and depends on the manufacturer. A correspondence table is used to set a relation between the priority in the service header and the device's internal queue.
The tables below show the data types for the queues of the internal architecture, the priority handling possibilities and the relation between the standardized priorities and the internal priorities used by the device.
Please note an architectural queuing feature of the Infinet devices: all queues share a single memory buffer. If all the traffic falls into a single queue, the size of that queue equals the size of the buffer; if several queues are in use, the memory buffer is divided evenly between them.
Internal packet queuing
|Parameter|Description|InfiLINK 2x2 / InfiMAN 2x2|InfiLINK Evolution / InfiMAN Evolution|InfiLINK XG / InfiLINK XG 1000|Quanta 5 / Quanta 6 / Quanta 70|
|Marking criteria|A criterion that can be used to classify the incoming traffic.|PCAP expressions support (PCAP expressions allow flexible filtering based on any service header field, see the PCAP filters article)|PCAP expressions support (PCAP expressions allow flexible filtering based on any service header field, see the PCAP filters article)| | |
|Auto recognition|Protocols for which the family of devices automatically recognizes the priority set in the header and puts the data in the appropriate queue.| | | | |
|Number of queues|The number of data queues used by the device.|17|17|4|8|
|Queue management|Supported mechanisms for emptying the packets from the queues.| | | | |
|QoS configuration via Web|Documentation about the traffic prioritization configuration using the Web interface.|Switch settings| | | |
|QoS configuration via CLI|Documentation about the traffic prioritization configuration using the command line interface.|qm command|qm command|Commands for switch configuration|-|
Correspondence between the priorities of the standard protocols and the internal priorities used by the InfiLINK 2x2, InfiMAN 2x2, InfiLINK Evolution and InfiMAN Evolution families of devices
|Traffic class (in accordance with MINT)|InfiLINK 2x2, InfiMAN 2x2, InfiLINK Evolution and InfiMAN Evolution|802.1p|ToS (Precedence)|DSCP|
|Regular best effort|15|00|00|0|
|Business 6|14|01| |8, 10|
|Business 5|13| | |12, 14|
|Business 4|12|02| |16, 18|
|Business 3|11| | |20, 22|
|Business 2|10|03| |24, 26|
|Business 1|9|02| |28, 30|
|Video 2|4|04|05|40, 42|
|Video 1|3| | |44, 46|
|NetCrit|0|07|07|56, 58, 60, 62|
Correspondence table between the priorities of the standard protocols and the internal priorities used by the InfiLINK XG, InfiLINK XG 1000, Quanta 5, Quanta 6 and Quanta 70 families of devices
|Traffic class (in accordance with 802.1p)|802.1p|InfiLINK XG, InfiLINK XG 1000|Quanta 5, Quanta 6, Quanta 70|
|Background (lowest priority)|00|1|0|
|Network Control (higher priority)|07| |7|
Prioritization assumes the use of several packet queues, whose content must be transmitted to the outgoing interfaces through a common bus. Infinet devices support two mechanisms for packet transmission from the queues to the bus: strict and weighted scheduling.
The strict prioritization mechanism assumes a sequential emptying of the queues according to the priority values. Packets with priority 2 will only be sent after all the packets with priority 1 have been transferred to the bus (Figure 14). After the packets with priorities 1 and 2 are sent, the device will start sending packets with priority 3.
The disadvantage of this mechanism is that no resources will be allocated to low-priority traffic as long as there are packets in the higher-priority queues, which can lead to the complete inaccessibility of some network services.
Figure 14 - Strict scheduling
The weighted scheduling doesn't have the disadvantages of the strict scheduling. Weighted scheduling assumes the allocation of the resources for all the queues according to the weighting factors that correspond to the priority values. If there are three queues (Figure 15), weighted factors can be distributed in the following way:
- packet queue 1: weight = 3;
- packet queue 2: weight = 2;
- packet queue 3: weight = 1.
When using the weighted scheduling, each queue will receive resources, i.e. no network service will become completely inaccessible.
Figure 15 - Weighted scheduling
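A compact sketch of the weighted emptying described above, using the 3/2/1 weights from the example; the queue contents are hypothetical. Strict scheduling would instead drain queue 1 completely before touching queue 2:

```python
# Weighted emptying: each round, a non-empty queue sends up to its weight.
from collections import deque

queues = {1: deque(f"q1-p{i}" for i in range(6)),
          2: deque(f"q2-p{i}" for i in range(6)),
          3: deque(f"q3-p{i}" for i in range(6))}
weights = {1: 3, 2: 2, 3: 1}

while any(queues.values()):
    for qid, queue in queues.items():
        for _ in range(min(weights[qid], len(queue))):
            print("send", queue.popleft())
```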
Traffic prioritization recommendations
Universal recommendations for configuring traffic prioritization mechanisms:
- Pay special attention when developing QoS policies. A policy should account for the traffic of every service used in the network and define a strict mapping between each service and its traffic class.
- The QoS policy should also take into account the technical capabilities of the devices for recognizing and handling the service field values that indicate data priority.
- The rules for traffic flow classification must be configured on the border devices of the DS domain.
- The intermediate devices of the DS domain should automatically recognize the traffic priorities.
Throughput limitation mechanism
The distribution of the network resources between the traffic flows can be performed not only by prioritization, but also using the throughput limitation mechanism. In this case, the bitrate of the stream cannot exceed the threshold level set by the network administrator.
The speed limitation principle in Infinet devices
The throughput limitation principle is to continuously measure the throughput of the data stream and to apply restrictions whenever this value exceeds the configured threshold (Figure 16a,b). Throughput limitation in Infinet devices follows the Token Bucket algorithm: all data packets above the throughput threshold are discarded, so losses appear, as described above.
Figure 16a - Unlimited data flow rate
Figure 16b - Limited data flow rate
Token Bucket Algorithm
Each speed-limit rule has an associated logical buffer that serves the allowed amount of transmitted data. The buffer size is usually larger than the limit itself. In each unit of time, the buffer is credited with an amount of data equal to the bitrate threshold.
In the example below (video 1), the speed limit is 3 data units and the buffer size is 12 data units. The buffer is constantly filled according to the threshold, however, it cannot be filled over its own size.
Video 1 - Resource allocation into a speed limit buffer
Data received by the device on the inbound interface is processed only if the buffer has resources available (video 2); forwarded data consumes the buffer's resources. If the buffer's resources are fully occupied when a new data frame arrives, the frame is discarded.
Video 2 - Dedicated resources usage for data processing
Keep in mind that the resource allocation and the data processing are performed simultaneously inside the buffer (video 3).
The rate of data flows in packet networks is bursty, which is exactly the situation the Token Bucket algorithm handles well. Time intervals in which no data is transmitted allow resources to accumulate in the buffer, which can then absorb an amount of data that exceeds the threshold. A wide band is therefore available to bursty data streams, such as web traffic, ensuring quick loading of web pages and a more comfortable experience for the end user.
Besides this advantage of the Token Bucket algorithm, the average throughput still matches the configured threshold: over a long period of time, the amount of available resources is determined not by the size of the buffer but by the rate at which it is replenished, which equals the throughput threshold.
Video 3 - Data processing at the speed limit buffer
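A minimal token-bucket sketch may make the mechanism concrete (illustrative only; the names and structure are assumptions, not the WANFleX implementation):

```python
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, bucket_bytes: float):
        self.rate = rate_bytes_per_s      # threshold: tokens credited per second
        self.capacity = bucket_bytes      # buffer size, typically larger than the limit
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Accumulate tokens for the elapsed time, but never beyond the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # the packet consumes buffer resources
            return True
        return False                      # bucket exhausted: the frame is discarded

# E.g., limit a flow to ~3 Mbit/s with a burst allowance of four intervals' worth:
shaper = TokenBucket(rate_bytes_per_s=3_000_000 / 8, bucket_bytes=4 * 3_000_000 / 8)
```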
The Token Bucket algorithm can be applied to separate traffic flows. In this case, a speed limit buffer will be allocated for each flow (video 4).
In this example, two speed-limit rules are implemented: 3 data units per unit of time for the traffic of vlan 161, and 2 data units for the traffic of vlan 162. The buffer for each traffic flow holds 4 time intervals' worth of data, i.e. 12 data units for vlan 161 and 8 data units for vlan 162. In total, 5 data units are allocated to the buffers in each time interval and then distributed between them. Since the buffer sizes are limited, resources exceeding them cannot be used.
Video 4 - Resource allocation for two speed limit buffers
Each buffer's resources can only be used for the traffic of the corresponding service (video 5). Thus, only the resources of the vlan 161 buffer are used to handle vlan 161 traffic; similarly, the resources of the other buffer are used for vlan 162 traffic.
Video 5 - Usage of the dedicated resources for data processing
Resource buffers can also be combined. On Infinet devices, for example, allocated resource buffers can be combined using classes (see below): if one buffer is already full of resources (video 6), the resources that would have been allocated to it can be provided to another buffer.
In the example below, the buffer for vlan 162 is full, which allows the vlan 161 buffer to be filled with 5 data units instead of 3 (its own 3 data units plus the 2 data units of the other buffer). The throughput of the vlan 161 service therefore increases. As soon as the vlan 162 buffer has free space again, resource allocation returns to the normal mode: 3 data units for the vlan 161 buffer and 2 data units for the vlan 162 buffer.
Video 6 - Redistribution of the allocated resources between various speed limit buffers
Types of speed limits in Infinet devices
The throughput limitation principle described above is implemented in the Infinet devices in two ways:
- Traffic shaping at the physical interface: limitations are applied to the whole data flow passing through the physical interface. This method is easy to configure - specify the interface and the threshold value - but it cannot apply limitations to the traffic of a specific network service.
- Traffic flow shaping: limitations are applied to logical data flows, each separated from the main traffic by specified criteria. This makes it possible to limit throughput per network service, with services distinguished by the values of their service header fields. For example, the traffic tagged with vlan 42 can be separated into a logical channel and limited in throughput without influencing the other traffic flows.
Infinet devices also allow hierarchical throughput allocation structures to be configured. Two object types are used for this: the logical channel and the class, connected by a child-parent relationship. A class has an assigned throughput value, which is distributed between its child logical channels, and each channel has a guaranteed and a maximum throughput value - the CIR (Committed Information Rate) and the MIR (Maximum Information Rate).
Let's look at the example of transmitting the traffic of two services associated with vlan id's 161 and 162, between Master and Slave (Figure 17a). The total traffic of the services should not exceed 9 Mbps.
The Master's device configuration can be performed in the following way (Figure 17b):
- Class 16 has been configured with a 9 Mbps throughput.
- Class 16 is the parent of the channels 161 and 162, i.e. the total traffic at these logical channels is limited to 9 Mbps.
- The traffic of vlan 161 is associated with the logical channel 161; the traffic of vlan 162 is associated with the logical channel 162.
- The CIR value is 4 Mbps for channel 161 and 5 Mbps for channel 162. If both services are actively exchanging data, the threshold for each flow equals the CIR of its channel.
- The MIR value is 9 Mbps for channel 161 and 7 Mbps for channel 162. If there is no traffic in logical channel 162, the threshold for channel 161 equals its MIR, i.e. 9 Mbps. Conversely, when there is no traffic in logical channel 161, the threshold for channel 162 equals 7 Mbps.
Figure 17a - Throughput limitation for 2 traffic flows tagged with vlan-ids 161 and 162
Figure 17b - Hierarchical channel structure of the throughput limits for the traffic of vlans 161 and 162
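The resulting thresholds can be reproduced with a small allocation model (a hypothetical simplification of Figure 17b, not actual device logic):

```python
def channel_thresholds(active, cir, mir, class_limit):
    """Effective per-channel limit inside one parent class (simplified model)."""
    out = {}
    for ch in active:
        # Honor the guaranteed CIR of the other active channels first...
        spare = class_limit - sum(cir[o] for o in active if o != ch)
        out[ch] = min(mir[ch], spare)   # ...then cap the remainder at the channel's MIR.
    return out

cir = {"161": 4, "162": 5}   # guaranteed rates, Mbps
mir = {"161": 9, "162": 7}   # maximum rates, Mbps
print(channel_thresholds(["161", "162"], cir, mir, class_limit=9))  # {'161': 4, '162': 5}
print(channel_thresholds(["161"], cir, mir, class_limit=9))         # {'161': 9}
print(channel_thresholds(["162"], cir, mir, class_limit=9))         # {'162': 7}
```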
The throughput limitation capabilities of all Infinet families of devices are shown in the table below:
Throughput limitation capabilities in Infinet devices
| Parameter | Description | InfiLINK 2x2 / InfiMAN 2x2 | InfiLINK Evolution / InfiMAN Evolution | InfiLINK XG / InfiLINK XG 1000 |
|---|---|---|---|---|
| Interface shaping | The throughput limitation capabilities of the device's physical interface. | - | - | |
| Logical stream shaping | The throughput limitation capability for a traffic stream, filtered according to one or more criteria. | up to 200 logical channels | up to 200 logical channels | - |
| Traffic directions | Ability to apply limitations to the incoming/outgoing traffic flows. | incoming and outgoing | incoming and outgoing | outgoing |
| Limitations hierarchy | The ability to create a system of hierarchical limitations. | up to 200 logical channels, which are the children of the logical classes | up to 200 logical channels, which are the children of the logical classes | - |
| Logical stream filtering | Criteria used to filter the data streams. | PCAP expressions support (PCAP expressions allow flexible limitation based on any service header field; see the PCAP filters article) | PCAP expressions support (see the PCAP filters article) | |
| Traffic shaping in Web | Documentation about throughput limitation settings in the Web interface. | Traffic shaping | Traffic Shaping | Switch |
| Traffic shaping in CLI | Documentation about throughput limitation settings via CLI. | qm command | qm command | Commands for switch configuration |
Recommendations for the throughput limitation configuration
Use the following recommendations during the data throughput limitation configuration:
- The traffic of all network services should be limited. This keeps every traffic flow under control and lets resources be allocated to each flow separately.
- The throughput limitation should be performed on the devices closest to the data source. There is no need to duplicate throughput limiting rules for the data flows throughout the chain of intermediate devices.
- Many network services are bidirectional, so they require restrictions on devices for both the incoming and the outgoing traffic.
- To set correct throughput threshold values, first evaluate the average and maximum values of the service traffic, paying special attention to the busy hours. Data for this analysis can be collected via the InfiMONITOR monitoring system.
- The sum of the CIR values of the logical channels associated with one class should not exceed the maximum class throughput.
- RFC 4594.
- RFC 791.
- RFC 1349.
- RFC 2474.
- InfiMONITOR monitoring system.
- InfiLINK 2x2, InfiMAN 2x2 family devices web interface. QoS options.
- InfiLINK 2x2, InfiMAN 2x2 family devices web interface. Traffic shaping.
- InfiLINK Evolution, InfiMAN Evolution family devices web interface. QoS options.
- InfiLINK Evolution, InfiMAN Evolution family devices web interface. Traffic shaping.
- InfiLINK XG, InfiLINK XG 1000 family devices web interface. Configuring QoS.
- Quanta 5, Quanta 6 family devices web interface. Switch settings.
- Quanta 70 family devices web interface. Switch settings.
- QoS configuration in OS WANFleX.
Painless Project Management
Project management, besides being a growing component of greater success and result-oriented solutions, is fast becoming an integral part of organizations globally. The term 'Painless Project Management' refers to a careful streamlining of all the activities related to motivating, planning, organizing, and controlling available resources. It also ensures that the protocols to be followed, as well as the procedures for achieving specific goals, conform to the laid-down regulations.
The key and most common constraints of a project are scope, time, budget and quality. Done well, project management can take the organization to greater and newer heights; done poorly, it can break the organization. Painless project management strategies integrate the various skills required for the successful execution and implementation of a project. They come into play when a new product has to be introduced or a new application is being developed. These tools and techniques are applicable across different sectors such as construction, banking, defense, credit cards, human resources, and so forth.
Phases of Painless Project Management:
Project Management is divided into five phases and their painless implementation can lead to greater success for any organization. These stages are:
Project conception and initiation: The idea of the project is carefully analyzed in this phase, along with the benefits it might bring to specific processes or to the organization as a whole.
Project definition and planning: The project charter, the approach to be followed, and the scope of the project are defined in this phase. A team is then formed, which frames the requirements and sets a budget for the project. All available resources are also listed in this phase.
Project execution: In this stage, tasks are well defined and communicated to all team members. All important project-related information is generally discussed during this phase to avoid complications later.
Project performance and control: Here, project managers compare the scheduled tasks against those actually performed.
Closure of the project: Once all requirements have been reviewed and the specific set of goals has been achieved, the client evaluates and approves the outcome, and the project is closed. Whether the project succeeds or fails is reflected in how painlessly (or otherwise) all these processes were carried out.
Ways of Making Project Management Painless:
The most painful parts of project management tend to surface during the development of a product or application. The steps below can lead to better experiences and make a daunting process painless.
Nailing down the project details: Before a project is kicked off, ensure that all requirements of key stakeholders have been addressed and their approval obtained. The interests and expectations of these stakeholders have to be kept in mind before the requirements are finalized.
Engaging the right people at the right time: The right people, with the right skill sets meant for a specific domain, have to be brought in during the initial phases of a project. This ensures its seamless and successful implementation. Team members with prior experience have more knowledge and understanding, and are better equipped to handle the issues on hand. It’s essential to choose the team accordingly.
Definition of critical milestones: Critical milestones can be identified for each project phase. A real-time evaluation of each phase enables the team to track its progress and ensures that the work is done to the pre-defined specifications.
Keeping all communication lines open: At all stages of the project, it is imperative to have clear, consistent and transparent channels of communication.
Management of project risks: Irrespective of the phase in which a project is, risks of failure can creep in and have to be identified. Open and clear communication channels make it easier to identify risks and also mitigate them.
Test deliverables: All deliverables should be thoroughly analyzed and tested at the end of every phase. For the painless implementation of all processes, efforts should be made to ensure that the expected results are delivered at every stage of the project.
Project Management can certainly become painless and more impactful if the above measures are adopted and practiced by the team.
Author: Uma Daga
Despite being aware of the various cyber risks, small businesses choose to ignore the need for a robust cybersecurity posture. They believe malicious actors only target large organizations and not them. However, lack of phishing protection or a business continuity strategy in the event of a cyber-attack leaves them at high risk. These cyberattacks can cost a business dearly if the IT security teams do not have an effective cybersecurity policy. The primary step in this direction is to address the challenges and vulnerabilities characteristic of an SME environment.
Image source: SANS Institute
Types Of Threats That SMEs Face
An SME can face any cyber attack at any time. Many different types of threats and vulnerabilities expose an SME to losses, as discussed below.
- Automated Exploits: An automated exploit refers to non-targeted attacks against computers whose OS has a known security vulnerability. Most automated attacks target inherent OS vulnerabilities and succeed when the required patches have not been installed. SMEs may not have adequate technical staff, so software may not be updated with the latest patches.
- Malicious HTML Email: These email attacks enter the system through an HTML email having a malicious site link, which is usually termed ‘phishing.’ When the user unintentionally clicks on the link, it triggers an automatic download of malware. Some email hosting providers offer inbound and outbound SMTP authentication to thwart such threats and ensure email security. Anti-phishing services can also protect SMEs from spear-phishing and other attacks, thereby making the entire business communication, including email forwarding and email archiving, secure.
- Unofficial Web Surfing: Employees can use the organization’s devices to access non-business-related sites. Such practices can infect the enterprise network with spyware, trojans, and malware. Additionally, employees who access social media sites from office terminals can expose the complete network to a malware attack. Restricting access and installing anti-malware software, including ransomware protection, can limit the damage caused by reckless surfing.
- Poor Security Configuration: The computing system’s security configuration of many SMEs could be at the default setting. Organizations need to change it and protect the system with robust security tools. Otherwise, it is easy for malicious actors to find the default credentials and log into the network resources.
Consequences Of A Data Breach
The consequences of a data breach on an SME can be both short-term and long-term.
- Penalties For Non-Compliance: Regulatory bodies worldwide have defined cybersecurity standards for organizations, and they impose heavy fines when a data breach results from non-compliance with such regulations.
- Forensic Investigations: Another consequence of a data breach is that organizations will have to perform forensic investigations to determine an attack’s cause. While these investigations can bring valuable insights, they become costly for SMEs with a tight cybersecurity budget.
- Future Security Costs: An organization that underwent an attack needs to invest heavily in gaining back its customers’ trust. Additionally, if it doesn’t have backup facilities like MX backup, then the recovery costs escalate.
Customers share confidential data with a business, thinking that the organization takes adequate security measures and their data stays safe. However, during a cyberattack, the data gets compromised, leading to a loss of trust in the brand.
Diminished reputation and loss of trust following a substantial data breach can have a long-term effect on any organization. Brand reputation is an SME’s most priced asset, and it must continuously work to maintain brand integrity. A single data breach episode can damage the best of reputations. Clients do not want to place their trust in an enterprise that has a shaky cybersecurity foundation.
Implementing A Robust Cybersecurity Posture In The SME Environment
As per research by the Ponemon Institute, there has been a rise in the number of cyberattacks targeting small businesses. The same report mentions that out of 2,300 IT practitioners globally, 45% believed that their organization’s cybersecurity approach was ineffective. These developments make it clear that SMEs need to consider cybersecurity a top priority in the coming years. For implementing a robust cybersecurity posture, they need to overcome the following three key challenges.
- Cost Challenges: Unlike large enterprises, which have healthy IT budgets, SMEs often opt for cheaper choices with limited resources at their disposal. A typical example is a large enterprise that regularly carries out mock drills and invites ethical hackers to test their security systems using penetration tests. While these drills can provide useful insights into the inherent vulnerabilities, they are too expensive for a small business.
- Expertise Challenges: A lower cybersecurity budget of an SME directly translates to the limited ability to hire adequately skilled and experienced professionals. Most IT security teams in small firms lack cutting-edge skills, institutional knowledge, and broad experience. They cannot perform sophisticated and thorough cyber vulnerability assessments. The smart malicious actors leverage such vulnerabilities and launch sophisticated attacks that inefficient IT security teams fail to defend.
- Complexity Challenges: The cloud is a necessity for enterprises, big and small, that wish to collaborate efficiently. With increasing migration to the cloud and complex platforms, security teams must understand the specific challenges and added complexities of tenant migration and of operating in sophisticated environments. However, smaller, less experienced teams often lack the requisite know-how.
A cyber-attack can have devastating consequences for a small business. While larger enterprises may have adequate financial resources to recover from the negative PR and monetary losses, SMEs might not always be so lucky. Thus, an SME's need of the hour is to place a robust cybersecurity posture at the core of its policy-making.
North Korea: Researchers at McAfee have published a new report detailing North Korea's latest campaign, named Operation North Star. The attacks were carried out between late March and May 2020. The threat actor targeted individuals in the US defense and aerospace sectors with fake job offers in order to infect the workstations of employees who were actively looking for a new job. The attacks have been attributed to Hidden Cobra, an umbrella term for the North Korean government's hacking groups. Attacks using fake job offers typically arrive by email, but in this case the group also used social media to spread the malware. These attacks were focused on intelligence gathering: the attackers attempted to infect devices to gain access to network resources and steal any information available to the victim.
Effective incident management is the foundation of a successful Network Operations Center (NOC) and can ensure critical infrastructure issues are handled in a timely manner. Establishing an incident management framework at your organization will help your operations run smoothly, transparently, and efficiently.
Incident Management relies heavily on workflows documented in the NOC runbook and delivered through tools like ticketing systems. These workflows are essential to unlocking the value of a tiered NOC organization and its resources.
Definition and Purpose of Incident Management
The Information Technology Infrastructure Library (ITIL)* service framework defines an incident as:
- An unplanned interruption to a service,
- A reduction in the quality of a service, or
- An event that has not yet impacted the service to the customer or user.
ITIL states that incident management aims “to minimize the negative impact of incidents by restoring normal service operations as quickly as possible.” Effective incident management enables your NOC staff to fix what is broken as quickly as possible.
Benefits of Incident Management
The benefits of incident management include:
- Increased transparency and efficient communications for stakeholders regarding incident status and timelines;
- Documented records of past incidents;
- Ability to track, analyze, and report trends in incident data;
- Ability to document solutions for repeat incidents;
- Lower risk of serious outages;
- Quicker resolution times; and
- Increased customer satisfaction.
Incident Management Lifecycle
ITIL’s well-established best practices can help you set up a solid incident management framework for your organization. The incident management lifecycle provides steps to process incidents from beginning to end:
1. Detect and record the incident
Someone, or something, must identify that an incident is happening and log it so it can be tracked. Make sure you have the appropriate tools to report and document incidents. Incidents may be identified by technical staff, detected and reported automatically by monitoring tools, or communicated by end users. Organizations should offer multiple ways for end users to report incidents, including email, phone, and a self-service portal.
2. Categorize and prioritize the incident
After an incident is logged, it needs to be categorized and prioritized to determine how it should be handled and who should perform the next steps. Categorization and prioritization allow NOC support staff to make more informed decisions and quickly understand whether an incident can be easily resolved or requires escalation. Categories and priorities also reduce redundancy and speed up time to resolution.
Every incident should be assigned a logical category and, if necessary, a subcategory based on the type of incidents your organization is likely to encounter. Common examples of incident categories are network, cloud or virtual infrastructure, database, and application. Potential network subcategories include optical layer, switching, routing, and circuit.
Categorization will help you analyze incident data effectively and look for trends and patterns, which is a key part of effective problem management to prevent future incidents. It also helps you build your knowledge base and look for opportunities to automate processes, such as log data collection.
In addition to categories, incidents should be assigned priorities, such as P1, P2, P3, and P4, or High, Medium, and Low, based on the business impact and urgency of the incident. Prioritization helps determine the order in which incidents are sorted and worked on by technical staff.
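One common way to keep prioritization consistent is an impact/urgency matrix. A minimal illustration (the mapping below is an example; the exact matrix should follow your own policy):

```python
PRIORITY_MATRIX = {
    # (impact, urgency) -> priority
    ("high", "high"):     "P1",
    ("high", "medium"):   "P2",
    ("medium", "high"):   "P2",
    ("medium", "medium"): "P3",
}

def prioritize(impact: str, urgency: str) -> str:
    """Map business impact and urgency to a working priority; default to P4."""
    return PRIORITY_MATRIX.get((impact, urgency), "P4")

assert prioritize("high", "high") == "P1"  # major outage: work it first
assert prioritize("low", "low") == "P4"    # cosmetic issue: queue it
```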
3. Investigate the incident
Once an incident is categorized and prioritized, engineers can investigate the incident to find a resolution. This step can involve time-consuming research that drains your NOC’s resources. A key piece of this step is having well-trained staff who can investigate incidents efficiently and find the quickest path to resolution, along with a strong knowledge base that staff can reference for guidance. (See the “Best Practices for NOC Incident Management” section below for more on building a knowledge base.)
In most cases, the first-level team should be able to resolve incidents. Incidents that cannot be resolved in this initial investigation need to be escalated. See the “Best Practices for NOC Incident Management” below for more on how to minimize escalations.
4. Escalate the incident (if necessary)
Incidents that require escalation are assigned to the appropriate specialized technical groups, who will use their expertise or additional resources to determine how to resolve each incident.
5. Resolve the incident
The appropriate technical staff working on the incident should focus on resolving it or finding a workaround to restore the impacted service as quickly as possible. The technical staff should then communicate with the end users and/or impacted stakeholders to verify that they are satisfied and that the expected service has resumed.
6. Close the incident
Once the resolution is verified, the incident can be closed and the resolution documented in the knowledge base.
Best Practices for NOC Incident Management
Here are a few best practices to bolster your NOC’s incident lifecycle efficiency and effectiveness:
- Communicate to stakeholders throughout the incident lifecycle: Communicating the status of an incident throughout its lifecycle assures users and stakeholders that the incident is being properly handled. It also manages stakeholder expectations and engages them to follow up if they have additional questions or comments. Well-polished NOC support will standardize these communications as much as possible through NOC automation and templates.
- Minimize escalations whenever possible: Incidents should always be resolved by the lowest tiered team possible so higher-level specialized teams can focus on more complex issues and impacted users receive prompt resolution of incidents. A documented process that defines how and when to escalate an incident and who may do so ensures that incoming incidents end up in the most capable and efficient hands to resolve them as soon as possible.
- Build a robust knowledge base: Another key piece of successful incident management within your NOC is having a robust and well-maintained knowledge base for staff to reference for troubleshooting, which aids in limiting the number of escalations. This knowledge base should include supplemental support documentation, such as runbooks and flowcharts. This helps technical staff quickly identify next steps and probable causes, avoiding unnecessary rework and research. It also ensures that first-level technical staff have the proper resources to resolve incidents, rather than needlessly escalating incidents to more specialized staff.
- Integrate your NOC’s tools for maximum efficiency: An effective NOC receives notifications (like alarms) and information from multiple sources and presents them to staff in a single, consolidated view. The NOC also needs to incorporate input from calls, email, text, customer portals, knowledge bases, documentation, and workflow management tools, each potentially with its own platform.
This incident management framework and best practices can help ensure that your NOC resolves incidents quickly while keeping stakeholders informed. Aligning your NOC’s incident management lifecycle with ITIL best practices creates ease of mind and allows you to focus on your business.
Want to learn more about effective incident management? Contact us to see how we can help you improve your IT service strategy and NOC support.
*Originally developed by the UK government's Office of Government Commerce (OGC) - now known as the Cabinet Office - and currently managed and developed by AXELOS, ITIL is a framework of best practices for delivering efficient and effective support services.
For organizations around the world, cyber security is a chief concern. The potential of security risks continues to rise, according to a McKinsey & Company report, and the potential financial effect is jaw dropping.
Cybercriminals’ main target
As America’s and the world’s systems are targeted by cyber criminals, productivity can be significantly slowed and the technology which businesses rely on every day can be compromised. As a result, the bottom lines of organizations everywhere continue to take a big hit. The McKinsey & Company report estimates that $3 trillion could be lost by 2020 if defense mechanisms don’t outperform or match the rate of cyberattacks.
The report’s findings come in part from a survey of 200 organizations, and it is easy to see that the cyber security concern is widespread. Nearly 80% of the surveyed executives said their companies are unable to combat the increasing number of threats. “Theft of information assets” and “disruption of online processes” are two main concerns as an organization’s technology and systems are under attack.
The growing vulnerability of companies
Additionally, as companies rely more and more on its systems to operate every day, their vulnerability increases. With that said, executives agree that security needs to be a focus just as much as the technologies themselves to keep the organization up and running without the daily fear of data and system theft.
“The financial impact of cyberattacks is all too real, and integrating security into an organization’s technology systems must be a primary focus in this digital era,” says Dan Fiorito, Director of Security at CLEARNETWORK.
“Data breaches and security threats can bring a company’s productivity to a halt and in turn, directly affect the bottom line. Take the steps necessary to ensure your organization can stand up to the cyber bullies of the world.”
Contact us to see how we can help keep your business secure with cyber security defenses.
The smartphones in our pockets are minicomputers, but the batteries inside them may need an upgrade. We have been using lithium-ion batteries for many years, and scientists and researchers are now searching for alternatives. Their discoveries may soon change the way we see batteries, and even energy in general.
Just a couple of years ago, in 2016, Samsung had to issue an enormous recall of one of its flagship smartphones. Many Android users anticipated the Samsung Note7 release that year, but the device's safety hazard disappointed many people and left Samsung with a hefty bill to pay. The recall was prompted by a battery defect that caused overheating and, consequently, fires. The story was so widespread that the device was even banned on many flights.
Samsung offered their customers a refund or an exchange for their devices. According to TechRadar, the company lost about $3.1 billion for the Note7 recall. Even for an industry giant, this was a huge loss for the company.
A smart device’s battery malfunctioning isn’t always the company’s fault. We can also take precautions to ensure our device’s battery health and safety.
According to Popular Science, shallow charges will help extend your device's life. This means you shouldn't let your phone's battery level drop below 50% before giving it another charge, and you should unplug the device just before it reaches a full charge. Modern batteries will not explode from overcharging, but staying at full charge will shorten the battery's lifespan.
Although this type of charging helps your device's battery last longer, it is recommended that you do a full discharge once a month: bring the battery level down to about 5% so it can recalibrate itself.
One should also avoid storing their device in extremely hot or cold areas. And you should always use official chargers from the device’s manufacturer. Buying a cheaper alternative could be detrimental to your device.
Apple has also had its own issues, and we're not talking about the iPhone bending debacle of 2014. The company recently apologized for slowing down the processors of older phones to preserve battery life, which upset many people. In response, Apple's new iOS 11.3 update lets users check their battery health.
This is a great new feature that helps you determine if your battery is still in good condition. But if you conclude that your battery’s capacity is degraded, there isn’t much you can do besides purchase a new iPhone or pay for a new battery and installation through Apple since the battery isn’t removable. Because of the recent criticism, they have brought down the price for battery replacement from $79 to $29.
They have also added more transparency in regards to this by letting users view their battery health in the settings menu. “You can see the estimated battery capacity and the screen explains whether the battery can offer peak performance. iOS will now tell you in this screen if your device is being throttled due to a degraded battery.”
This version of Apple Vs. Samsung may have taken a turn for the worse. The two industry giants are both having difficulties with their batteries.
Although one can purchase a battery from Apple, replacing it is still not as easy as on other smartphones on the market. Phones with removable batteries are simple to service: it's as easy as ordering a new battery and popping it into place. Samsung, LG, and Motorola all offer smartphones that can be a good option if you are looking into this.
There are many reasons newer smartphones have a built-in battery. A removable energy source forces the battery itself into a certain shape, which limits the design opportunities for the phone. For example, metal and glass bodies (like those of many of the phones we are currently using) are not very compatible with removable batteries.
It’s also difficult to build a water-resistant phone with a removable battery. The fewer openings a phone has the more impermeable it is.
Since the battery is removable and will be exposed to the elements, it needs an extra sealing layer of protection, which causes the phone to be just a little bigger. Also, current batteries aren’t just rectangles anymore, they come in strange shapes intended specifically to fit within a design.
Lithium-ion batteries have been around for a long time, and we struggle with their shortcomings on a daily basis. One of the main problems is that they run out of power quickly, and they take a long time to charge. Most of us experience this every day with our devices.
The race to find a new energy source to replace lithium-ion has been ongoing. Scientists from all over the world are studying and exploring the many alternatives including “…fuel cells, photosynthesis, solid-state technologies, sodium-ion, solar, foam, aluminum graphite, sand and even human skin” according to Financial Times, but graphene (not to be confused with graphene oxide) seems to be at the top of the list.
“Graphene is made of a single layer of carbon atoms that are bonded together in a repeating pattern of hexagons. Graphene is one million times thinner than paper; so thin that it is actually considered two dimensional.” Under a microscope, graphene looks like a perfectly shaped honeycomb.
It is a common perception that graphene will somehow be involved with future energy technology. Because of its strength and powerful electrical conduction, many people believe it is the future of battery technology.
“Imagine a strong, light, relatively inexpensive material that can conduct electricity with greatly reduced energy losses: on a large scale, it could revolutionize electricity production and distribution from power plants; on a much smaller scale, it might spawn portable gadgets (such as cellphones) with much longer battery life.” Graphene powering smartphones and other portable devices is just the beginning. This could change the way we see energy altogether.
There are different ways to produce graphene, but the easiest way to make an extremely small sample is to take the tip of a pencil (which is actually made not of lead but of graphite), press a piece of scotch tape against it to peel off a thin layer, and then fold the tape over and over again until graphene is produced.
This is how Andre Geim and Konstantin Novoselov discovered graphene in 2004, which they later won a Nobel Prize for in 2010.
Besides graphene, there are a couple other upcoming battery technology solutions that could potentially take the place of lithium-ion batteries.
One of the most promising battery technology solutions is the solid-state battery. The problem with our old batteries is not the lithium or the electrons, which both work well, but the polymer electrolyte gel/liquid (the delivery system) that connects the anode and cathode. Being a liquid, it isn't as stable as we would hope, it doesn't work well in the cold, and it loses its power over time.
Solid-state batteries will potentially solve this problem by swapping out the gel using a tetrahedral framework made of lithium, germanium, phosphorus, and sulfur. This new battery technology could potentially be the breakthrough researchers, scientists, and everyone who uses batteries has been waiting for.
Some of the other latest developments in battery technology are gold nanowire batteries, which could potentially bring us batteries that do not die. The molten metal battery is another promising idea in the world of batteries, which some believe is the frontrunner for the battery of the future. But many emerging battery technologies are being worked on. Other notable mentions are iron flow batteries, zinc and lithium-air batteries, saltwater batteries, hydrogen batteries, and batteries that run on carbon and water.
One of the main goals for those researching within this field is creating something that is sustainable and environmentally friendly. With all of these new ideas—the future of battery technology looks promising and green.
Graphene seems to be the future of battery technology; the remaining problem is producing a usable amount of it, which is what scientists are currently researching and exploring. In recent graphene news, scientists in Australia found a new way of making graphene by heating soybean oil to 800 degrees Celsius on a nickel foil, causing the carbon atoms to arrange themselves in a one-layer lattice, which is essentially graphene. Still, the largest sample they were able to produce was about the size of a credit card.
Some of the smartest scientists are working full-time on this technology. The idea of a graphene supercapacitor could change the way we see batteries and energy in general. The future looks bright and full of energy.
Although it may consume much of the attention of IT professionals and it is a growing menace, ransomware isn’t the only malware threat in town. Phishing messages can also bring other varieties of dangerous malware to a company’s doorstep, and that malware can cause just as much or more damage as a ransomware attack. Over the last decade, there has been an 87% increase in malware infections. Two new phishing tricks that are being used by bad actors right now illustrate the insidious nature of malware threats and why it is so important that businesses take action to reduce their exposure to phishing-related malware threats.
What is Malware?
CompTIA defines malware as “any software that is intended to threaten or compromise information or systems. Hackers use malware to compromise networks or devices with the intent of stealing information or making a system inoperable. Malware is most often used to illicitly obtain information or disrupt business operations.” The word malware is a mash-up of “malicious software“. The first malware came on the scene in the 1980s and malicious software has been wreaking havoc ever since. The first documented computer virus, dubbed Elk Cloner, was discovered on a Mac in 1982, and the first strain of PC-based malware titled Brain made its debut in 1986.
Just like everything in tech, malware has evolved tremendously since those days. Innovation isn’t just the purview of the good guys; cybercriminals are constantly innovating too. Malware is a growth industry and cybercriminals have been quick to develop new strains of malware to do their dirty work. Ten years ago, the number of detected malware types stood at 28.84 million. By 2020, that number had ballooned to nearly 678 million varieties. Revenue in the malware industry has been steadily growing and is expected to reach 8 billion U.S. dollars by 2025.
What Are the Most Common Types of Malware?
Trojans are the most common variety of malware that IT teams will encounter. This type of malware masquerades as harmless software and can initiate a variety of attacks on systems. Some trojans are aided by human action, while others function without user intervention. The second most common type of malware is the virus, responsible for about 13% of total malware infections. Similar to a real-life virus, this type of malware attaches itself to benign files on your computer and then replicates, spreading itself and infecting other files. Ransomware is, of course, also a variety of malware.
What is Dridex Malware?
Dridex is banking malware distributed through phishing emails containing malicious Word or Excel attachments. It is the third most common type of malware attack. In a Dridex malware scenario, when an employee opens an attachment infected with Dridex and takes certain prompted, seemingly harmless actions, like enabling macros, the malware is downloaded and installed on the victim's device. Experts point to the notorious cybercrime group Evil Corp as the originator of Dridex malware. The group is also known for ransomware variants including BitPaymer, DoppelPaymer, WastedLocker and Grief.
How Are Cybercriminals Spreading Dridex Malware?
Over 92% of all malware is delivered by email. Dridex is currently being spread through social engineering in two sophisticated phishing campaigns: one that capitalizes on fear and uncertainty around the Omicron variant of COVID-19, and another that preys on job-loss anxiety.
Fake COVID-19 Exposure Warnings
In one phishing campaign, discovered by MalwareHunterTeam and 604Kuzushi, bad actors send prospective victims phishing emails with the subject line “COVID-19 testing result”. Inside, the message informs the recipient that they were exposed to a coworker who tested positive for the Omicron COVID-19 variant. The recipient is instructed to open an Excel document to learn more. The email helpfully includes the relevant password-protected Excel attachment and the password needed to open the document.
After the unwitting victim enters the password, they're shown a blurred document that looks like it contains legitimate data about COVID-19 and are prompted to "Enable Content" or "Enable Macros" to view it. Once macros are enabled, the device becomes infected with Dridex. Bleeping Computer reports that in some cases the threat actor taunts victims by displaying an alert containing the phone number for the "COVID-19 Funeral Assistance Helpline."
Fraudulent Employee Termination Notices
Security researchers have also uncovered another new Dridex malware phishing campaign that is preying on people’s fears in a time of economic uncertainty. This nasty scheme uses fake employee termination emails as a lure to draw users into taking action that infects their device with Dridex. Targets receive emails with subjects like “Employee Termination” A recent campaign of this scam told the unfortunate recipient that their employment was being terminated on December 24th, 2021, and that “this decision is not reversible.” These phishing messages are accompanied by an attached Excel spreadsheet with a name like “TermLetter.xls” that allegedly contains information on why they are being fired. The password required to open the document is also provided.
When the recipient opens the Excel spreadsheet and enters the password, a blurred form with the title “Personnel Action Form” or something similar is displayed, along with a prompt to “Enable Content” to view it properly. That enables malicious macros to be executed that create and launch a malicious HTA file saved to the C:\ProgramData folder.
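As a purely illustrative defensive heuristic (an assumption-laden sketch, not a description of any particular product's detection logic), a mail pipeline could flag the combination both campaigns rely on: a themed subject, an Office attachment, and the attachment password supplied in the message body:

```python
import re

OFFICE_EXTS = (".xls", ".xlsx", ".xlsm", ".doc", ".docm")
LURE_WORDS = ("covid", "termination", "test result")

def flag_dridex_style_lure(subject: str, body: str, attachments: list[str]) -> bool:
    """Flag the password-in-body + Office attachment pattern both campaigns share."""
    office = any(a.lower().endswith(OFFICE_EXTS) for a in attachments)
    password = re.search(r"\bpassword\b\s*(is|:)?\s*\S+", body, re.I) is not None
    themed = any(w in subject.lower() for w in LURE_WORDS)
    return office and password and themed

print(flag_dridex_style_lure("COVID-19 testing result",
                             "The password is hr2021", ["TermLetter.xls"]))  # True
```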
Stop Malware Infections by Keeping Phishing Messages Away from Employees
The vast majority of malware is spread by phishing messages. By making sure that employees never have the chance to make that fatal click on a phishing message, businesses reduce their exposure to malware threats. Stop phishing immediately with Graphus – the most simple, automated and affordable phishing defense available today. When you choose AI-powered, automated email security, your business gains an array of strong defenses against phishing that stop today’s nastiest phishing threats cold. Graphus’ AI technology refines your protection daily to ensure that your business is protected against tomorrow’s phishing threats too.
- You’ll gain a powerful guardian that protects your business from some of today’s nastiest threats like spear-phishing, business email compromise, ransomware and other horrors that will fit perfectly into your IT budget.
- Plus, automated security is up to 40% more effective at spotting and stopping malicious messages like phishing emails than a SEG or conventional security.
- Get detailed, actionable threat intelligence with the Graphus Threat Intelligence add-on, featuring detailed reports on the malicious or compromised IP and email addresses, URLs, and attachment hashes used in cyberattacks that target your users.
- Click here to watch a video demo of Graphus now.
While it may seem to be a new buzzword invented by the tech world, the concept of observability was actually introduced in engineering control theory decades ago. In simple terms, observability is when you infer the internal state of a system by observing only its external outputs.
When translating this concept to software development and modern IT infrastructure, a highly observable system exposes enough information for the operators to have a holistic picture of its health. When observability is implemented well, a system will not require operations teams to spend much effort on understanding its internal state.
Observability doesn’t center around technology. It’s a practice involving a set of processes and the associated tools to achieve the desired level of insight into the system. In this post, we’ll look at the key concepts involved in observability:
- The key components that make up observability
- Why observability is important
- The difference between monitoring and observability
- What to look for in an observability platform
The Basics of Observability: Key Components
Most observability tools deal with the three pillars of observability: logs, metrics, and traces. Some tools provide an interface to deal with a separate aspect of observability: events.
Metrics are counters or measurements of a system characteristic over a time period. Metrics are numeric by definition and represent aggregated data. Examples of metrics include average CPU usage per minute per server or the number of requests returning errors per JVM each day. Metrics can be collected from infrastructure, load balancers, and even applications.
Logs are intended to leave clues on what part of the codebase a request has reached, and if the application encountered anything unexpected or abnormal in processing that request. Logs can also be used to capture access attempts, as in the case of access logs. Logs can be generated by the application responding to requests or by the operating system (for example, syslog or the Windows Event Log).
Traces are similar to logs, but they provide operators with visibility into actual code steps. For example, traces could shed light on which method or service a certain request traversed before finishing (or crashing). Due to their nature, traces tend to be sampled and not stored for all requests. The ability to capture traces depends on the capabilities of your chosen observability platform or library.
Using metrics, an operator can identify when the system is operating slower than usual. Operators can then use traces to identify which part of the system is slower than usual and if it needs to be addressed; for further analysis, they can check logs for errors and exceptions.
In addition to the three pillars, you can use events to increase the observability of a system. For example, you can decide that every time an admin user executes a privileged task, the system registers an event in an observability tool. Events are registered with specific actions (for example, the execution of a function, the updating of a database record, or an exception thrown by the code). Analyzed over time, events can help determine patterns. Alternatively, structured logs can also be used as low-level events.
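In practice, the pillars are often emitted from the same code path. A schematic sketch (the names here are placeholders, not a specific vendor SDK):

```python
import json, logging, time, uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")
request_latency_s = []                    # stand-in for a real metrics client

def handle_request(order_id: str) -> None:
    trace_id = uuid.uuid4().hex           # trace: correlates every signal below
    start = time.monotonic()
    try:
        ...                               # business logic would run here
        log.info(json.dumps({             # structured log doubles as a low-level event
            "event": "order_processed",
            "order_id": order_id,
            "trace_id": trace_id,
        }))
    finally:
        request_latency_s.append(time.monotonic() - start)  # metric: aggregated later

handle_request("A-1001")
```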
Observability is important for the business continuity of your most critical systems. Those critical systems often include:
- Data sources
- Edge computing nodes
The more critical a component is to your overall system, the more important it is to invest in its observability.
Why do we need observability?
Observability isn’t a goal in itself, but rather a practice to reach the availability and reliability requirements of the business. Its goal is to reduce the mean time to repair (MTTR) and increase the mean time between failures (MTBF). This can happen only if operators are able to troubleshoot production problems quickly, identify problems before they become incidents, and apply proactive measures.
Operations teams use observability to get a complete picture of the systems they manage, and SecOps can use observability tools to find any breaches or malicious activity.
From an engineering perspective, observability allows developers to catch bugs early in the development cycle, resulting in higher confidence in software releases. This encourages the drive for innovation while maintaining quality software and higher release velocity. Support teams are also empowered, particularly when using Real User Monitoring (RUM), which leads to better collaboration between teams and better support for customers.
Not only do customers receive better products, but they also have a more reliable service. This is because engineers and support teams can identify issues and apply proactive fixes. A high level of observability can also expose the “unknown unknowns”—issues that were previously not known to have existed.
Monitoring and Observability
One point of confusion often encountered is the difference between observability and monitoring.
Monitoring is the action of continuously checking the metrics and logs of a system to determine if the system is unhealthy or needs manual intervention. Monitoring also centers around measuring individual components in isolation (such as the server, network, or database).
Observability, on the other hand, has a broader scope. That’s because it has to correlate all the data collected—including monitoring data—to show where exactly something is going wrong. In other words, monitoring tells you that something is not right, and observability tells you where that problem lies. While different, monitoring and observability go hand-in-hand, both dealing with the outputs of a system.
Choosing an Observability Platform
A good observability platform is an enterprise asset. It can help the business achieve security, reliability, and availability goals. Therefore, the choice of observability platform is an important one.
Modern IT systems are complex. Most are distributed, potentially multi or hybrid cloud, and have requirements for high availability. They are also often the target of malicious attacks.
A distributed system as complex as this can generate an enormous amount of observable data. A good observability platform should be able to retrieve data from all these sources, store and sift through it in a timely fashion, and build meaningful pictures from that data. Additionally, it should be able to separate the signal—that is, events of interest—from the noise. A good observability platform should correlate and enrich data to find anomalies and trends for operators.
You can use the list below to assess the suitability of an observability platform. In short, the platform of choice should be able to:
- Integrate with all of your systems across each of your application stacks, either natively or through reliable plugins.
- Install in an automated, reproducible way.
- Capture real-time data from all target components and store, index, and correlate them in a meaningful and cost-effective way.
- Show an overall picture of your complex system in real time.
- Provide traceability to show where exactly something is going wrong and how. It should be able to do this by separating important information from noise.
- Provide historical trends and anomaly reports.
- Show all relevant, contextual data to any alerts or reports.
- Help users with an easy-to-use interface while still allowing for the creation of customized, aggregated reports for different teams.
Introducing CrowdStrike Falcon LogScale: Modern Log Management
Falcon LogScale is a modern log management solution that addresses the current needs of system observability with a cost-efficient, unlimited licensing plan. It provides real-time alerts that can increase your observability and ensure business continuity.
Log everything and anything you want with CrowdStrike’s Falcon LogScale.
A very serious flaw has just been discovered in OpenSSL – an open-source and very popular cryptographic library, which has already incited a minor (for now) panic amongst security experts. According to the freshly released security bulletin by The OpenSSL Project, a missing bounds check in the handling of the TLS Heartbeat Extension can be used to reveal up to 64k of memory to a connected client or server.
In practice, this allows attackers to steal information that is, under normal conditions, protected by the SSL/TLS encryption used to secure the Internet.
SSL/TLS protocols provide communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs). Attackers can steal secret keys, user names and passwords, instant messages, emails and business’ critical documents and communication – all of this without leaving a trace.
This makes the flaw (which has already received an alias ‘Heartbleed bug’) absolutely critical, so countermeasures should be taken promptly.
There is no word (yet) on how widely the flaw might have been exploited so far. However, the vulnerable OpenSSL 1.0.1 was released in March 2012. Whoever might have learned about the security flaw in question could have been eavesdropping on any TLS/SSL-encrypted communications ever since. This makes the problem a potentially global one: OpenSSL is used by very popular server software such as Apache and nginx. Their combined market share is over 66%, according to Netcraft’s April 2014 Web Server Survey, and they are commonly used by businesses of all sizes.
As of today, a number of *nix-like operating systems are affected too, since they ship with a vulnerable OpenSSL:
- Debian Wheezy (stable), OpenSSL 1.0.1e-2+deb7u4
- Ubuntu 12.04.4 LTS, OpenSSL 1.0.1-4ubuntu5.11
- CentOS 6.5, OpenSSL 1.0.1e-15
- Fedora 18, OpenSSL 1.0.1e-4
- OpenBSD 5.3 (OpenSSL 1.0.1c) and 5.4 (OpenSSL 1.0.1c)
- FreeBSD 8.4 (OpenSSL 1.0.1e) and 9.1 (OpenSSL 1.0.1c)
- NetBSD 5.0.2 (OpenSSL 1.0.1e)
- OpenSUSE 12.2 (OpenSSL 1.0.1c)
Packages with older OpenSSL versions – Debian Squeeze (oldstable), OpenSSL 0.9.8o-4squeeze14, SUSE Linux Enterprise Server – are free of this flaw.
Amongst the possibly affected parties are operating system vendors and distributions, appliance vendors, and independent software vendors. They are strongly encouraged to adopt the fix – OpenSSL 1.0.1g – ASAP and notify their users about possible password leaks. New secret keys and certificates must be generated as well.
Service providers and users have to install the fix as it becomes available for the operating systems, networked appliances and software they use.
An online tool that tests any server by hostname for the CVE-2014-0160 bug is already in place, and it’s recommended you check it out.
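As a quick, local first check, the hedged Python snippet below prints the OpenSSL version your interpreter was built against and flags the vulnerable 1.0.1 through 1.0.1f range; it inspects only the local library, so remote servers still need a dedicated scanner such as the tool mentioned above.

```python
# Print the OpenSSL version linked into this Python build and flag the
# affected range (1.0.1 through 1.0.1f; 1.0.1g carries the fix).
# This checks only the local library, not any remote service.
import ssl

version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
print(version)

token = version.split()[1] if version.startswith("OpenSSL") else ""
vulnerable = token.startswith("1.0.1") and (len(token) == 5 or token[5] <= "f")
print("Potentially vulnerable to Heartbleed!" if vulnerable
      else "Not in the affected range.")
```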
Again, an attacker that might have exploited that vulnerability would leave absolutely no traces in the attacked systems, so there’s no way to learn if anyone was actually compromised. Every business that uses OpenSSL 1.0.1 through 1.0.1f is in danger, so the only reasonable action now is to plug this security sinkhole as soon as possible. | <urn:uuid:f91ce38b-569a-4735-bbe6-4d60a0202112> | CC-MAIN-2022-40 | https://www.kaspersky.com/blog/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/1640/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00707.warc.gz | en | 0.935178 | 793 | 2.640625 | 3 |
To stay ahead of hackers, consumers create more elaborate passwords filled with dashes, digits and capitalizations. As a result, our passwords have become so complex that we need password managers just to keep track of them all.
Forgotten passwords lead to abandoned purchases and password resets that leave our most valuable data vulnerable to social engineering and brute-force attacks by hackers.
From a 21st-century perspective, passwords are antiquated. But all that is about to change as we enter the age of biometrics.
The Password is YOU
Passwords are merely avatars — stand-ins designed to prove that you are, in fact, you. So why not remove the middleman altogether? This is the motivation behind biometric security. Similar to DNA, our body is equipped with all sorts of unique biological markers. With biometric security, your fingerprint, heart rate, tone of voice and even the pattern of your iris can be turned into code that proves your identity. The longest, most obscure password is still inferior to biometric authentication, since you can’t guess, hack or fool unique biological attributes — you must possess them.
Using microphones, fingerprint sensors and cameras already built into our smartphones and laptops, “multimodal” biometric authentication employs a series of voice- and facial-recognition markers to prove your identity.
With biometrics, consumers no longer have to create and remember passwords — authentication is simply part of who they are. Irish biometric company Daon uses voice, facial and fingerprint recognition to verify a phone’s owner, but it also picks up on GPS signals. If a phone’s owner is buying a cup of coffee or shopping online from home, the authentication threshold is less stringent.
But if a user is making a large purchase 300 miles from home, the number of biometric markers required to authenticate their identity increases. When combined, these biometric markers create an authentication score. Only when the score passes a certain threshold can a user gain access. While it may sound rather complicated, this kind of authentication can be done in seconds.
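A toy sketch of that scoring idea might look like the following; the weights, factor names, and thresholds are invented for illustration and bear no relation to Daon's actual product.

```python
# Toy multimodal authentication score: a weighted sum of per-factor match
# confidences compared against a context-dependent threshold. All numbers
# and factor names are invented for illustration.
WEIGHTS = {"fingerprint": 0.4, "face": 0.35, "voice": 0.25}

def auth_score(confidences: dict) -> float:
    return sum(WEIGHTS[f] * confidences.get(f, 0.0) for f in WEIGHTS)

def threshold(amount: float, miles_from_home: float) -> float:
    # Larger purchases made far from home demand a higher score.
    base = 0.6
    if amount > 500:
        base += 0.15
    if miles_from_home > 100:
        base += 0.15
    return base

scores = {"fingerprint": 0.9, "face": 0.8, "voice": 0.7}
# A large purchase 300 miles from home fails here (0.815 < 0.9).
print(auth_score(scores) >= threshold(amount=800, miles_from_home=300))
```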
Biometrics Gets Serious
Companies like Daon, along with major players like Google and MasterCard, founded the FIDO Alliance, a nonprofit designed to develop biometric best practices for tech companies who manufacture microchips, hardware and software for mobile devices.
In the future, smartphones will use a variety of physiological and behavioral biometrics to authenticate users, ranging from the way a phone is held to body odor, heartbeats, vein matching and even ear-shape recognition.
One of the most critical standards promoted by the FIDO Alliance is that biometric data should never leave a user’s smartphone. Any time large amounts of data are aggregated, it becomes a tempting target for hackers. With client-side registration and authentication, you control access to your unique biometric markers — preventing the possibility of corporate biometric breaches.
Reaching New Audiences
Motorola’s flagship smartphone started the biometric revolution in 2011, closely followed by the iPhone 5S and Samsung’s Galaxy S5, which shipped with embedded fingerprint scanners. With the launch of Apple’s Face ID system, built into the iPhone X, 2018 is turning out to be a landmark year for biometric security.
By 2019, 770 million biometric authentication apps will be downloaded — dramatically reducing the need for passwords. But for biometrics to take hold, consumers must become educated about this new technology, and comfortable about how their data will be stored and used. For many consumers, the convenience of a world without passwords, PINs and security questions will eventually win out over their fears.
As biometric payments reduce friction and increase seamless transactions, PINs will become a thing of the past. Biometric EMV cards will come equipped with built-in fingerprint readers that allow users to rest their finger on the card reader to verify their identity. Fingerprint data will be stored on the card itself and not on the bank or store’s servers.
Fraud detection will also become smart and seamless. Forget about those “was this you?” verification texts from your bank. Financial institutions will analyze a matrix of biometric factors and assign a “risk score” to each transaction — making fraud detection less disruptive to users.
We may even see a world with zero payment friction. Payments could become so seamless that banking applications won’t have a user interface at all. Consumers may be able to transfer money to a friend with a simple voice command.
State-of-the-Art Security Risks
Declared the most disruptive technology in digital commerce, 62% of consumers already feel more secure using fingerprint IDs over passwords. And by 2020, biometrics will secure 65% of mobile commerce and net $34.6 billion in annual revenue.
As biometric security becomes mainstream, ambitious hackers will search for workarounds. And if biometric data is stolen, consumers can’t get a new face or new fingerprints. These sensitivities are the reason why so many companies are wary of adopting biometric security markers. The moral and legal implications of storing and securing biometric data are immense. And many consumers will only accept this new technology if the markers are stored on their own devices.
Hackers have already proven that they can fool Apple’s facial recognition technology with masks made on 3D printers. And biometrics breaks two core tenets of authentication credentials — secrecy and the ability to replace them if compromised.
Our smartphones also have a 1 in 10,000 chance of making a mistake. These false verifications mean that for every 1 billion fingerprints authenticated via smartphone, 100,000 could be wrong.
Despite the security risks, the age of biometrics is coming, offering seamless, frictionless and secure payments for all. Customers already use their faces to log into Lloyd’s, Wells Fargo and HSBC banks, and MasterCard has already unveiled “selfie pay” to its European customers.
In China, companies have access to a government image database of 700 million people — half the country’s population. This facial data is being used for security, policing and counter-terrorism, but also for commercial ventures — predicting people’s food choices and granting entrance to amusement parks and trains.
Engineers at an Israeli company called Face-Six have invented a way to transform photos into texts — measuring data like the distance between eyes and transforming it into simple texts that can identify one face from millions in less than a second — and with 99% accuracy.
In Norway, this kind of facial recognition is already screening travelers at airports. Soon this technology will be used to identify missing persons, take attendance at universities, and feed shoppers personalized ads in stores.
Insurance giant Aetna will soon replace passwords altogether — relying on fingerprints and behavioral biometrics like the way you move your mouse. And Barclays is introducing vein-ID scans that examine the arrangement of your blood vessels.
While this kind of technology may sound very Big Brother, it will soon become a part of everyday life.
A New Security Standard
Riddled with loopholes and outdated for over 30 years, hackers may be the only people who mourn the death of passwords. In 2016, 40% of the world’s 1 billion smartphones were equipped with biometric recognition sensors. By 2020, that number will reach 100%.
Whether the payments of the future are made with a plastic card or a retina scan, Bluefin protects customer data the moment the information enters a payment system. To learn more, contact Bluefin today.
Matching technology to your business needs is always complicated. When it comes to cloud, understanding the different “as a service” models is important to make sure you receive the benefits you’re hoping to get by using this technology.
Software as a Service
Software as a Service (SaaS) is perhaps the simplest cloud model. In SaaS, you are a subscriber to an application. The application vendor hosts the application on their own site and stores all the data at their site as well. You have no responsibility for supporting the hardware or for making sure there’s sufficient storage for your data. You are also not responsible for applying any patches or updating the application. However, you are responsible for ensuring that user privileges are granted only to authorized employees.
Infrastructure as a Service
With Infrastructure as a Service (IaaS), the cloud provider gives you a virtual machine and storage, both sized according to your requirements. You’ll also get the basic networking services. The cloud provider handles the hardware support. You are responsible for all application support, usually including the operating system. The cloud provider will ensure the physical facility is secure, but you’ll need to handle the security of your virtual machines and your applications.
Platform as a Service
Platform as a Service (PaaS) provides you a virtual machine and frameworks needed to deploy and run applications. The goal is for these services to allow your internal developers to write, test, and deploy code faster, often using a “DevOps” methodology. In some ways, PaaS is like SaaS, where your developers are the subscribers to the development software.
All three of these models offer agility and scalability. You can add users and resources on demand, as you need them. There can be large cost savings, as you don’t need to have spare capacity purchased and provisioned in advance.
Software as a Service is ideal when there’s a vendor product that provides the exact functionality you need or the application isn’t a core business function. Platform as a Service is the right choice when you need to build custom applications but don’t want or need to handle the lower-level infrastructure that supports them. Infrastructure as a Service gives you the most control and the most flexibility to tailor your cloud environment to your exact specifications.
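To recap the split described above, here is a rough shared-responsibility sketch; the exact boundaries vary by provider and contract, so treat it as an approximation rather than a definitive matrix.

```python
# Rough shared-responsibility summary of the three models discussed above.
# The splits shown are simplified assumptions; check your provider's terms.
RESPONSIBILITY = {
    "SaaS": {"provider": ["hardware", "OS", "application", "patching"],
             "customer": ["user access control", "data entered"]},
    "PaaS": {"provider": ["hardware", "OS", "runtime/frameworks"],
             "customer": ["application code", "data", "access control"]},
    "IaaS": {"provider": ["hardware", "physical security", "basic networking"],
             "customer": ["OS", "applications", "data", "VM security"]},
}

print(RESPONSIBILITY["IaaS"]["customer"])
```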
Everything as a Service
In addition to the three “standard” cloud models, you’ll find many other products offered “as a Service.” This includes Database as a Service (DBaaS), Disaster Recovery as a Service (DRaaS), Desktop as a Service (DaaS), Identity as a Service (IDaaS), Security as a Service (SECaaS), and more. These offerings provide access to specialized functions and may be appropriate choices to meet specific technology needs.
Adopting and Adapting to Cloud
Whichever cloud model you choose, it will take time to adapt to it. You’ll need to migrate your existing technology to the new cloud platform and train your team to monitor and manage it. It’s often helpful to get support from a managed services provider with expertise in cloud to make sure your new environment operates properly and you get the benefits you expected.
CCS Technology Group provides cloud services that help you turn the flexibility of cloud into a competitive advantage. Contact us to learn more about choosing and using the right “as a Service” model for your business.
As desktops, laptops, tablets and smart boards continue to have an increasingly prominent presence in classrooms, the role of IT administrator is suddenly thrust upon educators at the primary and secondary school levels.
In addition to worrying about malware intrusions that would disrupt courses and the possibility of students downloading games and other web content that is unrelated to the curriculum, educators will also be in the position of trying to determine what programs the students are using most. This latter bit of knowledge is often critical for budget reallocation in the future.
That said, computers don’t appear to be on their way out of the classroom since they’ve become such an important part of how we learn – the democratization of information is a beautiful thing. For teachers, it’s just a matter of being able to properly govern the applications and endpoints being used in their classroom. That’s why today’s post is about classroom management software.
Lesson 1: The Basics of Application Control
One of the most effective classroom management software features is application control. The ability to restrict applications from running on classroom computers helps keep students on task while they’re logged into these endpoints. Likewise, teachers need to have the ability to restrict certain websites for obvious reasons, while whitelisting others that are essential to student learning.
Most importantly, this should be intuitive enough that:
- It won’t take time, effort or tedium for teachers to get the desired outcome.
- No intervention from IT staff will be necessary.
With the right application whitelisting tools, educators will have everything they need to eliminate classroom distractions.
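As a toy illustration of the allowlisting idea (and only that — real products like Faronics Insight do far more, and this is not how they are implemented), a minimal sketch might look like this, assuming the third-party psutil package is installed:

```python
# Toy application allowlist: terminate any process whose name is not on the
# teacher's approved list. Process names here are illustrative assumptions,
# and running this for real could disrupt system processes.
import psutil

ALLOWED = {"explorer.exe", "winword.exe", "chrome.exe"}  # illustrative list

for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name and name not in ALLOWED:
        try:
            proc.terminate()  # block the non-whitelisted application
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass  # some processes cannot (and should not) be terminated
```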
Lesson 2: Endpoint Insight On Command
Of course, students can still fool around on endpoints without access to certain applications and websites, which is part of the reason why insights on command are so important. Educators can actually view a single student’s screen, or all students’ screens if necessary, to see what they’re doing.
Classroom management solutions like Faronics Insight allow teachers to take certain actions, including the following:
- Automatically open certain applications on student endpoints.
- Create digital quizzes, tests or questionnaires for students.
- Send instant messages to students.
The result is a digital classroom environment in which educators are always informed, and empowered.
Lesson 3: IT Troubleshooting With the Push of a Button
The one thing that we haven’t addressed yet is IT troubleshooting. If a student deletes a shortcut, changes configurations, or accidentally downloads malware, teachers would ideally like to resolve the issue without disrupting the lesson or calling in an IT administrator, who might be tied up with other users’ issues.
Solutions like Faronics Deep Freeze enable users to restore their lab PCs to a pristine state with a simple restart in the event of any disruption or downtime due to mundane system-related issues. Such solutions can be really useful in situations where IT teams get overburdened with support tickets across multiple systems. In other words, students and teachers alike can proactively deal with system unavailability, rather than being completely dependent on IT admins.
New Training: Query Access Tables
In this 10-video skill, CBT Nuggets trainer Ben Finkel discusses how SQL is written and how it’s used to query databases. Watch this new Microsoft 365, Microsoft Office training.
Watch the full course: Microsoft Access 2019 Training
This training includes:
49 minutes of training
You’ll learn these topics in this skill:
Query Access Tables
Understanding SQL and Access
Retrieving Data with Select
Adding Data with Insert
Modifying Data with Update
Removing Data with Delete
Filtering Data with the Where Clause
Summing Data with Group By
Query Access Tables Summary
The Strengths of SQL
SQL (pronounced "sequel") is a language purposely designed for managing data, particularly structured data, stored in relational database management systems. SQL is used to define data schema, control access to the data, query the data, and manipulate data by inserting new data or updating and deleting existing data. It is one of the most widely used programming languages, even though it is almost 50 years old, because it is efficient, stable, and relatively easy to use.
SQL is a declarative programming language, which means the user states what they want, such as SELECT * FROM Customers WHERE Last_Name = "Doe", which would retrieve records containing the Last_Name "Doe". Databases that use SQL interpret SQL queries and respond appropriately. Unfortunately, there are variations of SQL, so code may have to be modified to be used in different databases, but it's easier to modify SQL than to learn an entirely new language for each type of database.
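Because Access itself isn't scriptable everywhere, the self-contained sketch below uses Python's built-in sqlite3 module to run the same kinds of statements covered in this skill; the table and data are invented for the demo, and note that standard SQL prefers single quotes for string literals, while Access also accepts the double quotes shown above.

```python
# Self-contained demo of the statements in this skill, using sqlite3 instead
# of Access so it runs anywhere. Table and column names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Customers (First_Name TEXT, Last_Name TEXT, City TEXT)")

# Adding data with INSERT
cur.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?)",
    [("John", "Doe", "Chicago"), ("Jane", "Doe", "Boston"), ("Ann", "Lee", "Chicago")],
)

# Retrieving data with SELECT, filtered by a WHERE clause
print(cur.execute("SELECT * FROM Customers WHERE Last_Name = 'Doe'").fetchall())

# Modifying data with UPDATE
cur.execute("UPDATE Customers SET City = 'Austin' WHERE First_Name = 'Jane'")

# Summing (aggregating) data with GROUP BY
print(cur.execute("SELECT City, COUNT(*) FROM Customers GROUP BY City").fetchall())

# Removing data with DELETE
cur.execute("DELETE FROM Customers WHERE Last_Name = 'Lee'")
con.close()
```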
People spend a lot of money on the Internet. From an individual standpoint, the amount the average person spends on Internet-based services is their largest expense outside of the money they spend on their residence, and perhaps their transportation costs. In order to understand the landscape of what is effectively a battle for Internet supremacy, you first have to take a look at the battlefield itself.
As of September, of the 7.5 billion people on the planet, nearly 3.9 billion of them (51.7%) use the Internet. In North America, 88.1 percent of people (or roughly 320 million) use the Internet in some fashion. This presents opportunities for thousands of companies. Some provide Internet access to would-be consumers. Some deliver content services. Some deliver applications, computing storage, or processing. This has led to the marketing boom you see on the Internet today; and, is where you find a battle raging between the demand created by billions of consumers, and the companies that deliver the services needed to access that customer base.
A lot of questions have been asked recently about what the Internet is. Questions like:
- How do you monetize access to billions of potential customers?
- Should Internet access be free?
- Is Internet access a utility (and thus governed by different rules)?
- Who is in charge of the Internet?
- What is the Internet of Things?
Questions like these produce a variety of answers. With the smoldering embers of the U.S. net neutrality laws suggesting further corporate control of the Internet, we’ll look at the way the Internet is set up in 2018, the costs for businesses and individuals, why the companies that control access to the Internet are licking their proverbial chops, and how all of this challenges the core interpretation of what exactly the Internet is.
The Internet in 2018
The Internet has come a long way in a short time — so far, it seems, that it’s hardly recognizable. The Internet of 2018 will continue to be the predominant marketplace in the world, and it is seemingly in the process of being consolidated. In fact, 50% of Internet traffic in North America comes from just 35 websites. In 2007, that same amount of traffic was spread across several thousand websites.
Whether or not a handful of companies own most of it is irrelevant to a consumer, but it’s getting to the point where the product is so consolidated that prices will almost assuredly increase. It’s like Gap Inc. They own The Gap (obviously), Old Navy, Banana Republic, and a few other brands that do largely the same thing: manufacture and sell clothes. Each of these retailers has its own branding and its own management teams, but the money goes to the same place.
The Internet, for all its vastness and entrepreneurial promise, is seemingly controlled, like many industries in the United States, by heavy hitters–companies like Google, Amazon, Facebook, etc. Without playing by their rules, many companies may as well peddle their wares from a kiosk at the local mall. 80% of referral traffic comes from Facebook and Google. This is why many retailers’ sustainability is tied to how well they are able to advertise their product for these two companies’ search algorithms. Today, tens of thousands of marketing companies have supplied the immense demand to build constructs that meet the demands outlined by the masters of the Internet.
The Internet is extremely important to us. Our business, and our clients’ businesses, rely on it every single day. We aren’t alone, and in many ways, the Internet is the newest (and arguably last) frontier. If something were to happen and the Internet were to go out for an extended period of time, tens-upon-tens of thousands of businesses, including ours, would likely cease to be. The truth is that we’d pay what we’d have to pay for Internet service.
For individuals who have come to depend on the Internet, they likely feel the same way. There is a story of Stanford professor Jeff Hancock, who used to ask his students to try to stay off the Internet for 48 hours over the weekend and see how it affected their lives. In 2009, when he assigned the task, “…there was a class revolt,” Hancock said. “The students emphatically said the assignment is impossible and unfair.” They stated several reasons for their near-constant Internet use, but it was clear that the biggest reason was that every one of those students had a mobile device that had made near-ubiquitous Internet accessibility a major part of their lives. They paid for access, and didn’t think it was reasonable for their professor to ride roughshod over their lives (for 48 hours). This new world that is so dependent on computing can be seen in the numbers. In September of 2009, a quarter of the world’s population, 1.7 billion people, used the Internet. That number would double by December of 2013.
This growth in demand for Internet access (affordable or otherwise), which you can now see in parts of central Asia and Africa, created markets, which in turn created more markets, and nearly overnight, the Internet went from what could be considered a novelty to a must-have utility-like entity–as necessary as food and water to some. In fact, it’s easy to relate. Think about how much you use the Internet. It’s enough to purchase it for your home AND for a mobile device. If you take your mobile data plan into account, the average U.S. and U.K. consumer pays over $100 a month (just over £73) to have near-ubiquitous Internet service.
The invention of social media has made it even more necessary to people. In the United States alone, over half of people (nearly 180 million) use social media. As a culture, we rely so heavily on it, the President uses social media nearly every day to comment on situations, and in some cases, state changes to federal policy. The combination of mobile devices and social media, e-commerce, and secure payment has created an insatiable demand for Internet access, Internet-based services, and the speed to properly broadcast all the content that people today have come to use.
In some places Internet costs more, and people pay it. In fact, people have so bought into the Internet that a whole generation of people would be completely lost without access to it. Whether that is a problem or not is a matter of opinion, but whether you pay $60 a year for Internet like the average Iranian, or $3000 per year like some do in Southeast Asia, people will compromise their own well-being for an Internet connection.
The Economy of the Internet
Projecting the economy of the Internet into the future is like projecting anything’s future state. It is largely unreliable. Today, the Internet is going through another shift. Looking past the hosted utility computing craze that we see today, you see a world that is completely connected. The Internet of Things, the title given to the act of connecting all things, is in its infancy, but some estimates have it being as big as 11 percent of the global economy by 2025. This strategy not only warrants the production of products that have the capabilities of being connected, it also makes certain that more money will be put into securing these systems.
The current state of the Internet’s economic success is a hotly debated issue among economists. On one hand, many, like Northwestern professor Robert Gordon, suggest that as good as the Internet has been at growing economic productivity, it’s shockingly less important than electrification at the turn of the 20th century. This is largely because productivity (that is, the creation of tangible goods) has shifted from manufacturing goods to fulfilling service requests. The Internet of Things, however, alters this thought, clearly presenting ways to boost efficiency and revolutionize traditional business operations in manufacturing, transportation, communications, and retail.
The new economy of the Internet is tied up in “things,” but that’s not to say that there isn’t an immense amount of commerce taking place. Ecommerce generates over $2.3 million every minute, every day, or roughly $3.3 billion a day; almost a quarter of which changes hands over wireless mobile networks. These levels of enterprise will be growing exponentially as the IoT grows.
Net Neutrality and Its Elimination
There are currently around 2,700 Internet Service Providers (ISPs) in the United States. With that many, it may seem impossible to argue that there isn’t enough competition. Some critics, however, have stated that a handful of ISPs control access to the Internet for most Americans. The FCC stated many times that net neutrality actually hurt ISP competition. While some would corroborate that claim, many smaller ISP owners considered it a pain only when they couldn’t secure capital that would allow them to deliver the service customers have come to expect. Some even considered net neutrality a good thing, since the money content providers such as Netflix would be forced to invest would go to larger ISPs such as AT&T, Comcast, and Spectrum.
When the FCC decided to roll back the net neutrality rules in December of 2017, it became evident that a more laissez-faire attitude toward the regulation of ISPs would have a long-term effect on consumers. Thus far there hasn’t been much of a change, but going forward it will be corporate entities regulating themselves, as the FCC gave up regulation of Internet providers.
Expect the ISPs to consolidate further through acquisition, and to establish a pricing structure that will allow them to maximize the profitability of their service. It remains to be seen if net neutrality is finished forever, or if the repeal of the mandate will increase infrastructure spending as many of the ISP lobby have suggested.
The Internet is important for individuals and businesses alike, and it’s growing larger and larger by the day. The more the third world develops infrastructure and allows the other half of the human race access to the Internet, the more important the Internet will seemingly become. For more great information about today’s most important technology topics, sign up for our newsletter today.
Data security refers to the process of securing digital information from unauthorized access, corruption, or all-out theft through its lifecycle.
When we discuss data security, we mainly talk about security practices within an organizational setting. The concept covers every aspect of information security, such as hardware, software, access controls, and organizational security policies. A sound and thoughtful data security strategy can make a difference in a business environment because it helps organizations protect one of their most valuable assets — data — against cyberattacks.
Why is data security important?
In the digital age, data reigns supreme. These days, all businesses deal with data in one way or another. Whether it's a financial institution handling sensitive customer data or an individual operation collecting contact information of its clientele, data is a significant part of all enterprises regardless of their size or industry. Data informs decision-making, improves efficiency, enables better customer service, and plays a major role in marketing.
With growing public awareness about the importance of data security and more data-related laws and regulations coming into play, companies face challenges in creating secure infrastructures and processes to handle enormous amounts of data.
Failure to establish a secure perimeter frequently results in a data breach, leading to substantial regulatory fines and reputational damage. According to IBM's Cost of Data Breach Report 2022, the global average data breach cost is estimated at $4.35 million. It's not hard to imagine that a data breach could spell the end of a company.
As data breaches and cybercrime continue to rise and become more sophisticated, companies of all sizes and industries look for ways to ensure the security of their data. And the first step in doing so is understanding the threats you're facing.
Data security threats
Cyber threats related to data security come in various shapes and forms. Here are some of the most common ones that every organization has to deal with.
Phishing
Phishing attacks are designed to acquire sensitive information from unsuspecting users. Hackers achieve their goal by crafting email messages that appear to be from a reputable source. In those messages, you are usually urged to download a malicious attachment or click on a malicious link. If you follow through, the attackers can access your device and get their hands on your sensitive data.
Accidental data exposure
Not all data breaches are caused by a cyberattack. Sometimes it's the byproduct of human error or lack of awareness. In the day-to-day of office life, employees will inevitably share data and exchange access credentials. Unfortunately, security might not be at the top of their priority list, and accidents can happen: data can end up on an unsecured server, and passwords can be stored in a publicly accessible sheet. And that's why cybersecurity training sessions are critical. Once employees grasp what's at stake and what to pay attention to, the risk of accidental data exposure can be drastically minimized.
Malware
Malware is usually spread via email. In most instances, hackers will launch a phishing campaign to trick users into downloading and installing a piece of malicious software. Once malware is on a corporate network, hackers can do pretty much anything, from tracking network activity to downloading enormous amounts of data without authorization.
Ransomware
Ransomware is a type of malware that is designed to encrypt data on the affected machine. If a ransomware attack is successful, bad actors will demand a ransom in return for decryption services.
Insider threats
Insider threats might be the hardest to anticipate. As you can guess, insider threats are employees who intentionally harm an organization's security perimeter. They might share sensitive data such as passwords with dubious third parties or steal business data and sell it on the black market.
Types of data security
As already discussed, data security comprises many different approaches and practices. Usually, the most effective way to ensure data security is to use a combination of security practices to limit the potential surface area of an attack.
Data encryption is one of the easiest ways to ensure the security of sensitive information. Fancy terminology aside, data encryption converts readable data into an unreadable encoded format. Think of it this way: even if a hacker can get their hands on data in your servers, if it is encrypted, the attacker can’t do anything unless they can decrypt it. Fortunately, contemporary encryption is unbelievably hard to crack without a decryption key.
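As a minimal sketch of encryption at rest, assuming the third-party cryptography package, the snippet below encrypts a record so that the stored token is unreadable without the key:

```python
# Minimal symmetric-encryption sketch using the third-party "cryptography"
# package (an assumption; any vetted library would do). Without the key,
# the stored token is unreadable, as described above.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the decryption key must be kept secret
f = Fernet(key)

token = f.encrypt(b"customer card: 4111 1111 1111 1111")
print(token)             # unreadable ciphertext, safe to store
print(f.decrypt(token))  # original bytes, recoverable only with the key
```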
Data, as with anything else in life, can become irrelevant. Just as stuff clogs your attic, data can clog your servers. Often, irrelevant data is not treated as a priority security-wise, and sometimes it's best just to get rid of it for good. Data erasure is an effective data management and security method because it shrinks the potential attack surface and potential liability in the event of a data breach.
Data masking is a data security technique during which a data set is duplicated but with sensitive data obfuscated. The benign copy is usually used for testing and training for cybersecurity purposes. Masked data is useless for a hacker because it is essentially incoherent unless the hacker knows how that data has been obfuscated.
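A toy masking routine might look like the following; the field names and masking rules are invented for illustration:

```python
# Toy data-masking sketch: produce a duplicate record with sensitive fields
# obfuscated so it can be used for testing or training. Field names are
# invented for the illustration.
import re

def mask_record(record: dict) -> dict:
    masked = dict(record)
    # Keep only the last four digits of the card number
    masked["card"] = re.sub(r"\d(?=\d{4})", "*", record["card"])
    # Hide most of the local part of the email address
    user, _, domain = record["email"].partition("@")
    masked["email"] = user[0] + "***@" + domain
    return masked

print(mask_record({"card": "4111111111111111", "email": "jane.doe@example.com"}))
# {'card': '************1111', 'email': 'j***@example.com'}
```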
Data backups are one of the easiest steps an organization can take to mitigate the potential dangers of data loss in a cyber event. Backups ensure that even if data is compromised or stolen, it can be recovered to its previous state rather than entirely disappear.
Data security vs. data privacy
Today, the terms data security and data privacy are used a lot. At times, they might seem interchangeable. While in a sense that can be true, the two terms are technically distinct concepts.
Data security is a broad term that encompasses data privacy. However, when we talk about data privacy, we mainly refer to cybersecurity practices that are aimed at protecting data from unauthorized access or corruption.
Data privacy, on the other hand, is a concept that aims to ensure that the way businesses collect, store, and use data is compliant with legal regulations.
Data security compliance
Today, most countries have laws and regulations that govern the way organizations should collect, store, and use data. Regulatory compliance can be a challenge for companies of all sizes and industries. Still, they're vital in ensuring that your data will not be abused and remain secure at all times. Here are some of the most important regulations that relate to data security.
General Data Protection Regulation (GDPR)
The GDPR is the European Union's primary data protection and privacy legislation. Passed in 2016 and implemented in 2018, the GDPR ensures that organizations handle consumer data responsibly and securely. The GDPR was one of the first legislative efforts requiring companies to ask for user consent to collect their data.
California Consumer Privacy Act (CCPA)
The CCPA went into effect on Jan. 1, 2020. It provides consumers in California with additional rights and protections regarding how businesses use their personal information. The CCPA is very similar to the GDPR and imposes many of the same obligations on businesses that the GDPR does.
Health Insurance Portability and Accountability Act (HIPAA)
HIPAA is the United States' data protection and security legislation that regulates electronic protected health information (ePHI). It is aimed mainly at healthcare providers and partnering institutions that deal with such data. HIPAA lays out requirements for the security of ePHI, which involves specific physical, technological, and administrative safeguards.
Sarbanes-Oxley (SOX) Act
The SOX act was passed in 2002 to protect shareholders and the general public from fraudulent corporate practices and improve corporate disclosures' accuracy. Even though the act does not specify how an organization should store records, it does define which documents should be stored and for how long. The SOX act primarily applies to public corporations.
Payment Card Industry Data Security Standard (PCI DSS)
The PCI DSS is a set of regulations geared toward organizations that process, store, and transmit credit card data. It lays out requirements to ensure that all credit card-related data is handled securely.
International Standards Organization (ISO) 27001
ISO/IEC 27001 is an Information security management standard that outlines how business entities should manage risk related to cybersecurity threats. Defined within the ISO 27001 standard are data security guidelines and requirements intended to protect an organization's data assets from unauthorized access or loss. The ISO/IEC 27001 is not a piece of legislation in the sense that the GDPR is. It is rather a standard that helps businesses comply with regulations such as the GDPR cost-effectively.
Data security best practices
Data security is a complex concept that includes a variety of practices and processes working together like a well-oiled machine. The data security strategy within the organization depends on its size, IT infrastructure, resources, and a number of other variables. However, a few security measures can be applied in any organization.
Access management and controls
Access management and controls help organizations set rules for who has access to networks, systems, files, and various accounts within the digital ecosystem. Proper access management and control integration can significantly shrink the potential attack surface area.
Employee education
One of the leading causes of data breaches is human error. The obvious counter is education. For an organization that wishes to be successful security-wise, a team that is aware of the risks it might face and how they would be handled is crucial.
Password management
Weak, reused, or old passwords also play a significant role in data breaches. It's understandable because today, an average person needs about 100 passwords. Ensuring that each one is unique and complex is impossible without help from technology. Password managers are tools designed to help individuals as well as organizations to create strong passwords and securely store them and access them whenever there's a need. Today's business password managers improve organizational security as a whole and spur productivity with handy features such as autofill and autosave.
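As a small illustration, generating the kind of unique, high-entropy credential a password manager stores takes only a few lines with Python's standard secrets module:

```python
# Generate a unique, high-entropy password, the kind a password manager
# creates and stores so employees don't have to remember it. The length
# and character set here are illustrative choices.
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)
```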
Cloud data security
Many organizations rely on cloud technologies to carry out daily operations. While cloud technology offers significant benefits, it simultaneously opens up additional security risks. Misconfigured cloud technology services can lead to data leaks and breaches. Therefore, you must take action to ensure that any cloud apps you use are properly configured to limit potential risks.
Data encryption
As discussed earlier, data encryption is a way to secure information within databases and servers by making it unreadable without the decryption key. Encryption is essential to overall data security and should always be employed.
Data loss prevention and backups
These days, most business-related information is stored in databases. The data they contain may be customer records, credit card details, or internal company documents. Backing up data protects the organization from accidental data loss or corruption. Regularly scheduled backups can also help in the case of a ransomware attack, because they could be used to restore the affected data.
Incident response and disaster recovery plans
An incident response plan is an organization's systemic approach to managing a security-related event. Usually, such plans are purpose-built to address malware attacks, data breaches, unauthorized network intrusions, and other cybersecurity-related events. With a comprehensive incident response plan, the organization has a clear pathway to mitigating a cyber attack in a swift and coordinated manner.
How NordPass Business can help
As mentioned, weak, old, or reused passwords are often the cause of a data breach. Password fatigue is a major factor that leads people to use weak and easy-to-remember passwords across multiple accounts. However, password fatigue can be mitigated with the help of a corporate password manager.
NordPass Business is purpose-built to improve organizational security and take a load off employees when creating and remembering passwords. Keep all your business passwords, credit cards, and other sensitive information in a single encrypted vault and securely access it whenever you need. Thanks to company-wide settings present in NordPass Business, you can set password policies across your organization. And with the help of the Admin Panel, access management is easier than ever.
Because NordPass Business is certified according to ISO/IEC 27001:2017 and SOC 2 Type 1 regulatory standards, it can be a critical security tool for companies trying to meet GDPR and HIPAA compliance standards.
Try NordPass Business with the 30-day free trial and enjoy improved productivity and security within your organization.
Although the Content Delivery Network (CDN) is yet to reveal all of its true power in the decades to come, an insight into how it came to be is quite useful. CDNs are surrogate web servers distributed across data centers in different geographical regions to deliver content to end users based on the user's proximity. Thus, using a CDN hosting system instead of a standard one offers a cost-effective solution for an online vendor, or an e-commerce owner, to keep his impatient customers satisfied.
It also means faster performance for a hosted website and better security against hacker attacks. This is because CDNs maintain multiple Points of Presence, i.e., servers that store copies of identical content and apply a mechanism that provides logs and information to the origin servers. Instead of a standard client-server communication, two communication flows are used: one between the client and the surrogate server, and another between the surrogate server and the origin server.
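A minimal sketch of those two flows, with an invented origin URL and cache lifetime, might look like this:

```python
# Minimal sketch of the two-flow model described above: the edge (surrogate)
# answers clients from its local cache and only contacts the origin on a
# miss. The origin URL and TTL are illustrative assumptions.
import time
import urllib.request

ORIGIN = "https://origin.example.com"  # hypothetical origin server
TTL_SECONDS = 60                       # illustrative cache lifetime

_cache = {}  # path -> (fetched_at, body)

def serve(path: str) -> bytes:
    """Flow 1: client -> surrogate. Serve from the local cache when fresh."""
    entry = _cache.get(path)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]  # cache hit: the origin is never contacted
    # Flow 2: surrogate -> origin. Fetch, store a copy, then answer the client.
    with urllib.request.urlopen(ORIGIN + path) as resp:  # path starts with "/"
        body = resp.read()
    _cache[path] = (time.time(), body)
    return body
```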
To sum it up, CDNs keep:
- the Internet less clogged
- the Web customers more satisfied
- the e-commerce business more cost-effective
CDNs Through Generations
The development of content delivery networks was first driven by extreme bandwidth pressures, as video streaming grew in demand along with the number of content providers. That was in the past. Now, CDNs are a continuing trend that, with the emergence of cloud computing, involves all of its layers:
- SaaS (Software as a service), e.g. Google Docs
- IaaS (Infrastructure as a service), e.g. Amazon
- PaaS (Platform as a service) e.g. Google App Engine
- BPaaS (Business Process as a service) e.g. advertisements, payments
First-generation CDNs did not appear before the late '90s. However, some technological innovations that preceded this generation of CDNs, such as server farms, hierarchical caching, and caching proxy deployment, were crucial for laying the groundwork of such Internet un-clogging technology.
First-generation CDNs were designed to address the higher demand for audio and video streaming, to accelerate websites, to support growing volumes of content, and, finally, to enable the companies providing products or services to handle all requests from Internet users without facing a significant loss in revenue while dealing with their IT infrastructure.
The main focus of second-generation CDNs, however, was peer-to-peer production, cloud computing, and energy awareness, as well as serving the Internet crowd's demand for more interactivity, and not only from their desktop browsers but also from their mobile devices. Many ISPs, telcos, IT companies, and traditional broadcasters spread across the globe, and some moved into the CDN industry themselves (Amazon, AT&T).
Third-generation CDNs are expected to be completely community driven, autonomous, and self-manageable. Their main focus: the quality of experience for the end user.
Some Historical Events that Sped Up the Development of CDN Technologies
- 9/11 terrorist attacks: a sudden, unanticipated mass of Internet users tried to access particular news sites simultaneously. This caused severe caching problems and, ultimately, more money being invested in developing CDN hosting to protect websites from flash crowds
- Akamai Technologies evolved out of an MIT research effort aimed at solving the flash crowd problem
- Broadband Services Forum (BSF), ICAP forum, Internet Streaming Media Alliance organizations all took initiatives to develop standards for delivering broadband content, streaming rich media content – video, audio, and associated data – over the Internet
- By 2002, large-scale ISPs started building their own CDN functionality, providing customized services
- In 2004, more than 3000 companies were found to use CDNs, spending more than $20 million monthly
- In 2005, CDN revenue for both streaming video and Internet radio was estimated to grow at 40%
- Combined commercial market value for streaming audio, video, streaming audio and video advertising, download media and entertainment was estimated at between $385 million to $452 million in 2005
- In 2008 Amazon launched their Content Delivery Network
- In 2011 AT&T announces their new cloud-based Content Delivery Network that enables content to flow from its 38 data centers around the world to reduce transit and latency times
- In May 2011, Google said it had 200,000 active applications and 100,000 active developers.
- The stocks of leading CDN market players (Akamai, Limelight, EdgeCast) slumped; Akamai’s total revenue for 2011 was $1,159 million, a 13% increase over 2010 revenue of $1,024 million
- Akamai’s stock revenue for 2012 is reported to be $345.32 million
- Cisco projected 2012 Video CDN revenues at around $1 billion with growth for 2013 between 40% and 45%, and the complete market to grow from $6 billion to $12 billion by 2015.
On Beer, Basketball, and IKEA
As defined in Wikipedia, cognitive biases are tendencies to think in certain ways that can lead to systematic deviations from a standard of rationality or good judgment. Cognitive biases are patterns of thinking and behavior which we consistently use despite them leading us to make less than optimal decisions. Wikipedia lists over 90 examples of cognitive bias. Check out the link, and I promise you’ll find several examples that influence your thinking daily. With all this irrational thinking going on, how can we avoid making bad decisions in our work and personal lives?
The field of economics defines a “rational actor” as one who makes decisions in their own self-interest in order to maximize their utility (i.e. maximize benefits received). Most economic theories either implicitly or explicitly assume that all actors are rational. Behavioral Economics is the study of other factors that influence economic decision making, like social or emotional factors. Below are some examples of “irrational” behaviors and a discussion of how you might avoid letting these biases influence you into making bad decisions.
IKEA Effect / Not Invented Here
Research has shown that many of us are likely to overvalue things that we have built ourselves, such as furniture from IKEA. People seem to hang on to IKEA furniture much longer than they would other pieces of similar price and quality. This additional layer of value could be caused by an emotional attachment to something you “built” with your own two hands. Now, I cannot make heads or tails of IKEA furniture directions, so we won’t be keeping anything I put together. My wife, however, is one of the rare breed that can actually make sense of those directions, so those pieces might stick around a bit. Regardless of your level of IKEA craftsmanship, the fact remains that one’s participation in the production of a product impacts their perceived value of said item.
Kenway has been working with a long-time client (let’s call them Company A for identification’s sake) that is currently involved in an integration effort following a merger (with Company B). Given that the two companies were in the same industry and region, they naturally had faced almost identical business problems. While both companies were successful, they had solved these problems with different approaches. One great example of this was a key business system which both companies had purchased independently, but implemented differently using custom configuration, 3rd party add-ons, and in-house custom development. As I sat in a meeting to determine the future direction of this platform for the merged company, I was struck by how each of the people that had been involved in implementing the software initially were strongly attached to “their” version. I even found myself questioning some elements of Company B’s implementation, because I had been involved with Company A’s implementation. This is dangerous territory. Not only were decision makers being influenced by their individual bias, but the project would also face significant change management challenges if the end product was perceived by some users as a step back from their current tool.
Thankfully, the group eventually was able to set aside their attachment and identify a path forward that involves using a unified approach but making some changes to account for key features that the entire group agreed were valuable in the end-state solution. These sorts of challenges will continue to arise throughout the integration.
It is never easy to see something that you've worked on get tossed aside for someone else's idea. In these circumstances, looking to a truly unbiased source for an opinion is helpful. Ask someone who wasn't on either project, find someone from another department who has enough knowledge to evaluate the options, or use an external consultant to provide an unbiased assessment of the options before moving forward with a decision. All of these approaches encourage fact-based evaluation while minimizing the emotion- or opinion-based reasoning of a biased decision process.
Confirmation Bias and the Effect of Expectations
The expectations that we have prior to experiencing something profoundly impact the way we perceive that event. Dan Ariely’s book, Predictably Irrational, describes several examples that illustrate the Effect of Expectations. One of my favorites is this experiment with beer. College students given free, blind samples of beer tended to choose MIT Brew, which was essentially Budweiser with some balsamic vinegar added, over a regular Budweiser. However, if the subjects were told before the taste test that one of the beers included vinegar, they were far more likely to dislike the vinegar beer and choose Budweiser instead.
You may have seen a similar effect when watching a sporting event with a fervent fan of one of the teams. I recently had the opportunity to watch my nephew play in a high school conference championship game. Two of my close friends, both graduates of the University of Illinois, attended the game with me. My nephew is an excellent basketball player (if I do say so myself), but in this game he was asked to guard a 6'9" player who is being recruited to play at Division I schools. At about 6'1", this was a tall order for my nephew and for several of his teammates who also took their turns. Both teams played hard and, given the significant size difference, you won't be surprised to hear that the game got physical. The big guy took some hard fouls and delivered a few as well, including an offensive foul that sent my nephew to the sideline bleeding (as you might imagine, my sister was not pleased with this turn of events). In the end, my nephew's team lost the game.
The next day, my two friends were not happy to learn that the player is being recruited by their alma mater. They could not believe that their school wanted an athlete who was so clearly a dirty player. However, as luck would have it, one of my two friends is a teacher, and one of his students is the son of a referee who worked the game. As they discussed it, his student related that his dad was impressed at how well the player had kept his cool, given the physical defense played on him the whole game and some of the fouls that he absorbed. My friends had succumbed to a mixture of Confirmation Bias and the Effect of Expectations. Exposed to the very same actions the referee saw, they perceived dirty play because they were rooting for my nephew's team. The referee saw those same actions as showing great restraint.
This effect is at work in the business world as well. A few years back, a Kenway client decided on an implementation strategy to solve a reporting problem that involved creating an intermediate data warehouse (the decision pre-dated Kenway’s involvement). However, one of the key business sponsors disagreed with the approach at the time. Nonetheless, the project went forward, and the data warehouse continues to be a part of ongoing projects for this client. Despite its continued use, the same business partner that disagreed with the approach initially continues to raise objections to the cost or technical approach of each solution that proposes to leverage this particular data warehouse. Regardless of whether or not the original decision was correct, it is clear that the original objector continues to oppose everything to do with the solution. Moreover, the technology partners struggle to respond to his objections, because they have become so conditioned to expect the objection.
It takes significant effort to set aside our expectations and original opinions on a topic, but try to take a neutral approach. Moreover, look at the others in the room as sources of new information, as people from whom we can learn, not as adversaries. If you go into each encounter seeking to learn something, you will be more likely to do just that.
Overcoming Cognitive Bias
The first step in overcoming cognitive bias is to accept that it exists and that it may impact you. Just having that knowledge will help you think more objectively about the business problems that you face. Furthermore, don’t hesitate to ask for outside help, make an honest attempt to learn from and understand the perspective of others every time you engage with them, and acknowledge your own bias. Finally, if you really must buy IKEA furniture, call my wife before you attempt to assemble it. | <urn:uuid:3f941d1f-a83c-4cd2-98fd-5ea8a44f3285> | CC-MAIN-2022-40 | https://www.kenwayconsulting.com/blog/on-beer-basketball-and-ikea/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00107.warc.gz | en | 0.980164 | 1,706 | 2.859375 | 3 |
What Is a Kubernetes ConfigMap?
A Kubernetes ConfigMap is an API object that allows you to store data as key-value pairs. Kubernetes pods can use ConfigMaps as configuration files, environment variables or command-line arguments.
ConfigMaps allow you to decouple environment-specific configurations from containers to make applications portable. However, they are not suitable for confidential data storage. ConfigMaps are not encrypted in any way, and all data they contain is visible to anyone who can access the file. You can use Kubernetes secrets to store sensitive information.
Another potential drawback of ConfigMaps is that they are limited to 1MB of data. Larger datasets may require different storage methods, such as separate file mounts, file services or databases.
Creating and Viewing ConfigMaps
The code examples in this section were shared in the Kubernetes documentation.
Creating a ConfigMap
To create a new ConfigMap, use this kubectl command:
kubectl create configmap <name> <data-source>
The <name> is the name of the ConfigMap, which should be valid for use as a DNS subdomain. The <data-source> indicates the files or values from which ConfigMap data should be obtained.
You can create ConfigMaps based on one file, several files, directories, or env-files (lists of environment variables). The basename of each file is used as the key, and the contents of the file becomes the value.
|ConfigMap Data Source||Example kubectl command|
|Single file||kubectl create configmap <name> --from-file=<path-to-file>|
|Directory||kubectl create configmap <name> --from-file=<path-to-directory>|
|Env-file||kubectl create configmap <name> --from-env-file=<path-to-env-file>|
|Literal values||kubectl create configmap <name> --from-literal=<key>=<value>|
Viewing a ConfigMap
To view the data in a ConfigMap via console output, use the command:
kubectl describe configmaps <name>
The output looks like this. Each key, which was created based on a filename, is followed by the separator "----". In this example the ConfigMap was created from two files, game.properties and ui.properties.
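The console output itself did not survive in this copy of the article; the block below is a representative reconstruction of what kubectl describe prints for a ConfigMap built from those two files (the property values are assumptions, not the article's originals):

Name:         game-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
game.properties:
----
enemies=aliens
lives=3
ui.properties:
----
color.good=purple
allow.textmode=true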
To view the ConfigMap in YAML format, use this command:
kubectl get configmaps <name> -o yaml
The output looks like this:
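The YAML output is likewise missing here; a representative reconstruction for the same two-file ConfigMap (values assumed) would be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
  namespace: default
data:
  game.properties: |
    enemies=aliens
    lives=3
  ui.properties: |
    color.good=purple
    allow.textmode=true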
How to Consume Kubernetes ConfigMaps
There are three main ways in which ConfigMaps can be accessed:
- Mounting a ConfigMap as a data volume
- Accessing the ConfigMap from pods in the same namespace
- Defining a ConfigMap separately from pods and using it for other components of the Kubernetes cluster
Using ConfigMaps as Files from a Pod
To ingest a ConfigMap in a volume within a pod:
- Use an existing ConfigMap or create a new one. The same ConfigMap can be referenced by multiple pods.
- Modify the definition of your pod, adding a volume under .spec.volumes[]. Name the volume, and set the .spec.volumes[].configMap.name field to reference the ConfigMap object.
- Add a .spec.containers[].volumeMounts[] entry to every container using the ConfigMap. Specify .spec.containers[].volumeMounts[].readOnly = true and set .spec.containers[].volumeMounts[].mountPath to the path where the ConfigMap should be mounted.
- Each key in the ConfigMap will now become a file in the mounted directory. To access the data, change your command-line instructions to look for files in that directory matching the keys in the ConfigMap.
See the example below.
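The example referenced above is missing from this copy of the article; the following minimal sketch shows the shape such a pod definition takes (the pod name, image, and the game-config ConfigMap name are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: config-volume
          mountPath: "/config"    # each key in the ConfigMap appears as a file here
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: game-config         # must match an existing ConfigMap in the namespace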
Immutable ConfigMaps
Immutable Secrets and ConfigMaps let you prevent changes to data, which is especially useful for clusters that use ConfigMaps extensively (tens of thousands of unique mounts or more). Immutability protects against unintended updates that could cause an application outage, and it also improves cluster performance by reducing the load on kube-apiserver.
The ImmutableEphemeralVolumes feature gate controls this feature. To make a ConfigMap immutable, set its immutable field to true, as follows:
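The snippet this sentence refers to is missing here; a minimal reconstruction (the name and data value are assumptions) looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_host: "db.internal"    # assumed example value
immutable: true                   # once applied, the data can no longer be changed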
If a ConfigMap is set as immutable, you cannot revert the setting or modify the contents of the data or binaryData fields; you have to delete the ConfigMap and recreate it. Existing pods maintain mount points to the deleted ConfigMap, so it is advisable to recreate those pods as well.
Here are a few important configuration practices for day-to-day management of ConfigMaps.
Defining ConfigMaps as Optional
Before a pod is deployed, the namespace must contain the ConfigMap the pod references. You can set the optional flag to prevent pods from failing to start when the ConfigMap is absent. An admission controller can then prevent deployments that lack specific configuration values.
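As a sketch, the flag sits on the ConfigMap reference inside the pod spec; this fragment (with an assumed app-config ConfigMap) would slot into a pod's .spec.volumes:

volumes:
  - name: config-volume
    configMap:
      name: app-config
      optional: true    # the pod still starts if app-config does not exist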
If you are using Helm, you need to ensure the ConfigMap template exists before the Deployment starts running. A lifecycle hook is usually the best way to achieve this.
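One hedged way to express such a hook in Helm is an annotation on the ConfigMap template, so the ConfigMap is created before the rest of the release; the resource name and value below are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    "helm.sh/hook": pre-install    # create this ConfigMap before the release's Deployments
data:
  database_host: "db.internal"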
Injecting ConfigMaps into Environment Variables
If your application reads system environment variables, you can populate them from ConfigMap data. Note that when ConfigMap data is consumed as environment variables, the variables are not refreshed if the ConfigMap is updated. You will need to restart the pod, either by deleting and recreating it or by initiating a Deployment update.
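A minimal sketch of both injection styles follows: pulling a single key into one variable via configMapKeyRef, and importing every key via envFrom (the pod name, image, ConfigMap name, and key are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: env-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["env"]
      env:
        - name: DATABASE_HOST        # one variable from one key
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database_host
      envFrom:
        - configMapRef:              # bulk import: every key becomes a variable
            name: app-config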
Any change to a ConfigMap requires updating the entire Deployment to ensure that the new configuration data will reach the pods. You can use a CI/CD pipeline to update the name property of ConfigMaps, and then change the reference in the Deployment. This will trigger a normal Kubernetes update, which will refresh ConfigMaps in the deployment. | <urn:uuid:b0c28f4f-c1d3-4941-aa31-56fd9ce4aa5d> | CC-MAIN-2022-40 | https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-configmap/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00107.warc.gz | en | 0.783922 | 1,220 | 3.21875 | 3 |
Tunable Optical Transceivers: What are they and when should they be used?
March 14, 2019
Tunable optical transceivers for DWDM (Dense Wavelength Division Multiplexing) systems, such as XFP and SFP+, have been widely available within the telecommunications industry for a number of years. In this article, we detail exactly what tunable optical transceivers are, how they work and when they should be used.
What is a tunable optical transceiver?
Tunable optical transceivers are similar in operation and appearance to fixed transceivers; however, they have the added capability of allowing you to set the channel (or colour) of the emitting laser. This reduces the need to install multiple devices within a network, each operating at a fixed wavelength. Instead, you have one transceiver that can be tuned according to the requirements of the operator.
Tunable transceivers are only available in DWDM form, because of the density of the DWDM wavelength grid. Typical tunable optics are designed for the C-band with 50GHz channel spacing. They support approximately 88 channels, set at 0.4nm intervals. They usually start from channel 16 and go up to channel 61, but this is dependent on the manufacturer of the router or switch and which channels it supports.
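To make the channel-to-wavelength arithmetic concrete, here is a short Python sketch based on the ITU-T G.694.1 50GHz grid, which anchors channel frequencies at 193.1THz. Vendors number channels in different ways, so the offset-based mapping below is an illustration rather than any particular manufacturer's scheme:

C = 299_792_458  # speed of light in vacuum, m/s

def dwdm_channel(n):
    """Return (frequency in THz, wavelength in nm) for offset n on the 50GHz grid."""
    freq_thz = 193.1 + n * 0.05               # ITU-T G.694.1 anchor plus n x 50GHz
    wavelength_nm = C / (freq_thz * 1e12) * 1e9
    return freq_thz, wavelength_nm

f0, w0 = dwdm_channel(0)
f1, w1 = dwdm_channel(1)
print(f"{f0:.2f} THz -> {w0:.2f} nm")         # 193.10 THz -> 1552.52 nm
print(f"{f1:.2f} THz -> {w1:.2f} nm")         # 193.15 THz -> 1552.12 nm

Note that adjacent 50GHz channels come out roughly 0.4nm apart in the C-band, matching the spacing quoted above.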
There are two main types of tunable transceivers:
Tunable XFP transceivers are designed with an integrated, full C-band tunable transmitter and a high-performance receiver. This means that wavelengths can be set to any supported channel in the 50GHz DWDM grid. With single mode fiber, XFP tunable transceivers can operate at distances up to 80km.
Depending on the manufacturer, the names for these products can vary even though they have the same operational features.
These optics can be tuned in different ways. Most devices allow tuning over the CLI (Command Line Interface), but not every switch or router supports this.
Tunable SFP+ transceivers are full-duplex, serial optical devices. The transmit and receive functions are contained within a single module, which provides a high-speed serial link at signalling rates from 9.95 to 11.3Gbps.
Again, these products can operate at distances of up to 80km with single mode fiber.
When should tunable optical transceivers be used?
Tunable transceivers are mostly kept as spare parts.
If, for example, you are running a large-scale DWDM network with nodes in many locations, using up to 80 different wavelengths (with 50GHz spacing), then with fixed transceivers you would need a few spares for each wavelength. This would mean a huge amount of stock and complex stock-management processes. With tunable optics, the number of spare devices is significantly reduced, lowering storage and management costs.
They are often more expensive than fixed transceivers but savings are made in large networks where tunable transceivers are used to replace multiple fixed wavelength products.
What are the benefits of tunable transceivers?
As technology has progressed, tunable transceivers have improved drastically. They are now very popular within DWDM transmission systems because of their capabilities and ease of use.
The key benefits are:
- Wide tuning range
- Suitable for 100G systems because of reduced line-width
- Convenience of wavelength adjustment depending on transmitting needs
- Reprogramming takes seconds
- Saves money in the long term
Tunable optical transceivers are able to operate at various wavelengths and adjust their wavelength according to each users’ needs. They are very popular in DWDM systems due to cost-saving factors and flexibility of use.
At Carritech, we stock and support a full range of optical transceiver products. To view our stock, learn more about our products or enquire about purchasing visit our optical transceiver page.
For more information on any of the information in the article, contact us at firstname.lastname@example.org or call +44 0203 005 1170.
Get all of our latest news sent to your inbox each month. | <urn:uuid:f7635941-2a9f-4e01-9cb9-f883b10b574a> | CC-MAIN-2022-40 | https://www.carritech.com/news/tunable-optical-transceivers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00107.warc.gz | en | 0.948297 | 880 | 2.53125 | 3 |
With Industrial IoT devices increasing exponentially, security has to be bolstered with DHCP, NAT, and other customized security protocols
Until recently, industry leaders believed that IoT sensors placed on an air-gapped network would automatically be shielded from security threats. Today, even as the devices become more abundant and cheaper to deploy and maintain, they are causing a plethora of security concerns.
The massive amount of IoT data sent to data stores in the public cloud is a security problem. As their numbers increase, especially in industrial environments, the devices are also becoming less personalized.
Security experts reckon that industrial IoT sensors now strongly require specialized routing. It would have the potential to provide segmentation along with security across the WAN or a public network.
However, it is necessary to segregate IoT devices from the rest of the traffic, because devices carrying proprietary data cannot be left at risk. While process-control traffic is crucial for maintaining operations, IoT traffic is mostly just data. Separating the two protects each of them, so that a problem with one device does not affect the other.
If left unseparated, IoT data mixed with other traffic in the network creates further security issues, and the customized security required to protect IoT traffic becomes nearly impossible to apply. Additionally, IoT sensors require specialized routing for traffic engineering, load balancing, and redundancy; these capabilities drive the efficiency, productivity, and uptime of the network. Meanwhile, several industrial applications add sensors to their process controls, so the network carrying the most sensors ends up controlling the most crucial processes.
IoT devices that require IP addresses acquire them using the Dynamic Host Configuration Protocol (DHCP). When there are too many devices, experts recommend edge routing equipment that isolates the IP address requests by handling them locally, combined with secure source Network Address Translation (NAT).
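As a hedged sketch of what source NAT might look like on a Linux-based edge router using nftables (the subnet and interface names are assumptions, not from the article):

# nftables sketch: masquerade IoT traffic leaving via the WAN uplink
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr 10.20.0.0/24 oif "wan0" masquerade    # hide sensor addresses behind the router
    }
}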
The devices can use NTP, DNS, or other network services to acquire information. These protocols are not secure by themselves, so the information they deliver should be protected; for this purpose, experts recommend secure relay services. Localized DNS resolution for IoT endpoints can also be beneficial.
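As a rough illustration of handling DHCP and DNS locally for an isolated IoT segment, a dnsmasq configuration might look like the sketch below; the interface name, address range, and local domain are all assumptions:

# dnsmasq sketch: answer DHCP and DNS locally on an IoT VLAN
interface=vlan20                        # IoT-facing interface (assumed name)
dhcp-range=10.20.0.50,10.20.0.150,12h   # local leases for the sensors (assumed range)
local=/iot.lan/                         # resolve this domain locally, never forward it
domain-needed                           # never forward bare hostnames upstream
bogus-priv                              # never forward reverse lookups for private ranges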
IoT devices should be the initiators of all communication, and they should otherwise be invisible and untraceable. Companies should use a router that understands the server communications to achieve this invisibility. However, some low-cost IoT devices are incapable of high-level encryption, and for them, high-grade router equipment can fill the gap: the system can authenticate and encrypt IoT data flows from the sensors to the data centers.
Certain IoT devices draw power from Ethernet switches. Experts recommend a single management control plane to manage these devices across Wi-Fi, wired, and secure edge routers. Other devices produce excess data, which calls for pre-processing; here, it is recommended to use smart edge routing that can host containers for data-processing applications.
Can AI be of any aid? When a company's strategy includes a massive IoT network, security leaders suggest using AI to automate maintenance and control. From bad cables to inoperative IoT sensors, AI technology can be part of the strategy to bridge the IoT security gap.
For more such updates follow us on Google News ITsecuritywire News. | <urn:uuid:ff915758-c500-4222-89be-6f2d75239727> | CC-MAIN-2022-40 | https://itsecuritywire.com/featured/tying-the-iot-security-loose-ends/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00107.warc.gz | en | 0.920703 | 662 | 3.078125 | 3 |
Let's go for a trek in the Beşparmak Mountains of western Turkey. This mountain range is in south-western Turkey, near the city of Aydın and fairly close to the ancient city of Ephesus and the modern town of Selçuk. People lived here in prehistoric times, around 6000 BC. A Greek settlement grew up along the lakeshore at its base, starting in the 400s BC. Its ruins can still be seen. The mountain ridges later became a Byzantine monastic center from the 600s through the 1300s AD, and you can trek up into the mountains and find ruins of those monasteries.
The Beşparmak Mountains form a mountain ridge with several spurs. In Turkish, it's Beşparmak Dağları or the Five-Finger Mountains. Yes, there are at least five major spurs.
Name and Geology
What's in a name? Beşparmak Dağları—is it one mountain or five? It depends on who you ask among the very few sources of information...
The main peak is labeled on some maps as Beşparmak Dağı, or Five-Finger Mountain, singular. But that name comes from the fact that from above it is shaped somewhat like a hand. There is one highest peak but five (or so) major ridge-like formations radiating out. From above you might say there is one, but from below it looks like five.
This is still a geologically active area. Anatolia (the landmass of today's Turkish Republic) and the Aegean Sea contain inactive volcanos and experience earthquakes, quite severe at times. This area of southwestern Turkey is close to the boundary between the Anatolian Plate and the Aegean Plate. The terrain is fairly rugged.
A river flows through this area into the Aegean, coming down from the east and the region of Denizli, the formations of Pamukkale, and the ancient cities of Hierapolis, Laodicea, and Aphrodisias.
This is the great Meander River, or Büyükmenderes Nehri in Turkish. You know that English word meander, don't you? This river's ancient Greek name Μαίανδρος is the source of that word, because it does a lot of twisting back and forth to cover the distance toward the coast.
Long ago, this mountain range was on the coastline of a gulf of the Aegean known to the ancient Greeks as the Latmian Gulf. This gulf once extended as far inland as today's city of Aydın. The gulf filled with sediment from the Meander River, forming a nearly flat flood plain just barely above sea level. All the streams, including the Meander River, well, they meander quite a bit as they flow toward the Aegean!
Here is a map of the area. Aydın is the city (the purple blob) at the northeast corner of the map. The Greek island of Samos lies just off the Turkish coast at the northwest corner of the map. The broad Meander River valley is the nearly white band running from east to west along the north edge, then turning southwest to the Aegean.
See the lake labeled Bafa Gölü near the center of the above map, along the south edge of the Meander River valley. Gölü is Turkish for lake, so this is also known as Lake Bafa. Below is a small region of that same map.
Notice the mountain peak directly east of Bafa Gölü, encircled by mountain roads joining the small villages of Çavdar, Kızılcabolük, Sakarkaya, and an unnamed village. That's the Beşparmak.
You can see that there is a central ridge running northwest to southeast, and at least five major ridges radiating out from there.
Note that a TPC (Tactical Pilotage Chart) can be rather generous about what it shows as a road. These roads can be seen from the air, and they're useful for navigation on low-level tactical air missions, but they are not necessarily easy driving. Click here if you want a higher-resolution version of the TPC.
We came in by van to the village of Çavdar, and trekked in from there. We came out near the east end of Bafa Gölü.
Here is another U.S. Government map showing a much larger area with far less detail. Notice that this map labels Bafa Gölü as Çamici Gölü.
Names change, and there are often multiple names for a place. The ancient Greeks called the mountain range Λάτμος or Latmos, and Strabo reported that it had been called Mount Phthires in the Catalogue of the Trojans, and that the western ridge was called Mount Latmos while the ridge on to the east was called Mount Grium in his time.
This map shows an ancient ruin named Herakleia at the east end of the lake. This was where we came out and met our van back to Selçuk.
On to the trek and the pictures!
Here we are, starting up the path into the mountains from near the village of Çavdar. We're carrying most of our stuff, plus we have one donkey. The folks at Globo Surf have a lot of great recommendations for outdoor gear; check them out.
We stopped along the way for a water-and-shade break. It's maybe 10 to 15 kilometers from the road to where we're going.
The area has a number of warm springs and vapors, showing the region's geological activity. About 170 rock paintings have been discovered in the area, in a site overlooking Lake Bafa. The earliest of these has been dated back to 6000 BC. The archaeologists' report is available in the original German. Right-click on that page within Chrome and ask Google to translate it.
This page has some nice pictures of the petroglyphs. They have been found around small caves and overhangs, mostly near springs and streams. They are done in a distinctive style, with the human heads shaped like a broad "M". People appear in most of the pictures, sometimes hunting animals and sometimes with other figures.
Those Bronze Age people apparently worshipped a dragon, the fiery embodiment of a deity living within the mountain.
Some of the Ancient Greek people coming into the area later may have found these paintings from the earlier inhabitants, and they may have syncretically picked up some of the really old-time religion.
Our Home in the Mountains
We have arrived at our home for the next few days! This is a cabin where Mahmet, our guide, has lived while keeping sheep. But he decided that it was much more interesting to lead treks than to be a shepherd.
You have your choice — sleep inside, or outside in a tent.
Once we're settled in, some relaxing with Mahmet's sister Anatay and much younger brother Basley. OK, maybe "relaxing" isn't the right word...
Continuing up the Mountain
We are leaving our camp and continuing up the mountain. The cabin is just below the center, maybe you can see the two brightly colored tents?
We have arrived at the monastery ruins!
The mountain was known as Λάτρος or Latros during Byzantine times, changing slightly from its earlier name of Λάτμος or Latmos.
Tradition holds that it was founded by monks fleeing Sinai during Muslim incursions in the 600s AD.
There were three monasteries here in the early 900s and eleven by 1222. It was an active monastic center with influence throughout the Byzantine Empire.
The monastery is higher than where we are camped. So it's a little cooler, slightly less dry, and there is some nice shade from evergreen trees.
Joseph the Hymnographer was a monk here in the 800s. He was one of the greatest liturgical poets and hymn writers of the Eastern Orthodox Church, and his feast day is celebrated on April 4th. He is credited with about 1,000 works, including most of the hymns in the liturgical book Paracletike. He was also an opponent of iconoclasm.
Joseph had been born in 810 in Sicily. He fled Sicily after his parents had died and then the Arabs were invading, became a monk in Thessalonica, and ended up here. He died in either 883 or 886.
Remember, this is a monastery. The design criteria may seem a little unusual.
So what if it's hard to get to a place? It was intended for quiet contemplation. Inaccessibility can be a good thing.
It's not completely inaccessible, there is a way up with just a bit of scrambling.
Here are some of the surviving frescos. This monastery operated until around 1100, so these are about 900 years old!
The monastic community began declining in the late 1200s because of increasing Turkish military advances into the region, and it was abandoned in the 1300s.
Here is another set of frescos. Some monastery structures were built by partially carving them into a cliff face and then building out from there.
The carved section, complete with its frescos, still survives in places, while the constructed section is gone. You see what appears to be an alcove of a chapel carved into a cliff face with a long drop below it. Rectangular holes that held the ends of wooden beams are below and beside the alcove. You can see where the other ends of the beams fit into a nearby rock outcropping, and you can imagine the structure that was once like a bridge or balcony.
Back to the Cabin
We're hanging out around the campfire after a day of exploration. We needed to bring in our food, the donkey helped with the carrying.
We carried water canteens or bottles for our use on the way in, and then filled large jugs at a nearby spring.
Mahmet and Basley, at center, talk with a couple of the "neighbors" who just strolled 5 km over the mountain in the dark to drop in and say hello.
Along the King's Highway
We're on the way out of the mountains, continuing to the south and leaving in a different direction than how we arrived.
These are the ruins of the King's Highway or the Royal Road, the Myra-to-Smyrna link of the all-weather trade route between Babylon and Constantinople.
Latmus or Latmos, Λάτμος to the locals, became part of the Delian League in the fifth century BC.
It was then captured by the Persian satrap Mausolus during his rule of 377-353 BC. He fortified the city with a wall, to prevent the next would-be conqueror from doing what he had done.
The settlement moved a kilometer to the west during Hellenistic Greek times, and took on the new name of Ἡράκλεια ὑπὸ Λάτμῳ or Herakleia under Latmos in dedication to the hero Herakles, whom we English speakers usually call Hercules. It was built to a more modern rectilinear grid system.
By that time the former seaport had been isolated from the sea by sediment.
The Royal Road was repaired under Justinian in the early 500s AD when it was already several centuries old.
We're getting closer — that's Bafa Gölü in the distance.
Bafa Gölü formed behind the marshes of the Meander estuary, and it is still joined to the Aegean by some irrigation canals. It would be fresh water now, but the canals make it somewhat brackish. The lake covers about 7 square kilometers, and it has been made into a bird sanctuary.
We have reached the shore of Bafa Gölü, where you can visit the ruins of ancient Herakleia. That's another monastery on that small island.
The modern small village of Kapıkırı is built in the middle of the ruins.
Just to the south, on a small rise, you can see the temenos or Sanctuary of Endymion, originally dating back to pre-Greek times but rebuilt in Hellenistic times. Endymion was a figure in early Greek mythology, a shepherd who loved Selene, the Titan goddess of the moon. Zeus made him sleep forever so he would not age and Selene could look at him every night. The story was that his coffin (he eventually died, it seems) was opened annually and his bones emitted musical tones.
People wanted to witness that, so his tomb became a pilgrimage site. He was retroactively declared a Christian saint, despite not even being a monotheist, not really existing, and having supposedly existed long before Christ.
Well, he was too prominent a part of the local religious scene to leave alone, so they said he was a mystic Christian saint. Visit Herakleia, see the musical skeleton.
There aren't many pages mentioning this. Some of the few include Eric and Sylvia's site. This couple stayed at Bafa Lake, and got some very nice pictures of the ruins. They refer to this as the "Seven Brothers Monastery".
It seems that the area has not been studied as much as you might expect. The few archaeologists who have done some exploration seem to agree that the region was probably filled with ruins and only a small percentage are now known to anyone other than the local shepherds. | <urn:uuid:eaf066a5-b8d7-4e7e-b2a0-b7772e7a9576> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/turkey/mountain-trek/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00308.warc.gz | en | 0.976647 | 2,901 | 3.5625 | 4 |